I was just thinking about what role Artificial Intelligence might play in the future of audio. Will it allow better component matching, better recording, better playback? As yet, I have not seen any components with AI, but I wonder if anything is in the works. I wonder if Paul has given AI any consideration.
I was also thinking about that in my high-end and innovation thread, combined with what’s possible in the pure digital domain from source to speakers. I wondered if PSA would research that area at some point, as they did with Class D. But pure digital is probably still too lossy.
I can imagine AI helping customers a lot in terms of making music sound the way someone likes it. OK, that’s not what artists, recording and mastering engineers etc. really appreciate… but imagine being able, for the first time, to make all that extremely badly recorded and mastered great music from the 70s/80s sound like real music to you.
AI as currently “accepted” is generally applied to decision-making problems, generally requires large learning databases, and is often accompanied by large computing resources.
I can’t really see where in the hi-fi space there is a lot of opportunity for AI.
Producers and engineers have had a lot of automation in their recording systems for years, but many still prefer a mixture of manual and automated systems.
Quantifying music presentation is very hard - everyone hears slightly differently and even more so when their eyes are open.
That suggests to me there is scope for more automated integration of real-time room analysis and correction, but whether that requires AI or simply better/faster/more transparent hardware and software is not clear.
Making music recordings more “appealing” to an individual can be undertaken now with a multi-band equaliser, a DEQX, or other electronic DSP systems.
Are you thinking of an algorithm (or box) that analyses a recording’s spectral and channel balances/differences, with the aim of getting them closer to an individual’s “pre-programmed” preference? Doing this on the fly?
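A minimal sketch of what such a box might do, assuming the simplest possible approach (the band edges, target curve, and test signal below are all invented for illustration): measure the recording’s energy in a few coarse bands, compare the shape against the listener’s stored preference curve, and emit per-band gain corrections.

```python
import numpy as np

def band_energies(signal, rate, band_edges):
    """Mean power (dB) of a mono signal in each frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    energies = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        power = spectrum[mask].mean() if mask.any() else 0.0
        energies.append(10 * np.log10(power + 1e-12))
    return np.array(energies)

def correction_gains(signal, rate, target_db, band_edges):
    """Per-band gain (dB) pulling the recording toward the listener's target curve."""
    measured = band_energies(signal, rate, band_edges)
    # Normalise both curves so only the spectral *shape* matters, not level.
    measured -= measured.mean()
    target = np.asarray(target_db, float) - np.mean(target_db)
    return target - measured

# Example: a bass-heavy recording vs. a flat listener preference.
rate = 44100
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 100 * t) + 0.05 * rng.standard_normal(rate)
edges = [20, 250, 2000, 20000]          # three coarse bands (illustrative)
gains = correction_gains(signal, rate, [0.0, 0.0, 0.0], edges)
print(gains)  # bass band is cut, upper bands are boosted
```

A real implementation would of course need many more bands, smoothed analysis windows, and some restraint in how hard it pulls toward the target, but the matching logic itself is this simple.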
As much as AI seems to be everywhere, it’s not. AI is able to learn and make its own decisions. 99.999% of what today is touted as AI is nothing more than pre-constructed algorithms. Don’t let the advertising and corporate sales-speak fool you. AI is currently limited to a handful of public corporations who live on the cutting edge and have massive research budgets (Google for example), and government sponsored tech labs.
As someone who has worked exclusively in AI for the past 4 years I can say this statement is pretty accurate.
Yup. I’m a software engineer. The difference between what gets produced and what it’s sold as is mind-boggling. Snake oil is like 50% of every product.
Just brainstorming, but perhaps AI (or simply an SVD) could be used to analyze various fora and come up with suggestions about what to work on in one’s system, or perhaps even answer some questions. Years ago I really liked AudioAsylum: one of its strengths was that many members posted their systems, so over time you could see how your system and preferences correlated with others’, which helped narrow down who might have the most relevant advice for you.
In that sense AI might be able to help with component selection or system debugging.
I’m not as optimistic about using AI to help with tuning playback of, say, less than ideal recordings. Simple collaborative filtering might be more suited for that: building a database of suggestions or DSP parameters for individual tracks or discs.
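The collaborative-filtering idea above could be sketched roughly like this (everything here is hypothetical: the listener names, the tracks, and the idea of storing a per-track treble setting are all made up for illustration). Each listener records their preferred DSP tweak for tracks they have tuned; a missing setting is predicted as a similarity-weighted average of what like-minded listeners chose.

```python
import numpy as np

# Hypothetical database: each listener's preferred treble adjustment (dB)
# per track. NaN = listener has not tuned that track.
tracks = ["track_a", "track_b", "track_c", "track_d"]
settings = {
    "alice": [ 3.0,  2.5, np.nan,  4.0],
    "bob":   [ 3.5,  2.0,  1.0,  np.nan],
    "carol": [-2.0, -1.5, -3.0,  -2.5],
}

def cosine(u, v):
    """Cosine similarity over the tracks both listeners have tuned."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict(user, track_idx):
    """Similarity-weighted average of other listeners' settings for a track."""
    u = np.asarray(settings[user], float)
    num = den = 0.0
    for other, row in settings.items():
        v = np.asarray(row, float)
        if other == user or np.isnan(v[track_idx]):
            continue
        w = cosine(u, v)
        if w > 0:                      # only borrow from similar listeners
            num += w * v[track_idx]
            den += w
    return num / den if den else None

print(predict("alice", 2))  # alice's missing track_c setting, borrowed from bob
```

Since carol’s tastes run opposite to alice’s, her similarity is negative and her setting is ignored; the prediction comes entirely from bob. No learning in the AI sense is involved, which is rather the point: this kind of per-track suggestion database needs only bookkeeping and a similarity measure.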
My immediate reaction was that AI could usefully be applied to the construction of suggested personal playlists by streaming services. Not quite the esoteric thinking expressed so far in this thread.
I hope it’s better than the “AI” used by Amazon to suggest what I might like to buy next…
That’s not AI, it’s marketing.
I believe some of them do this already, Tidal for instance. The algorithms used seem pretty basic or poorly trained, though.