"The Irony of Digital Audio"

I posted a similar topic on the Asylum, and it produced/provoked interesting responses and comments. What do YOU think?

The other day some colleagues and I were discussing how, back in the day, the acceptance of consumer digital audio was facilitated by the fact that compared to LP playback, CD playback was tweak-free and didn’t require the care and rituals associated with record-playing.

We quickly learned that “indestructible” CDs could be made unreadable through rough handling, just like records. And we learned that read-errors and jitter abounded, and…

Decades later, where are we? While digital in its simplest form can still be straightforward and tweakless, we've found that, for a variety of reasons, digital audio can be enhanced/tweaked/improved at every step in the playback process: optical tweaks and read-until-right for CD, power supply and storage enhancements for files and streaming, USB and Ethernet cables that can in fact make a major difference, and on and on.

Just look at the amount of column-inches on this page devoted to Regen, Jitterbug, and so on. I’ve heard experiments with similar but more elaborate devices, and even on an excellent system (in PS Audio’s Sound Room 1), the effect has been profound.

So: that simple, set-and-forget digital sound is proving to be every bit as fussy as analog sound. Bits are bits, kinda-sorta, most of the time. But we tend to forget about the "A" in DAC, the analog end of things, and that's one area (of many) in which huge improvements can be made.

So: do you think digital audio can ever be perfected to the point where we're all done and no further improvement can be made? Or will we continue to discover newer and even fussier elements that require massaging?

I tend toward the latter viewpoint, but would be interested in hearing the thoughts of others.