thing is, for some, consistency is vital (i count myself in this group).
e.g. with vinyl there were too many variables that could subtly (or grossly) affect the quality of reproduction, and it became tiring listening out for them.
hence digital was a boon and something i adopted early, precisely because of the multiple variables and issues with vinyl (and the drop-off in pressing quality throughout the 70s).
i am reminded of ISO 9000 and the redefinition of quality as consistency (which was itself an echo of the move to mass production with standardised parts earlier in the 20th(?) century).
I know JA wants to keep things "consistent" in his measuring methodology, but ignoring the realities of more modern design implementations at the expense of accurately portraying their technical performance is, to me, hard to justify.
He was (and is) still measuring jitter on Asynchronous Sample Rate Conversion (ASRC) DACs with a test designed to measure only the jitter generated by the DAC proper. But ASRC irretrievably encodes any incoming jitter from the digital inputs into the audio before it reaches the DAC proper, so incoming jitter isn't picked up by the measurement. The ASRC sound is better than non-ASRC DACs to some ears and worse to others. I've talked to him about measuring the wrong thing there too, and he is unapologetic about using a consistent method across all DACs.
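For anyone who wants to see what "irretrievably encoded" means in practice, here's a minimal toy sketch in Python. This is my own illustration, not JA's actual test or any real ASRC implementation: linear interpolation stands in for a proper polyphase resampler, and the jitter level is invented for the demo.

```python
import numpy as np

# Toy model: a 1 kHz tone is captured on a jittered 44.1 kHz input clock,
# then resampled to 48 kHz by an "ASRC" running on a perfectly clean
# local clock.
rng = np.random.default_rng(0)
fs_in, fs_out, f0, n = 44_100, 48_000, 1_000, 1 << 14

t_in = np.arange(n) / fs_in                      # nominal input sample instants
jitter = rng.normal(0.0, 1e-8, n)                # ~10 ns RMS input clock jitter
x_jit = np.sin(2 * np.pi * f0 * (t_in + jitter)) # values taken at jittered times
x_ref = np.sin(2 * np.pi * f0 * t_in)            # same capture, perfect clock

# The ASRC only sees sample values and must assume they sit on the nominal
# grid, so each timing error has already become an amplitude error.
# Linear interpolation stands in for a real polyphase/sinc resampler.
t_out = np.arange(int(n * fs_out / fs_in)) / fs_out
y_jit = np.interp(t_out, t_in, x_jit)
y_ref = np.interp(t_out, t_in, x_ref)

# The difference between the two outputs isolates what the input jitter did.
# It now rides on a clean output clock, baked into the samples themselves,
# so a jitter test aimed at the DAC's own clock will never see it.
residual = y_jit - y_ref
print(f"RMS error encoded by input jitter: {np.sqrt(np.mean(residual**2)):.2e}")
```

The point of the sketch: after the resample, the timing error exists only as sample-value error on a clean clock, which is exactly why a DAC-clock jitter measurement can come back spotless while incoming jitter is still audible in the output.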
Other DSD-based DACs with great sound-quality reputations, e.g. the Playback Designs units, also measure poorly with JA's methods.
If JA, or anyone else for that matter, isn't regularly re-evaluating his methods in an advancing world, he will be running behind at some moment in time. Perhaps that moment has come.
That said, the MK2 won't be the only device that falls victim. Maybe the first, certainly not the last. Can we already recognize a pattern here and substantiate it?
Next would be what to do about it.
Edit: I believe Ted just responded to quite a bit of the above.
Your comments reinforce my thoughts regarding component reviews in general. Nice example of a reviewer/tester bias. IME, at best a solid review can lead one to a component worth evaluating in one's system prior to a financial commitment.
I get a kick out of JA stating that with some of the DS Mk I software releases he couldn't measure any differences at all, but he could clearly hear, for example, a blacker background.
What Ted mentioned about Playback Designs and Stereophile is true. Here's a review of their previous generation dac/sacd player where the measurement comments weren't great.
IME measurements are a great tool for validating design considerations and effective implementation thereof. Sonic qualities tend to go well beyond what can be measured, assuming the measurements are valid in the first place. I'm not anti-measurement, but I attempt to come to terms with the limitations. For Stereophile, measurements differentiate them from the pack and thus sell subscriptions.
If I still expected much of magazine reviews or measurements at all, it would be that they at least knew about and mentioned their limitations and shortcomings. Too high an expectation once more.
JA seemed to be saying: it measures poorly, is mediocre on the audiophile checklist, but he really enjoyed listening to it… if I was in the market, and not really familiar with PS, I would see it as a pretty negative review…
The days of overtly bad-mouthing a product, like the old TAS reviews by HP, are over. Instead they just end a review like John Atkinson did with the MK II review. Last sentence: "This is a product would-be owners need to audition in their own systems before purchase." That's pretty scathing.
I just read it again. It's pretty amazing what people do with these measurements. He says it measures badly but sounds great, and basically says buyer beware. Now that I know he KNEW why it measured badly, since Ted had told him, that is a really crappy thing to do in a review.