Accuracy of digital data: USB vs. Toslink vs. Coax

There is a mental picture that I use to understand how digital distortion works and where best to spend my money and effort. Maybe it helps others as well, or maybe others can help upgrade or correct this picture.

So, consider a perfect sine wave. Now turn it into a perfect set of samples, using any sampling rate you like. Everybody will agree that in order to turn it back into the perfect sine wave, each of those perfect samples must be applied at the perfect point in time. In reality this does not happen. Imagine the picture of the sine wave with each perfect sample a bit off on the time axis. The resulting signal is distorted. Furthermore, the distortions are not just simple harmonics (as in analog), but any kind of nastiness (well, I’m guessing here). This is possibly one reason for the digital-vs.-analog debate. And it should also explain why you will not hear a difference by just turning up the volume with nothing playing and comparing the hiss you get: applying arbitrarily small samples at arbitrary times just does not sound any different.
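If you like to see this with numbers, here’s a tiny Python sketch of the idea (a toy model of my own, with an assumed 48 kHz rate and 1 ns of timing jitter, nothing to do with any particular DAC): it takes perfect samples of a perfect sine and asks how far off the result is when those perfect values are applied at slightly wrong instants.

```python
import numpy as np

fs = 48_000          # assumed sample rate in Hz
f = 1_000            # assumed test tone frequency in Hz
n = np.arange(4096)
ideal_times = n / fs

# Perfect samples of a perfect sine
samples = np.sin(2 * np.pi * f * ideal_times)

# Playback instants that are slightly off (e.g. 1 ns RMS of timing jitter)
rng = np.random.default_rng(0)
jitter = rng.normal(0.0, 1e-9, size=n.size)
actual_times = ideal_times + jitter

# What the analog output should have been at those jittered instants
intended = np.sin(2 * np.pi * f * actual_times)

# Error: the perfect sample value applied at the wrong moment
error = samples - intended
print(f"RMS error re full scale: {20 * np.log10(np.sqrt(np.mean(error**2))):.1f} dB")
```

With these made-up numbers the error sits around -107 dB relative to full scale; scale the jitter up and it rises accordingly, and with real music it is no longer a simple harmonic residue.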

The problem is that many (most?) DACs derive the timing of the samples at their analog output from the jittery timing of the incoming digital data. One reason why better digital cables, streamers, DDCs, reclockers, jitterbugs, etc. help is that they reduce the timing error of the digital data that finally arrives at the DAC, helping the DAC apply each sample at a more nearly perfect point in time.

Another strategy, from a system point of view, is to design a DAC that just receives the bit-perfect audio data, stores it, and does everything possible to generate its own clock with perfect timing for the samples at the analog output. This is what the more expensive DACs do. The more expensive, the more perfect the timing of the samples. I even believe the DAC is the real place where it counts. Getting perfect timing anywhere in between in your chain is nice to have, but if you dilute it again on its remaining way to the analog output, you don’t get the optimum out of your chain.
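As a purely conceptual sketch of that “store the bit-perfect data, then clock it out locally” strategy (my own toy model, not how any actual DAC firmware looks):

```python
from collections import deque

class BufferedDAC:
    """Toy model: samples arrive with jittery timing, but are clocked out
    of a FIFO at instants defined purely by the DAC's own local clock."""

    def __init__(self, sample_rate: float):
        self.fifo = deque()
        self.period = 1.0 / sample_rate   # local clock period, independent of the input

    def receive(self, sample: float) -> None:
        # The arrival time does not matter; only the bit-perfect value is kept.
        self.fifo.append(sample)

    def output_schedule(self, start_time: float):
        # Each buffered sample gets its output instant from the local clock alone.
        for i, sample in enumerate(self.fifo):
            yield start_time + i * self.period, sample

dac = BufferedDAC(sample_rate=48_000)
for s in (0.0, 0.5, 1.0, 0.5):
    dac.receive(s)                        # however jittery the arrivals were
print(list(dac.output_schedule(start_time=0.0)))
```

The only point of the sketch is that the output instants come from the DAC’s own clock period, not from when the data happened to arrive.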

Gosh, that’s already longer than I thought it would become…

Local clocking, by Ted’s fab algorithms on standard real-time streams (on any of the inputs, which is one of the strengths of his design), is the way. But also, in theory, using USB to transfer the data over to any DAC allows the DAC to be the final stage that assembles the real-time stream, and therefore allows local (to the DAC) clocking on any DAC.

A USB connection is not a real-time stream, so jitter will not affect the DACing.
Within limits, obviously: if the USB bus stutters enough that the DAC doesn’t receive the data it needs in time, then the real-time stream the DAC is generating in its USB input stage will glitch, and of course a stuttery USB stream may affect the DAC circuitry in other ways.

Of course you are also in to reducing the noise and garbage that comes in over USB.

Hey, I really like this explanation. Even a dumb creative type like me can understand it. Makes a lot of sense. The only question I have is: IF the timing on the sine wave is adjusted by the cable, how does the DAC know where it was supposed to be in the first place? Do the higher-end DACs go out and find the piece of music and compare it to what they’re receiving - like “HEY, that bass line in Ted Nugent’s “Wang Dang Sweet Poontang” is wrong - better fix it pronto.” Well, I’m joking a little, but really, how does the DAC know how to put it aright?

Speaking of which, have you ever heard Gordon Goodwin’s Big Phat Band recording of “Hunting Wabbits 2 (A Bad Hare Day)” from the 2006 album The Phat Pack? Just heard it last night and was once again amazed at not only how well it’s crafted, but how great the recording is. Something happened that I’m not sure ever has - in the first few bars, there’s a triangle (instrument) being touched way in the background. At first I worried because it was quieter - even harder to hear - than before - is there something wrong with the tweeter? Why can I hardly hear it? Then it occurred to me - I CAN hear it, but maybe I’ve always heard it too loud before, that it’s SUPPOSED to be quieter? Does this happen?

So wait, are you saying that the computer creates an analogue out of digital - that it’s not really 0 and 1 pulses?

Yes, that is what I am saying.

No, they don’t need to. Today’s transmission of digital data guarantees the receiver receives exactly the data that the sender was sending. Depending on the connections you use, there exist different strategies. In the likes of coax, XLR, I2S, … you read a 0 if the measured voltage is below a given threshold at a given point in time, or a 1 if it is above. The margins of error, both on the voltage axis and on the time axis, are so large that virtually no error can happen. You’d need lightning striking your power line, a really crappy DIY cable with no shielding at all in a very noisy environment, or something similar.
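Just to make that threshold decision concrete, here’s a toy Python sketch (the 3.3 V levels, the 1.65 V threshold and the noise level are made-up numbers, not any real S/PDIF or I2S spec): even with a generous amount of noise on the line, the decision margin is so big that the bits come out right.

```python
import random

THRESHOLD = 1.65   # made-up decision threshold in volts for a made-up 3.3 V signal
HIGH, LOW = 3.3, 0.0

def read_bit(voltage: float) -> int:
    """Receiver rule: 0 if the sampled voltage is below the threshold, 1 if above."""
    return 1 if voltage > THRESHOLD else 0

random.seed(1)
bits = [random.randint(0, 1) for _ in range(100_000)]
errors = 0
for b in bits:
    sent = HIGH if b else LOW
    # Add a generous amount of noise; the decision margin is still far bigger
    received = sent + random.gauss(0.0, 0.2)
    if read_bit(received) != b:
        errors += 1
print(f"bit errors: {errors} out of {len(bits)}")
```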

If data has to travel longer distances, like over your LAN, you add protocols that detect and correct transmission errors. The data is divided into small packets, each one handled individually. For instance, let’s assume you want to transmit 8 bits of data. You feed those 8 bits (aka the payload) into a well-designed and agreed-upon algorithm that calculates a checksum, let’s say a 2-bit checksum. “Well designed” in this context means that if any of the most likely kinds of damage happens to the payload, the algorithm yields a different checksum. The sender now sends a 10-bit packet consisting of the 8-bit payload and the 2-bit checksum. The receiver then takes the 8 bits and feeds them into the same algorithm to calculate the checksum again. If this checksum equals the checksum in the received packet, the payload is accepted. If not, it is discarded and the sender is asked to retransmit. These are only examples; there exist many variations, of course.
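To make the checksum-and-retransmit idea concrete, here is a deliberately simplified Python sketch (a toy 2-bit checksum of my own invention; real protocols use much stronger checks like CRCs, but the shape of the exchange is the same):

```python
def checksum2(payload: int) -> int:
    """Toy 2-bit checksum over an 8-bit payload: number of set bits, modulo 4."""
    return bin(payload & 0xFF).count("1") % 4

def make_packet(payload: int) -> int:
    """Sender: 10-bit packet = 8-bit payload followed by 2-bit checksum."""
    return (payload << 2) | checksum2(payload)

def receive(packet: int) -> int | None:
    """Receiver: recompute the checksum; accept the payload or ask for a resend."""
    payload, received_sum = packet >> 2, packet & 0b11
    return payload if checksum2(payload) == received_sum else None

# A clean transmission is accepted...
assert receive(make_packet(0b10110001)) == 0b10110001

# ...while a single payload bit flipped in transit changes the checksum and is
# rejected, so the receiver would request a retransmission.
damaged = make_packet(0b10110001) ^ (1 << 5)
assert receive(damaged) is None
print("corrupted packet detected, retransmission requested")
```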

Being the creative mind you are, you may ask: what if the error in the 8-bit payload is such that you get the same 2-bit checksum, so that you accept a corrupted payload; what if, … And you are right. You may receive a wrong bit from time to time. The strategies mentioned above, however, get the likelihood of wrong bits down to once a day, a week, whatever. This is what your engineering buddy at Siemens confirmed, too. If you are looking for effects that alter how your DAC sounds, flipped bits are not the reason (remember, once a day). If you have a flipped bit in PCM, you might get a very short spike (one wrong sample), manifesting as a pop at the speakers. In DSD, where each bit has the same tiny significance, you shouldn’t even notice.
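To put a rough number on how big such a one-sample spike can be, here is a little arithmetic sketch for 16-bit PCM (just math, nothing DAC-specific): flipping a low-order bit is buried way down, while flipping a high-order bit is a loud pop.

```python
import math

# Size of the one-sample error caused by a single flipped bit in a 16-bit PCM word.
FULL_SCALE = 2 ** 15   # 16-bit signed PCM full scale

for bit in (0, 7, 14):  # least significant, a middle bit, the bit just below the sign
    step = 2 ** bit
    level_db = 20 * math.log10(step / FULL_SCALE)
    print(f"flipping bit {bit:2d}: error of {step:6d} LSBs ({level_db:6.1f} dBFS)")
```

That prints roughly -90 dBFS for the lowest bit and about -6 dBFS for the bit just below the sign, so one flipped sample ranges from inaudible to a distinct pop.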

Coming back to your question, ensuring the perfect transmission of the digital information is not a property restricted to expensive DACs. Any cheap piece of consumer electronics you can buy on Amazon or wherever will achieve that. This is no rocket science at all. I hope that has become clear now. What the more expensive DACs do is accept only the bit-perfect data, ignore the incoming clocks, and generate their own very precise clock with which to clock the samples out on the analog side, hopefully adding the perfect timing as well. But that’s the point of a separate question in your post, and it deserves a separate answer.

?

Hah, bullshit of course :grin: Read 0 if below and 1 if above threshold. Need to edit that.

And they might be better (or worse) at rejecting the noise that comes along the electrical connection :slight_smile:
Not high enough to affect the bit recognition (lightning aside), but maybe enough to cause problems elsewhere in the DAC circuitry.

If only we had a high-speed, low-cost optical IC (i.e. better than Toslink)!

Edit - a proper AES/EBU connection can help here too, of course, rejecting common-mode noise on the line.

Wow, hadn’t visited here for a while, but I think you just explained “error correction” to me, which is what I thought - the sender sends the data, plus info on how the data is supposed to be received, and if it’s received correctly, it passes it on. If not, it says “uh, send that again,” and it’s resent. You’re right that this is what my Siemens friend said about our products. So now, either my new better USB cable is doing something else other than digital, or I’m inventing the better sound. Either way, I’m happier.