DirectStream and Sound Quality

@tedsmith

Technically speaking, what elements of a signal fed to the DirectStream DAC will affect sound quality? Let’s assume that the DAC is being fed a bit perfect signal whether it is PCM or DSD. Let’s also assume we are using I2S or USB and no Bridge card is installed. Let’s ignore external issues for the sake of this conversation.

I am going to assume that elements will include electrical noise on the ground line, electrical noise on the power line, electrical noise on the send and receive lines, and jitter. I am sure I am forgetting something so please add it to the list.

Also, how does the “Digital Lens” technology affect the DirectStream? What does it do? Does it just reduce jitter? How susceptible is the DirectStream to jitter, and how much does the “Digital Lens” help?

Thanks!

IIRC the digital lens is just a RAM buffer.

I suspect that the advantage of the digital lens when used with the DS is to lower conducted and radiated noise from the source. Functionally it’s redundant with the FPGA: they both buffer the data and use a quality clock to clock the data back out of the buffer.
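
As a rough picture of that idea, here is a minimal sketch of a generic reclocking FIFO (nothing to do with the actual Digital Lens or DS firmware, and all the timing numbers are made up): samples arrive on the source’s jittery timing, sit in a RAM buffer, and are read back out on a clean local clock, so the output timing no longer depends on the input timing.

```python
from collections import deque
import random

class ReclockingBuffer:
    """Toy model: write on the source's (jittery) clock, read on a clean clock."""

    def __init__(self, depth):
        self.fifo = deque(maxlen=depth)   # the RAM buffer

    def write(self, sample):
        # called whenever the source happens to deliver a sample
        self.fifo.append(sample)

    def read(self):
        # called once per tick of the clean local clock
        return self.fifo.popleft() if self.fifo else 0   # underrun -> silence

random.seed(0)
buf = ReclockingBuffer(depth=64)
arrival_times = []
t = 0.0
for n in range(32):
    t += 1.0 + random.uniform(-0.2, 0.2)   # source delivers with +/-20% jitter
    arrival_times.append(t)
    buf.write(n)

# The reader just ticks at exactly 1.0; output order and spacing no longer
# depend on the messy input timing.
outputs = [buf.read() for _ in range(32)]
spacings = [round(b - a, 2) for a, b in zip(arrival_times, arrival_times[1:])]
print("jittery input spacings:", spacings[:6])
print("samples read out on the clean clock:", outputs[:8])
```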

Ground loops are one of the items that affect any system, even with a perfect DAC (or whatever) and no jitter… Ground loops have currents induced in them by any net flux thru the loop. Except for 60Hz and 120Hz, most of that noise is high frequency, and any such noise can be modulated down into the audio band by any non-linearities in the system (of which there are many.)
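
To make the “modulated down into the audio band” point concrete, here’s a toy numerical example (my own sketch, nothing DS-specific, with invented frequencies and coefficients): two noise tones around 1 MHz pass through a small second-order non-linearity, and their difference frequency lands at 1 kHz, squarely in the audio band.

```python
import numpy as np

fs = 10_000_000                       # 10 MHz sample rate for the toy model
n = 100_000                           # 10 ms record -> 100 Hz per FFT bin
t = np.arange(n) / fs
hf_noise = 0.01 * (np.sin(2 * np.pi * 1_000_000 * t) +
                   np.sin(2 * np.pi * 1_001_000 * t))   # 1.000 MHz and 1.001 MHz

linear = hf_noise                                # a perfectly linear path
nonlinear = hf_noise + 0.1 * hf_noise ** 2       # a small 2nd-order non-linearity

def level_at(signal, freq):
    spectrum = np.abs(np.fft.rfft(signal)) / signal.size
    bin_ = int(round(freq * signal.size / fs))   # frequency -> FFT bin
    return 20 * np.log10(spectrum[bin_] + 1e-15)

print("1 kHz product, linear path:     %.1f dB" % level_at(linear, 1000))
print("1 kHz product, non-linear path: %.1f dB" % level_at(nonlinear, 1000))
```

The linear path shows nothing at 1 kHz, while the non-linear path shows a clear difference-frequency product there, even though neither original tone is anywhere near the audio band.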

3 Likes

@tedsmith

So, if I can get an I2S or USB connection to the DirectStream DAC with a very clean ground with no loops, along with clean DC power, the sound quality would not be different from I2S/USB source to I2S/USB source as long as the DSD and PCM got to the DirectStream in bit perfect form. In other words, it would not matter if that I2S/USB source received the bit perfect music information over Ethernet or was a CD transport, as long as there were no ground loops or current loops and the DC power was clean. Correct?

Don’t know how relevant this is, but I believe the USB input on the input board of the DS goes through an XMOS unit (don’t know which one off-hand), which translates the USB to i2s.

This would be the same as the Matrix unit mentioned here quite often, which, I believe, uses the XMOS U208 for USB-to-i2s translation. That in turn goes through another chip (I don’t know which LVDS transmitter chip it uses) that pushes it out as i2s over LVDS (since ‘regular’ i2s is not meant to travel long distances, such as over an HDMI cable).

With the i2s input of the DS, the signal goes through an LVDS receiver chip (don’t know what brand) to ‘retranslate’ it back to ‘normal’ i2s.

So if you go computer > USB > DS you are actually removing the extra steps of i2s to LVDS transmitter to LVDS receiver to i2s that have to occur when you use something like the Matrix (plus the extra HDMI cable).

I’m NOT saying straight USB sounds/is better, but the chain is shorter. Other issues could be producing the perceived differences (when people claim the Matrix sounds better), such as ground isolation, power supply, etc. In addition, it COULD be a different implementation, or even different chips. For example, the U208 in the Matrix is not even the current ‘hip’ model; I believe the current XMOS debutante is the U216.
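
Roughly, the two chains being compared look like this (stage labels are my own guesses, not from any PS Audio or Matrix documentation):

```python
# Rough sketch of the two signal chains described above.
usb_direct = [
    "computer USB host",
    "DS input board: XMOS USB-to-i2s",
    "DS FPGA",
]
via_matrix = [
    "computer USB host",
    "Matrix: XMOS USB-to-i2s",
    "Matrix: i2s-to-LVDS transmitter",
    "HDMI cable carrying i2s over LVDS",
    "DS i2s input: LVDS receiver back to i2s",
    "DS FPGA",
]
for name, chain in (("USB direct", usb_direct), ("via Matrix", via_matrix)):
    print(f"{name}: {len(chain)} stages: " + " > ".join(chain))
```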

If you have a PWT, DMP, Matrix, etc. you can connect multiple inputs from it to the DS and then select between them with the remote. You’ll hear little or no difference between the various inputs because the DS rejects almost all of the jitter and all of the inputs share the same ground-loop issues.

In the DS the XMOS chip (for USB) and the ConversDigital SoC (for the Bridge) generate a lot of noise, conducted and radiated. So they are both at a disadvantage compared to the other inputs.

In general ground loops are very hard to get rid of: the safety ground on most equipment is connected to the case and, most often, to the signal grounds of the device. Few devices have galvanically isolated inputs and outputs.

Another way USB and the Bridge are at a disadvantage is that they are often directly connected to a computer that’s not next to the DAC, which makes the ground-loop areas bigger, on top of all the noise from the computer itself…

The LVDS transmitter and receiver chips don’t add any significant noise or jitter; they don’t have clocks, so they act mostly like any other logic chips that might be in the system (e.g. multiplexers, AND gates, etc.) Any USB connection is orders of magnitude worse in every respect than the LVDS translators.

The DS uses a fairly stale XMOS USB chip, but newer chips may not be better: they have more cores (which means more noise), and some have the USB PHY in the XMOS chip (probably good) while others require an external USB PHY… The biggest difference in XMOS USB setups is probably their power supplies: some have built-in switching regulators (bad because you can’t use a linear regulator instead, good because there’s less chance of radiated switching noise since the regulator is much closer to the silicon it powers.) So using power-wasting linear regulators on an older XMOS chip might be the lowest noise implementation.

3 Likes

@tedsmith

I guess what I am getting at is that the analog signal coming out of the DirectStream is going to be about the same regardless of what feeds it, as long as the signals are equally clean and the data is the same. It won’t matter whether the device feeding the DirectStream got the data locally off an SSD or remotely via Ethernet, regardless of the protocol used. The key is the music data being the same and the signaling being equally clean. If those conditions are true, the sound quality should be the same. Correct?

Yes, but how hard it is to get a clean signal is often underestimated. In a past company we could easily hear the difference between using a fast-seeking SCSI drive and a slower-seeking SCSI drive. To go fast takes more instantaneous current, which ripples thru the PC’s power supply into the system at large. I used to judge the quality of a custom PC for playing audio by how different things sounded when the system was also ripping a CD. Both of these would be much less of a problem with the DS, but there are certainly systems where someone could hear the difference.
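
A crude back-of-the-envelope model of the seek example (all the currents, rates, and the 0.05 ohm supply impedance are invented for illustration): bigger, more frequent current bursts produce more ripple across the supply’s output impedance, and that ripple is what leaks into the rest of the system.

```python
import numpy as np

fs = 100_000                          # samples per second in this toy model
n = 10_000                            # 0.1 s of simulated "current draw"

def seek_current(step_amps, seeks_per_sec):
    i = np.full(n, 0.5)               # 0.5 A idle draw (assumed)
    period = fs // seeks_per_sec      # samples between seeks
    for start in range(0, n, period):
        i[start:start + period // 4] += step_amps   # current burst per seek
    return i

def rail_ripple(i_load, z_out_ohms=0.05):
    # crude model: ripple = AC part of the load current times the supply's
    # output impedance (0.05 ohm assumed)
    return (i_load - i_load.mean()) * z_out_ohms

slow = rail_ripple(seek_current(step_amps=0.5, seeks_per_sec=20))
fast = rail_ripple(seek_current(step_amps=2.0, seeks_per_sec=80))

print("rms rail ripple, slow-seeking drive: %.1f mV" % (1e3 * slow.std()))
print("rms rail ripple, fast-seeking drive: %.1f mV" % (1e3 * fast.std()))
```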

2 Likes

Let me use a specific example. Let’s say you are using a music server that talks to some kind of purpose-built super quiet (low electrical noise) end point that was capable of supporting a variety of protocols over Ethernet. Assuming one protocol does not cause the endpoint to use more system resources than another and both provide bit perfect data to the DirectStream, you would expect them to sound the same, would you not?

In other words, the protocol used doesn’t matter since the data is the same. What matters is how much noise is passed along with the data.

No, the protocol can matter because there’s a computer on each end of Ethernet or USB. They generate a different noise pattern depending on the processing they are doing: even if two protocols use the same number of instructions per sample on average, their noise will be different. And unpacking losslessly compressed data will make more noise than playing straight samples, since the irregular decompression pattern is noisier than a simple loop.
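
A cartoon of that idea (my own toy model, not a measurement of any real system, with made-up numbers): treat the processor’s supply current as “work per sample”. A straight playback loop draws a nearly constant current, while a block-based decompressor’s draw varies irregularly, so the AC part of the current, the part that becomes conducted and radiated noise, differs even though the delivered samples are identical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16                                   # samples of simulated "supply current"

straight = np.full(n, 10.0)                   # same work every sample
# work varies per 64-sample block, like an irregular decompression pattern
decomp = np.repeat(rng.integers(5, 40, n // 64), 64).astype(float)

def ac_rms(current):
    # the DC part is just the average power draw; the AC part is what shows
    # up as conducted/radiated noise
    return (current - current.mean()).std()

print("AC current from 'straight playback' loop:  %.2f" % ac_rms(straight))
print("AC current from 'decompression-like' loop: %.2f" % ac_rms(decomp))
```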

Look at it this way: these days passwords can be cracked with a simple current meter on the power cord of a computer, and with the right setup a password can be cracked by listening to the fan noise from a laptop. Tho the effect is subtle, we can tell which bits are zero when a processor does a multiply (at least statistically) based on very small changes in the external electrical noise from a system (and sometimes the audible noise.) These differences are harder to filter out when the processor doing the work is in the same box as the DAC proper. We used to use different-sized loops in our 8-bit computers to play music over a radio in the same room. The different releases of Snowmass only differed in the pattern of instructions in the display processor, but that was audible to many.
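
The password/multiply point is the classic power-analysis side channel. Here’s a textbook-style toy of it (not tied to any real CPU; the power model and noise level are assumptions): model each operation’s power draw as the Hamming weight of the operand plus measurement noise far larger than one bit, and averaging many operations still reveals whether a particular bit was 0 or 1.

```python
import random

def hamming_weight(x):
    return bin(x).count("1")

def measured_power(operand):
    # power model: one unit per set bit, buried in measurement noise 20x larger
    return hamming_weight(operand) + random.gauss(0, 20)

random.seed(1)
target_bit = 7
zeros, ones = [], []
for _ in range(200_000):
    operand = random.getrandbits(32)
    (ones if (operand >> target_bit) & 1 else zeros).append(measured_power(operand))

print("mean power when bit 7 is 0: %.3f" % (sum(zeros) / len(zeros)))
print("mean power when bit 7 is 1: %.3f" % (sum(ones) / len(ones)))
# The ~1.0 difference in the means survives noise far larger than a single bit.
```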

Doing a great job of isolating a USB or Ethernet connection for a reasonable amount of money is hard. I’ll try to do a better job in future DACs, but it’s not cheap (most available galvanic isolators actually generate noticeable amounts of noise: many turn a high frequency oscillator on and off for each bit!)
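
That “oscillator on and off for each bit” remark describes on-off keying of an internal carrier. A generic sketch of why that adds RF energy a plain logic line doesn’t have (not modeling any specific isolator; the carrier and bit rates are invented):

```python
import numpy as np

fs = 20_000_000                           # 20 MHz sample rate for the toy model
bit_rate, carrier = 100_000, 2_000_000    # 100 kb/s data, 2 MHz internal carrier
samples_per_bit = fs // bit_rate
n_bits = 1000

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, n_bits)
data = np.repeat(bits, samples_per_bit).astype(float)   # plain logic-level signal
t = np.arange(data.size) / fs
ook = data * np.sin(2 * np.pi * carrier * t)            # carrier keyed on/off per bit

def energy_fraction_near(signal, freq, width=200_000):
    # fraction of the signal's spectral energy within +/- width of freq
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    band = (freqs > freq - width) & (freqs < freq + width)
    return spectrum[band].sum() / spectrum.sum()

print("energy near 2 MHz, plain data line: %.4f" % energy_fraction_near(data, carrier))
print("energy near 2 MHz, OOK isolator:    %.4f" % energy_fraction_near(ook, carrier))
```

The plain data line has almost no energy near the carrier frequency, while the keyed-carrier version concentrates nearly all of its energy there, which is exactly the kind of RF that then has to be kept out of everything else.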

[Edit: I should mention that these effects are really small, they won’t show up in most systems, but they do in some others.]

3 Likes

I am using the new UpTone Audio EtherREGEN on the input of the Bridge II of my Windom DS DAC with fantastic results. I am streaming Tidal MQA material from a Roon Nucleus+.

The improvement is not at all subtle, and it has only gotten better with burn-in. The sound stage expands in all dimensions, the noise floor drops dramatically, bass definition is improved, and textural details of instruments and voices are enhanced.

The theory is that the bits are not changed, but noise on the signal is dramatically reduced.

The bad news is they’re back-ordered into next year…

4 Likes

But these patterns will not be substantially different, since the overwhelming majority of the data being transferred is the same data. Although different protocols will carry some different data, that data is essentially different every time, so the effect of one protocol versus another would be indeterminate. Also, you are assuming that these differences would change the noise passed on to the DAC. That is in no way certain.

No, the patterns will be very different: the processing required for each file type is different, the underlying stream protocols will be different, and the circuit paths may be different (in terms of the output and input software and hardware stacks).

2 Likes

Wow, the most interesting forum post in a while, and a great analogy with the password/fan. Thanks Ted!

I would love to someday get simple instructions on how to connect a streaming device (and which one) for reasonable money, along with a short description of the SQ difference compared to a Bridge setup like mine, with all network and PC equipment on a different power circuit and a galvanically isolated network connection.

It would also be interesting to recheck whether the isolation really improves things.

My assumption is that even if we get some SQ difference reports, they will mostly be comparing different and simpler starting points.

1 Like

Not an assumption: tho I didn’t say the DAC is the only thing that’s sensitive to the noise. Often the DAC isn’t the issue at all; it’s some other part of the system, but it’s still noise from the processor decoding the data.

It’s easy to see the low-level differences in the noise on the scope at the DAC’s output when different source decoding algorithms feed the USB connection. In the past I’ve heard those differences in all manner of systems, very consistently: they were very similar no matter which DAC, digital connection (including TOSLink), preamp, amp, speakers, etc. was being used. My tinnitus drowns them out today, but I can still clearly remember a couple of decades of listening to the differences.

3 Likes

I guess what I am pointing out is that the noise created by different protocols sending the exact same data is going to be variable and inconsistent. It may not affect sound quality at all. What sounds one way with one protocol one time might sound different the next. Suggesting that what comes out of the device is going to be noticeably different based on the protocol used to get the data to it is a massive stretch.

In multiple systems it seemed about as noticeable as the differences between the various Snowmass releases (tho audio memory is pretty fickle.) And, in general, most of the people around me didn’t seem to notice anything at all.

I guess I’d like to say that comparing differences to differences is almost useless. Differences this small are inaudible to most, are very suggestible to many, and are almost never reproducible in magnitude from system to system. They are almost always smaller than changes in power cords, interconnects, etc., and certainly smaller than the kind of changes we expect when changing a component. Years ago they were very reproducible and consistent to me and (just a few of) my friends. But not often now. I can still hear them, but it’s more of a feeling than something objective.

4 Likes

And differences between protocols that you can hear may sound better with some tracks and worse with others. The differences you can hear are not going to be consistently better or worse.

What would happen in your cosmos (or setup) if what Ted says were also true for you?

I’m always curious when people try not to believe things that admittedly fall outside of clear rules, even when they come from someone who appears to be one of the best experts on earth on the matter, and instead guess around them…