I am not taken in. I’ve done the testing, just as you have. No matter what I do, no matter how I check it… I can find no evidence whatsoever that a $90.00 USB (HDMI, etc.) cable performs any better than a garden variety $6.00 one.
That said, I do find that the super cheapo $1.50 cables you buy for your phone at the corner store are generally so substandard as to barely carry charging current. Wires the size of hairs… not good.
Your experience is different from mine and many others’. But you are claiming a categorical negative (and, in your own words, based on a small number of experiments). Conversely, I know full well that there are many who don’t hear a difference significant enough to buy/make/use different cables, but that has no bearing on whether it’s possible to hear or measure a difference. Also, you obviously didn’t read the links I gave on jitter…
For your links … we’ve been just a bit busy arguing.
I don’t know what you would consider a small number of experiments… I’ve probably put these things through current, spectrum, scope, RFI, long-term reliability and, yes, listening tests a couple of dozen times over the years. Probably less often than you have… but then, I’m batting 100% “no difference”, which is pretty convincing all by itself.
Yes, I am aware of the jitter issue. It exists at the sending end too. If the CPU isn’t there on time to send a data packet, it can happen… but generally it’s so minor as to be ignorable. Especially with well buffered receivers.
The secret here really is in buffering… not relying on real-time data… keeping the data stream and the DAC as far apart as possible.
I don’t know how you can’t see this: there’s no argument that buffering is a good thing, but you need a clock to take the data out of the buffer, and getting a quality clock to unbuffer and run the DAC is where the jitter problem lies. I’ll stop repeating myself because you are apparently either not reading what I’ve already written, not asking questions about what you don’t understand, or arguing for the sake of arguing.
High precision clocks are easy… a crystal operating at (say) 20 MHz with programmable dividers for the various data rates… In a software-driven system you can probably even program an interrupt timer to do the job.
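Just to illustrate the divider arithmetic (this is my own sketch, not anyone’s actual firmware, and the 20 MHz crystal is only the assumed example from above), here is what the nearest integer divider and the residual error look like for a few common sample rates:

```c
#include <stdio.h>
#include <math.h>

/* Illustration only: nearest integer divider from an assumed 20 MHz
 * master crystal to a handful of common audio sample rates, plus the
 * resulting frequency error in parts per million. */
int main(void)
{
    const double master_hz = 20e6;                      /* assumed crystal */
    const double rates[]   = { 44100.0, 48000.0, 96000.0, 192000.0 };
    const int    n         = sizeof rates / sizeof rates[0];

    for (int i = 0; i < n; i++) {
        long   div     = lround(master_hz / rates[i]);  /* programmable divider */
        double actual  = master_hz / (double)div;       /* rate you really get  */
        double err_ppm = (actual - rates[i]) / rates[i] * 1e6;
        printf("%8.0f Hz: divider %4ld -> %10.2f Hz (%+.0f ppm)\n",
               rates[i], div, actual, err_ppm);
    }
    return 0;
}
```

That residual error is one reason audio gear tends to use master crystals that are exact multiples of the sample-rate families (e.g. 24.576 MHz for the 48 kHz family), so the divider comes out even.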
Well, except for the profound hope that a bit is still a 1 or 0, even in audio, I have to believe that jitter is possible and it could be (not saying it is) caused by a cable. However, that bit’s representation as a voltage sequence, the fact that the sequence could contain clocking info, etc., and the meaning of that bit within the overall protocol certainly open up a lot of places where bad things (let’s call it jitter, since we’re talking about it) could be introduced. I’m not saying I know any of this, just that there are so many moving parts that if any one of them is not behaving as a designer thinks it should, the jitter or other distortion we’re talking about could be introduced. Ideally, we’d have (in playback) direct access to the clock of the recording master, and that clock source would be perfect and thus the recording would be jitter-free. However, I know that’s not going to happen.
I’d be inclined to believe that the impedance of a circuit could change based on a particular cable and what was connected at the other end. If so, a similar circuit with glass fiber might eliminate the problem. If impedance changes were a root cause, this in turn might cause variations in the clocking somewhere. As for optical cables, I’d likewise think that turning a light on/off at megabit speeds to drive a fiber link could change the driving circuit and somehow color the sound by changing a clock somewhere. Maybe the LEDs driving the fiber are not responding well to the gates that drive them. I’m not saying I know any of this, but I have enough design experience to know how sometimes a 12 cent part can introduce subtle (Ted’s word is insidious) variations simply because nobody knew that part of the circuit was worth spending $2 on.
So I’m really curious: what are the usual suspects? Are there any usual suspects? Presumably if we had perfect clocks (read that as crystals with infinite price) we’d be better off. However, what else could introduce the variation? More accurate diodes driving fiber? Better transient response from solid-state gates? If you were to try to pick the low-hanging fruit, what (statistically) would be the best place to start looking?
In my experience most of the reason for delayed or jittered data is software. Programs that don’t multi-task smoothly enough to always have the next block of data ready to go or random interrupts from other software delaying a CPU time slice.
This is why I say that buffering is so important. Instead of thinking DAC, think of a storage device that plays music (kind of like an iPod)… you could load an entire musical selection into memory using USB file transfers and simply play it from memory, just like the iPod does. The delay this might incur can be eliminated by starting to play as soon as the first block of data arrives, then multi-tasking with one thread loading the memory buffers and the other playing the data.
This kind of scenario is easily done with modern CPUs, and it would eliminate the jitter problem outright.
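A rough sketch of that two-thread idea, assuming POSIX threads and made-up block/buffer sizes (this is purely illustrative, not anybody’s actual player code):

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: one thread "loads" audio blocks into memory while
 * the other starts "playing" as soon as the first block has arrived. */
#define BLOCK   4096
#define BLOCKS  256

static char            track[BLOCKS][BLOCK];
static int             loaded;                    /* blocks received so far */
static int             done;                      /* loader finished        */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  more = PTHREAD_COND_INITIALIZER;

static void *loader(void *arg)                    /* stands in for the USB file transfer */
{
    (void)arg;
    for (int i = 0; i < BLOCKS; i++) {
        memset(track[i], i & 0xff, BLOCK);        /* pretend a block arrived */
        pthread_mutex_lock(&lock);
        loaded = i + 1;
        pthread_cond_signal(&more);
        pthread_mutex_unlock(&lock);
    }
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_signal(&more);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *player(void *arg)                    /* stands in for feeding the DAC */
{
    (void)arg;
    for (int i = 0; i < BLOCKS; i++) {
        pthread_mutex_lock(&lock);
        while (loaded <= i && !done)              /* wait only if block i isn't there yet */
            pthread_cond_wait(&more, &lock);
        pthread_mutex_unlock(&lock);
        /* hand track[i] to the DAC here, at the DAC's own pace */
    }
    return NULL;
}

int main(void)
{
    pthread_t tl, tp;
    pthread_create(&tl, NULL, loader, NULL);
    pthread_create(&tp, NULL, player, NULL);      /* playback starts with block 0 */
    pthread_join(tl, NULL);
    pthread_join(tp, NULL);
    puts("track played from memory");
    return 0;
}
```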
One wire can carry only one voltage at a time. A bit is a voltage… so no, it’s not like any kind of sequence.
I purchased a Wireworld Silver Starlight HDMI cable. It sounded just so-so out of the box, so I will let it break in for a few days. My AQ cable sounded crappy until it had a hundred or so hours on it.
So if wire is wire and bits are bits how come cables need to be broken in to sound their best? Maybe I, along with dozens of respected audiophiles I know THINK we hear cables change with break-in but we’re imagining it.
Two separate clocks will drift apart and the buffer will under/overflow. If you keep them from drifting apart, how do you manage that, given the papers I showed you?
Many years ago I designed a printer buffer… Conceptually it’s a circular buffer…
A simple little computer that reads data in from a serial port, stuffs it into memory on one end, and spools it out to a much slower printer on the other end. Typically this would receive 19,200 baud data, hold about 2 megabytes, and spool it out to a 4800 baud printer. You know the buffer is full when the “next in” pointer is one less than the “current position” pointer… you know it’s empty when the two pointers are equal. Round and round until it’s all printed (or played).
(It was a bit more complex than that, having multiple inputs… but that is how the buffering worked)
So, let’s assume this sequence…
USB DATA —> USB RECEIVER —> MEMORY BUFFER —> DAC —> AUDIO OUT—>
You would have to receive that data at USB speeds until the memory buffer starts filling… say 100 bytes or so. Now your software releases the DAC to begin sampling at its set sample rate. You have music right out of the gate, no delays. Now, since the buffer is going to fill faster than it empties, you need to keep track of where the new data is going with a memory pointer (NI) and make sure NI never overruns the DAC’s position (CP); that’s how you know the buffer is full. Once the buffer memory is full you stop the incoming data and let the DAC run around the buffer until it comes up behind NI by some amount, then restart the data transfer. Round and round.
Circular buffers are easy enough to program… CP = (CP + 1) % BufSize; … it will run around in circles inside the defined buffer size, moving one position on each call.
With proper logic the actual clock speeds are unimportant … just so you are preventing collisions such as overwriting data that isn’t played yet or running out of stuff to play.
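Here is a minimal sketch of that NI/CP flow control, assuming a power-of-two buffer size and the ~100-sample prefill described above (illustrative only, not any real DAC’s firmware):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustration of the NI/CP circular buffer described above.
 * BUF_SIZE and the prefill threshold are arbitrary assumptions. */
#define BUF_SIZE 4096          /* power of two keeps the wrap cheap        */
#define PREFILL   100          /* release the DAC after ~100 samples       */

static int16_t  buf[BUF_SIZE];
static unsigned NI;            /* "next in": where incoming data lands     */
static unsigned CP;            /* "current position": what the DAC reads   */

static unsigned next(unsigned p)       { return (p + 1) & (BUF_SIZE - 1); }
static bool     buffer_full(void)      { return next(NI) == CP; }  /* NI one behind CP */
static bool     buffer_empty(void)     { return NI == CP; }        /* pointers meet    */
static unsigned samples_buffered(void) { return (NI - CP) & (BUF_SIZE - 1); }

/* Called as USB data arrives; returns false to tell the source to pause. */
bool usb_in(int16_t sample)
{
    if (buffer_full())
        return false;          /* hold off the sender so nothing is overwritten */
    buf[NI] = sample;
    NI = next(NI);
    return true;
}

/* Called by the DAC at its own sample rate; returns true once a sample is ready. */
bool dac_out(int16_t *sample)
{
    static bool started = false;
    if (!started) {
        if (samples_buffered() < PREFILL)
            return false;      /* keep the DAC waiting until the prefill is in */
        started = true;        /* "release the DAC": playback begins           */
    }
    if (buffer_empty())
        return false;          /* ran out of stuff to play (underrun)          */
    *sample = buf[CP];
    CP = next(CP);
    return true;
}

int main(void)
{
    int16_t s;
    for (int16_t v = 0; v < 500; v++)  /* pretend 500 samples arrive over USB  */
        usb_in(v);
    while (dac_out(&s))                /* DAC drains at its own pace           */
        ;
    printf("buffer empty again: %s\n", buffer_empty() ? "yes" : "no");
    return 0;
}
```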
This BTW is how DirectX and other streaming software actually dispenses the decoded PCM data from memory to USB.
We aren’t talking about USB - we are talking about S/PDIF, AES3, TOSLink, I2S, etc. There is no feedback to the source, no flow control, etc. The bits keep coming from a (usually not high quality) clocked source. How do you keep a 10 MHz clock from drifting away from a 10.0001 MHz clock? (Which, BTW, is in spec.)
[And BTW, USB has its own problems, which typically are worse than those caused by embedded clocks.]
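For what it’s worth, here is the back-of-the-envelope arithmetic on that 10 ppm example, assuming a 44.1 kHz stream and 4096 samples of buffer slack (both numbers are just my assumptions for illustration):

```c
#include <stdio.h>

/* Back-of-the-envelope: how long until a fixed-size buffer over- or
 * under-runs when the source clock and the DAC clock differ slightly. */
int main(void)
{
    const double rate_hz    = 44100.0;   /* nominal sample rate               */
    const double offset_ppm = 10.0;      /* e.g. 10 MHz vs 10.0001 MHz        */
    const double slack      = 4096.0;    /* samples of headroom in the buffer */

    double excess_per_sec = rate_hz * offset_ppm / 1e6;  /* extra samples/s one side gains */
    double seconds        = slack / excess_per_sec;      /* time until the slack is gone   */

    printf("%.3f extra samples per second -> buffer exhausted in %.0f s (%.1f min)\n",
           excess_per_sec, seconds, seconds / 60.0);
    return 0;
}
```

So even a small, in-spec offset walks the pointers into each other within a couple of hours unless one clock is slaved to the other or the data is resampled.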
Actually, we were talking about whether cords make a difference or not.
Ok… if they are feeding you real-time data, you are right: you are completely at the mercy of the source. Which probably means you should consider using other methods…
Not so with USB or FireWire or even Bluetooth. Here you have protocols that will let you implement what you need.
Then why does the Shunyata Research Sigma AES-EBU cable that I just bought and installed on my DMP sound significantly better than the Audioquest cable that it replaces?
It continues to fascinate me that well-educated individuals with some type of science or engineering background will appear periodically and declare “you are not hearing what you are hearing, because I know everything.” They declare “truth,” attempting to intimidate by asserting their credentials and employing jargon. They declare they are being mistreated when you disagree with them.
My science education was entirely different. I learned that all science begins with observation, often after experiencing something which is contrary to expectation or otherwise fascinating. We do not deny the experience, but rather attempt to understand and explain it. The more I study, the more I learn how little I know. And that there are many vastly more knowledgeable than I.
These posters present with a closed mind and lose the opportunity to learn. This is not science, this is unthinking emotional dogma.