Hello everybody,
I’ve been reading here for a while and have decided to contribute my two cents.
For a long time I did not believe in sound changes through Ethernet cables or switches, until, unfortunately, I found out that I was wrong.
And that is making me sick, because I am an analytical person with software development background and need an explanation for everything.
I can absolutely understand what @rower30 explained, but there are still audible differences between Ethernet cables.
In my setup, an unshielded, stripped 2 x 2 cable for 100BaseT sounds best.
I read about this in a thread in a German forum and tried it out just for fun - actually expecting to confirm that bits are bits. Unfortunately I was disappointed: it really did sound better.
The cable consists of two loose AWG23 twisted pairs that I took out of a horizontal CAT6 cable, without any jacket or shielding.
At first I thought the cause was the larger distance between the twisted pairs (they hang loosely in the air). So I got a 1 x 2 twisted-pair cable with 100 ohm impedance and shielding. This is available as SPE (Single Pair Ethernet), a standard currently being introduced in the automotive industry. A cable made up of two runs of this SPE cable didn’t sound as good as the completely unshielded one.
So I kept looking. I got myself a 1 x 2 Gore 100 ohm twisted-pair cable with an ePTFE dielectric and shielding and built a cable from it. This sounds better than the Single Pair Ethernet cable, but still not as good as the completely unshielded one.
So, as @rower30 reports, shielding does not have only positive effects. Naively, I imagine that the Ethernet signal bouncing around via reflections inside the shield is more harmful than the normal electromagnetic radiation found in a typical household.
In the meantime I have also achieved sound improvements with intermediary transformers that divert interference to GND.
But where do the sound differences come from?
I have the following theory, which I cannot yet substantiate. I am about to buy a high-performance oscilloscope to check it. This whole speculation is just as upsetting to me as it is to @rower30.
Here is my theory:
I am also firmly convinced that sound changes can only arise during the D-to-A conversion in the DAC.
Ethernet uses differential signaling: the two conductors of a twisted pair carry equal and opposite voltages, so their sum (the common-mode voltage) should ideally always be zero.
However, if the sum is not zero, this common-mode component cannot be completely rejected at the receiver. The residual voltage can then negatively affect the clock in the end device during the DAC conversion, or affect the signal being converted.
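To make the idea concrete, here is a toy numerical sketch (my own illustration of the mechanism, not a measurement): if the two legs of the pair are slightly mismatched in amplitude or timing, their sum leaves a residual common-mode voltage, while the difference (the actual data signal) is barely affected. The 2% amplitude mismatch and 50 ps skew are arbitrary assumed values, not measured ones.

```python
# Toy model of common-mode residue on a differential pair.
# One leg is an ideal sine; the other is its inverse with a small
# amplitude mismatch and timing skew (both values assumed for illustration).
import math

def leg(t, amp=1.0, delay=0.0, f=125e6):
    # 125 MHz is roughly the symbol rate region of 100BASE-TX signaling
    return amp * math.sin(2 * math.pi * f * (t - delay))

def residuals(amp_mismatch=0.02, skew=50e-12, f=125e6, n=1000):
    """Return peak common-mode voltage (sum) and peak differential voltage."""
    period = 1.0 / f
    cm_peak = 0.0
    diff_peak = 0.0
    for i in range(n):
        t = i * period / n
        v_pos = leg(t)                                       # ideal leg
        v_neg = -leg(t, amp=1.0 + amp_mismatch, delay=skew)  # mismatched leg
        cm_peak = max(cm_peak, abs(v_pos + v_neg))           # common-mode residue
        diff_peak = max(diff_peak, abs(v_pos - v_neg))       # data signal
    return cm_peak, diff_peak

cm, diff = residuals()
# A perfectly balanced pair would give cm == 0; the small mismatches leave
# a nonzero common-mode residue, while the differential signal stays ~2x one leg.
print(f"common-mode peak: {cm:.4f}, differential peak: {diff:.4f}")
```

With the assumed mismatches, the common-mode residue comes out to a few percent of the differential amplitude, which is the kind of leftover voltage my theory is about.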
This theory is also supported by my experience that Ethernet isolators, which improve the symmetry of the signal (in both time and voltage, though not completely), have a positive effect on the sound.
I can’t follow the whole argument about Ethernet jitter. I have never had any problems with it in measurements at the end device, even with cheap network switches. A high-quality clock in a network switch can at most affect the sound through better voltage symmetry, not by avoiding jitter in the Ethernet transmission - that’s nonsense.
I am looking forward to your opinion.