Ethernet Cables and Sound

Thanks again! I couldn’t stand not knowing what was going on. A year ago this was a hot topic: were the differences only imagination, or were they real and provable? I think I was on completely the wrong path last year, when I searched for waveforms microseconds or even nanoseconds long and compared them with each other.

This year I started with exactly the same thoughts, and because I again couldn’t find any proof of what I heard, I started measuring longer samples. A few times I found a bad sample, threw it away, and kept the samples that looked just like the reference samples from the Jcat Ultra. What stood out was that the waveforms from the Jcat, even when zoomed in, looked close to one another. The samples from the router were a little more different each time. Was this all that could be found? No: just by luck I kept finding bad samples over and over, and each time they came from the router and never from the Jcat.

As far as I know you are right about the jitter. Jitter exists only in the digital domain, where in a perfect world every cycle or bit would have the same length. I’ve been asked this question twice now :grin: As far as I know the analog signal is a direct representation of the bit stream. There are two common audio master clocks, one at 24.576 MHz and one at 22.5792 MHz, matching the two native sample-rate families. You can divide 24,576,000 evenly by 48,000, 96,000, and 192,000; you can guess what the other clock is used for. With this in mind I called the differences between the two analog waveforms jitter.
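The clock arithmetic above is easy to check for yourself. A minimal sketch (the clock values are the standard 512× master clocks, everything else is just integer division):

```python
# The two common audio master clocks and the sample-rate families
# they divide into evenly.
clocks_hz = {"24.576 MHz": 24_576_000, "22.5792 MHz": 22_579_200}
rates_hz = [44_100, 48_000, 88_200, 96_000, 176_400, 192_000]

for name, clk in clocks_hz.items():
    family = [r for r in rates_hz if clk % r == 0]
    print(f"{name} divides evenly into: {family}")
```

Running this shows 24.576 MHz covering the 48/96/192 kHz family and 22.5792 MHz covering the 44.1/88.2/176.4 kHz family.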

On the internet you can find how a PCM or DSD signal is built. If I’m right, a PCM signal contains bits that encode the sample value (the “volume”). What I see in the bad waveforms is that not only the timing seems to be a problem, but also the amplitude. This is why I think you can see those differences in spike lengths.

My best guess is that in really bad networks many digital mistakes, or jitter, can be found. With every mistake you hear a small SQ problem, so with every upgrade in the network you can limit the number of mistakes. Be aware that every device, no matter its cost or quality, has a certain noise level and will be responsible for a certain number of mistakes. So the better the power supplies in your network, the better the noise levels of the devices used, and the better the cabling, the better the sound will be.

Yes that would be nice and very welcome if someone with more knowledge would step in.

By the way I think 396 is interesting as well :slight_smile:

You plugging a cable Bro?
Not our first Rodeo…just sayin’

Yep. The fourth frame is insanely different.

Not willing to spend the exorbitant material cost, but happy to hear for myself in my own system.
(Without buying first obviously, the onus is on you to prove it to the Buyer)

Very much not a cable sceptic, btw; I run Wireworld Platinum power cables throughout my system.
Why? Because they work and are better.

Any experiences in HQ Ethernet cables feeding the Bridge II?

I am interested in:

  • Furutech LAN-8 NCF Ethernet
  • Audioquest ethernet cables
  • CrystalNetwork Diamond cable

I never used those cords and don’t have a Bridge II in my DSD. The Inakustik CAT6 is great and edges out the Inakustik Referenz in tone and timbre. The Referenz CAT7A is more lively, has more speed and attack on instruments and percussion, and makes it easier to pick out individual voices or instruments, but vocals and the soundstage are fuller on their CAT6. Neither cord is budget priced and both are fantastic. It boils down to your taste in music: if you live for instruments and precision imaging, get the Referenz; if you want all-around balanced tone and timbre with less detail, use the CAT6. You can mix and match them in the streaming chain. I have two of each cord and prefer the CAT6, but I don’t dislike the Referenz and use it for video streaming now. I would consider no other Ethernet cords for price and performance.


10G Ethernet uses PAM-16 encoding and allows a maximum bit-error rate of 1 in 10¹⁰ over a 100-meter link. The signal at the end of that link is far, far, far worse than the simple digital signal differences shown in this thread. Those aren’t a challenge to the error correction built into Ethernet, or into digital transmission in general.
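It helps to put that BER figure into concrete terms. A rough order-of-magnitude sketch (using the 1-in-10¹⁰ number quoted above; formal 10GBASE-T targets are often cited tighter, around 10⁻¹²):

```python
# Rough arithmetic: how often a raw bit error occurs at a given BER.
# The 1e-10 figure is the one quoted in the post; treat the result
# as order-of-magnitude only.
line_rate_bps = 10e9   # 10 Gb/s line rate
ber = 1e-10            # worst-case bit-error rate

errors_per_second = line_rate_bps * ber
print(f"~{errors_per_second:.1f} raw bit error(s) per second at full line rate")
```

Even at that worst-case rate, every errored frame is caught by the frame check sequence and dropped or retransmitted rather than passed on.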

We hear ERRORS, and there are none between the DA and AD. The ends are the problem.

What matters is the ORIGINAL signal’s error during the encoding process, due to sample-point jitter and sample period (frequency). This is what Ethernet “moves”, no better and no worse. The opposite end adds distortion in the filtering process back to analog.

We move trillions of bits with Ethernet and the error-correction protocols built into it. As long as the data is repeated and moved along to Ethernet’s requirements, it is agnostic to noise or even to the cable. Where the cable comes in is in extending the maximum distance we can go between repeater / error-correction points. The cable just changes when we have a valid link, based on errors.

Curiously, all Ethernet digital cables are tested in the ANALOG domain. Yep, 100% analog frequency testing. Don’t believe it? I’ll send you a UL confirmation test report on request.

Once the digital signal is on the cable, however good or bad the AD process was, it moves error-free for trillions of miles. What happens in the DA process at the other end is again where we add distortion.

That is the “magic” of digital: once a signal is decided upon, it moves error-free. The key is at the ends of the process. Ethernet can’t decide what a one or a zero is; that’s done at the front end. Its job is to MOVE a one or a zero error-free. You get to decide what and when we have a one or a zero. That job done, Ethernet goes about its business of moving the bits error-free. At the receive end, the filters determine how to rebuild the signal to analog, if it is even filtered to analog. That circuit creates the errors, based on its construction.

How the AD process is built isn’t Ethernet’s fault. Ethernet did its job: it got the data, correctly, from the other end, with proper CRC checks within the frame type selected.
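The CRC check mentioned above is what makes "error-free" delivery possible: Ethernet's frame check sequence is a CRC-32. A minimal sketch of the principle (Python's `zlib.crc32` uses the same polynomial as the Ethernet FCS, though the on-wire bit ordering differs; the frame bytes here are just a stand-in):

```python
import zlib

# A single flipped bit anywhere in a frame yields a different CRC-32,
# so the receiver detects the corruption and drops the frame instead
# of passing bad data up the stack.
frame = bytes(range(64))             # stand-in for a frame's contents
good_crc = zlib.crc32(frame)

corrupted = bytearray(frame)
corrupted[10] ^= 0x01                # flip one bit
bad_crc = zlib.crc32(bytes(corrupted))

print(f"good CRC: {good_crc:#010x}")
print(f"bad  CRC: {bad_crc:#010x}")
print("corruption detected:", good_crc != bad_crc)
```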


Hello Galen,

Thank you!

I believe that, and I know the Ethernet stream is bit-perfect and error-free. I also think the DAC is the problem, but it’s possible to help the DAC with a better signal that is easier to interpret: a better ‘1’ and a better ‘0’. How this works exactly is still not clear to me. Is it noise, is it extra computing power, is it a better clock signal, or maybe a bit of everything? This can be done by using better Ethernet equipment, cabling, power supplies, clocks with lower phase noise, etc. Everything helps, and it has proven to increase sound quality by a lot in many systems.

My take?

Ethernet uses what is called ACR, the Attenuation-to-Crosstalk Ratio. If the signal meets that measure, it’s good to go. Yes, there are MANY aspects to meeting ACR, but once a cable “passes” we get the advertised BER for Ethernet. Working on a problem that doesn’t exist is money not well spent.
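At a single frequency, ACR is simply near-end crosstalk minus attenuation, both in dB. A small sketch of that calculation (the frequency points and dB figures below are illustrative placeholders, not values from any real cable datasheet):

```python
# ACR (Attenuation-to-Crosstalk Ratio) in dB at one frequency:
# a positive margin means the attenuated signal still stands above
# the crosstalk floor at that frequency.
def acr_db(next_db: float, attenuation_db: float) -> float:
    return next_db - attenuation_db

# frequency (MHz) -> (NEXT dB, attenuation dB); illustrative numbers
measurements = {
    100: (45.0, 22.0),
    250: (38.0, 36.0),
}
for f_mhz, (nxt, att) in measurements.items():
    print(f"{f_mhz} MHz: ACR = {acr_db(nxt, att):.1f} dB")
```

Real certification sweeps this across the whole band and checks the worst-case pair combinations, but the pass/fail logic per point is just this subtraction.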

The jitter at the front end is the error in setting the data point within the time slice of the analog waveform at exactly the right spot, or not, from the analog input. Whatever impacts that accuracy is carried to the DA circuit by Ethernet, perfectly as it is received. That’s the key: as it is received.

The far end is similar: how does the filter reconstruct the analog from the digital without altering the digital’s intended placement of the analog at each point in time? This error is superimposed onto the AD signal sent to the DA circuit.

We can’t fix the errors of the digital construct at the far end; we just try to keep them from getting worse, kind of like pure analog.

Once Ethernet is given the 0 and 1 data, it is nearly peerless in accurately moving them.

I think the time-based accuracy of DA and AD is critical to MAKING the ones and zeros land in exactly the right “spot”. I’d spend more money here than on the Ethernet circuit, and for bigger benefits. AD and DA circuits are the real magic, too: complicated and critical.
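There is a standard worked example for why that timing accuracy matters: on a full-scale sine wave, the amplitude error from sampling-clock jitter is bounded by slew rate × jitter, which puts a ceiling on SNR of about −20·log₁₀(2π·f·t_j). A sketch with illustrative jitter values (not measurements of any particular converter):

```python
import math

# Jitter-limited SNR ceiling for a full-scale sine at frequency f
# sampled with RMS clock jitter tj: SNR = -20*log10(2*pi*f*tj).
def jitter_limited_snr_db(freq_hz: float, jitter_s: float) -> float:
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_s)

for tj in (1e-9, 250e-12, 10e-12):
    snr = jitter_limited_snr_db(20_000, tj)
    print(f"20 kHz tone, {tj * 1e12:.0f} ps jitter -> ~{snr:.0f} dB SNR ceiling")
```

So 1 ns of jitter caps a 20 kHz tone at roughly 78 dB, while tens of picoseconds are needed to stay clear of a 16-bit noise floor, which is why the converter clock deserves the money.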

Just my analysis after 35 years designing Ethernet cables.

Galen Gareis

It’s not the digital domain you need to worry about. Think of it this way: your DAC is connected to your amp in an analog manner. The outputs of your DAC will pass any HF noise on the input side through to the output side. The Ethernet cable connected directly or indirectly to your DAC’s input will pass any HF noise present straight through your DAC to your amplifiers. So the Ethernet cable is an analog cable encoded with a digital signal. This is why different Ethernet cables sound different from each other, with variances such as shielding and cable construction materials.


HF RF noise being “passed” is what we are told, versus what is measured. The FCC limits the egress of RF noise, and a long cable is an antenna. The opposite is ingress, which is managed with HF filters to ground. There are strict limits on RF egress emissions so that the ingress of those emissions won’t upset RF systems. We attack the problem on the emissions side, not the ingress side, but the RF filter technology works both ways; it is just easier to pick one or the other method to test.

UL labs also perform what are called antenna-test-site measurements of electronic devices to test and PASS RF emission levels. This gives you a Class A or B certificate. The domestic emissions certificate is stricter than the industrial one, as industry has hardened RF filter systems for the sheer bulk of noisy motors and such in a small area. We don’t do that at home.

It’s been said before, but show me the data to support the idea that RF goes straight to the analog circuit of an amplifier stage. Where’s the beef? Put a high-impedance probe on the DC rails and the AC side and show me the nonlinearity in the audio band.

I’m all for being open-minded, but only AFTER we prove there is actually a problem there. Saying one is there isn’t enough. Do wire structures sound different? Yes, but that still forces us to examine wire with everything we can currently use before we postulate new ways to find the effects. It isn’t an excuse to invent a WHY, either. “Don’t know” should be adequate until we get repeatable tests.

AC circuits are fed from the DC-side supply. Their linearity is DIRECTLY connected to that supply’s DC voltage; no question, the DC determines the gain linearity after all. Does that DC, when we test it under various conditions, change? If it is dead stable, we have a decent power supply that mitigates RF. How much DC variation does it take to hear? “Perfect” is below audibility, unless you have more money to spend damping it even further below audibility. Almost all DC circuits use RF shunts to ground at key points.

Does the AC side show RF ingress through the DA block? RF is superimposed onto the DC as an offset in the cable. The DA block ignores this noise as it reconstructs the digital signal picture-perfect at the input to the analog filters. RF is removed at this point, as well as at specific points in the circuit, with low-impedance RF shunts to ground.

We have to have a problem, not invent one. Use every repeatable means we have. Saying my audio gear has subpar power supplies and poor input RF suppression isn’t likely, based on the UL/FCC compliance sticker on the back. If equipment is poor, tests can be done to prove it. If we say a problem is there but we can’t measure it, how do we know when it is gone? It can’t be an agreement; it has to be a fact.

The only way I know is to provide IDENTICAL structures with ONLY the article under inspection CHANGED. In my case it is the wire; nothing else is altered. A true repeatable test should show differences between each copper’s effect on the EM wave we hear as a signal. So far there is ZERO evidence we can do this yet, but all the current tests have been done. The rubber hits the road at the EM signal properties moving down the cable: if it sounds different, the EM wave has to be different in phase or amplitude in the time domain (dv/dt).

Galen Gareis


I spent some time swapping Ethernet cables again. What surprises me is that some brands or structures give better highs and clarity, but one really, really increased bass. Their placement in the digital chain matters: the cable from router to streamer had more impact than the cable between router and endpoint.

The Ethernet cables I need when Digital Signal Processing is active to correct room acoustics differ from those I prefer without active DSP. My flavor of DSP only attenuates frequencies and doesn’t add or boost. Yes, I use linear power supplies. All cables used were measured as spec-compliant and came with data or compliance certificates.

Since this came up, I guess I’ll add my 2 cents. In my new house I’ve been setting up the core networking infrastructure, the foundational gear, before even getting to the audio-side networking. I’m using CAT6A for my cable modem, router, and primary network switch.

I decided to get some CAT6A from Blue Jeans, as they are one of very few civilian sources of tested cables. I discovered that an identical length of BJ CAT6A (across multiple samples) consistently and repeatably tested with significantly lower throughput on Speedtest (to the same server, at the same time of day, from the same computer in my house) than some unknown-brand CAT6A cable I got as surplus from work; our IT people make these up from spools. When I saw this, I decided to extend the test to some random but highly rated CAT6A off Amazon. This other cable also had significantly higher throughput (this time all three were tested and retested at the same time). Not sure what any of this means.
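The repeated test-and-retest approach described above is the right instinct: to call a throughput difference real, the gap between cables has to be larger than the run-to-run spread for any one cable. A small sketch of that comparison (the Mb/s figures below are made-up placeholders, not real test data):

```python
import statistics

# Several throughput runs per cable; a real difference should show a
# gap between means that is large compared to each cable's stdev.
runs_mbps = {
    "cable A": [912, 905, 918, 909, 914],   # illustrative numbers
    "cable B": [868, 875, 861, 871, 866],
}
for name, samples in runs_mbps.items():
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    print(f"{name}: mean {mean:.1f} Mb/s, stdev {sd:.1f} Mb/s")
```

With enough runs per cable, this separates a genuine link-rate or negotiation problem from ordinary server and network variance.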

True!! I think this is where the problem lies, but how can we solve it? Maybe when we buy an MSB DAC we have solved most of the problem, but with ‘normal’ DACs we have to deal with the noise another way.

For example, I removed the TCXO clock and played on the built-in TTL chip’s clock signal for a while. Immediately I could hear that the sound quality increased. So the phase noise of the TCXO clock close to the DAC chip is already a problem.

Besides this, the Ethernet signal never reaches the DAC chip itself, because of the protocol change to, in my case, I2S. This protocol (if I’m right) is sensitive to jitter and serves the DAC chip whatever passes through. For data, the Ethernet signal is error-free and bit-perfect, but what if the data is too late? Or isn’t that possible?

Best regards,

I only did listening tests. The BJC CAT6A, while better sounding than generic brands, was bettered by Certicable. All my Inakustik Referenz and CAT6 cables sound the best, but it depends on the DSP and where I prefer which. My Netgear router is also a switch.

The Inakustik CAT6 wins out if using room correction. Without DSP, the Referenz 7 to the streamer and CAT6 to the endpoint sounds way better than all CAT6; all CAT6 had too much bass and a narrow stage. But fix the bass with DSP and then the Inakustik CAT6 does everything best in my digital chain.

In English, what does a $500 Ethernet cable do that my $10 BJC CAT6A doesn’t?

For me it works without issues. I have tossed three BJC Ethernet cables away due to endless issues.

Just as a point of interest, CAT6 and CAT6A are not built to the same standard. I think generally, for me here, CAT6 is better for use in a music playback system. Galen has explained the differences, but I do not remember (or care) what they are.

You’ve got my curiosity. Would you mind explaining the “endless” issues with BJC Ethernet cables? This is not a poke, but your experience would benefit me and possibly others. I am using BJC Ethernet cable and to date have not had any real issues. To be honest, regarding Ethernet cables, I have not experimented with “the high-priced spread”. :blush: