So I guess we were talking the same language. What hung me up is the idea that any manufacturer would implement a half duplex endpoint on purpose! I must have a severe lack of imagination, but I can't wrap my head around any use case where a half duplex connection would be superior to full duplex. I'd always thought that half duplex connections came from using hubs rather than switches, where carrier-sense multiple access with collision detection (CSMA/CD) arbitrates access to the shared wire (wireless does something similar with collision avoidance, CSMA/CA).
As a user, I've always judged a particular cable technology primarily on throughput and any cable length restrictions. Since half duplex requires CSMA/CD, and that in turn can cause throughput interruptions/delays, I never thought it would be used for real-time audio. But for the record, if I had such a device I'd be tempted to install a second network interface card (NIC) in my computer and run a crossover cable directly to that component. That would give me a dedicated link defined by the crossover cable, with no chance of problems from network congestion (this ignores the DHCP/addressing issues of such a connection, which I'm only sure how to solve through Windows' network connection settings).
Most operating systems (post Windows 7) and switch manufacturers (NetGear, Linksys, etc.) give you full duplex connections out of the box. Gigabit NICs/drivers, per the 802.3 specification, require auto-negotiation of duplex and speed. The screenshot below shows one of my NUC's network adapters (a screen I'd never looked at until your post). The duplex/speed settings were consistent across the other machines in my network. While you could manually force the speed and duplex to other values from the drop-down menu, it shouldn't be necessary. As you can see, Windows 10 on my Cat6 network auto-negotiated 1 Gb/s full duplex.
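If you'd rather not dig through the adapter properties dialog, here's a minimal sketch that reports the negotiated speed and duplex for each interface. It assumes the third-party psutil package is installed; the output format is just my own.

    import psutil

    # Map psutil's duplex constants to readable labels
    DUPLEX = {
        psutil.NIC_DUPLEX_FULL: "full",
        psutil.NIC_DUPLEX_HALF: "half",
        psutil.NIC_DUPLEX_UNKNOWN: "unknown",
    }

    for name, stats in psutil.net_if_stats().items():
        if stats.isup:
            # speed is reported in Mb/s; 1000 means a gigabit link
            print(f"{name}: {stats.speed} Mb/s, {DUPLEX[stats.duplex]} duplex")

On my machines this prints the same 1000 Mb/s, full duplex result the Windows dialog shows.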
Networking can be very confusing. And I suspect many audiophiles treat networking like a black box. But it isn't too bad if you know what to look for. Music/video streaming media servers often prefer the user datagram protocol (UDP), which runs perfectly well over full duplex connections. What UDP offers, however, is a one-way "fire and forget" datagram, which means there is no retransmission to correct bit errors. However, if the cable and NICs are matched, the bit error rate for gigabit ethernet is spec'd at no worse than 1E-10, which I think works out to a worst case of roughly one bit error per 30-40 minutes of listening, depending on the encoding.
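Just to show where a number like that comes from, here's the back-of-the-envelope arithmetic. The stream bitrates below are my own assumed examples of raw PCM rates, not anything from the spec:

    # Worst-case bit error rate required by 802.3 for gigabit ethernet
    BER = 1e-10

    # Assumed raw PCM stream bitrates in bits/second (illustrative only)
    streams = {
        "16-bit/44.1 kHz stereo": 16 * 44_100 * 2,
        "24-bit/96 kHz stereo":  24 * 96_000 * 2,
        "24-bit/192 kHz stereo": 24 * 192_000 * 2,
    }

    for label, bps in streams.items():
        seconds_per_error = 1 / (BER * bps)   # expected seconds between bit errors
        print(f"{label}: ~{seconds_per_error / 60:.0f} minutes per bit error (worst case)")

For a 24/96 stream that works out to roughly 36 minutes per bit error at the worst-case rate, which is where my 30-40 minute figure comes from; 16/44.1 is closer to two hours.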
One interesting thing I learned today is that which pairs get used depends on the Ethernet standard (the PHY), not the cable category: 100 Mb/s (100BASE-TX) uses one pair to transmit and one to receive, while gigabit (1000BASE-T) and 10 gigabit use all four pairs in both directions simultaneously. Of course, either cable still requires a suitable NIC at each end for the connection to run at the cable's full capacity.
Networking confusion also arises because you can get bit-perfect file transfers between computers. What isn't so obvious is that file transfers use TCP connections. TCP uses checksums and retransmission on bit errors/dropped packets, because you can't afford an error in a file transfer. So it's tempting to reason that if a multi-gigabyte file can be transferred without error, why can't audio? The difference is in the protocol: a TCP file transfer has no time limit, whereas I'd think real-time audio over UDP has to arrive within a tight timing window for the audio to sound good.
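To make the difference concrete, here's a minimal sketch of the two socket types in Python. The address, port, and payload are made-up placeholders, and it assumes something is actually listening on the TCP side; the point is only that TCP keeps retrying until the bytes are acknowledged, while UDP launches one datagram and hopes:

    import socket

    PAYLOAD = b"audio frame"          # hypothetical chunk of stream data
    ADDR = ("192.168.1.50", 5005)     # assumed receiver address/port

    # TCP: connection-oriented, checksummed, retransmitted until acknowledged
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
        tcp.connect(ADDR)
        tcp.sendall(PAYLOAD)          # blocks/retries until the bytes get through

    # UDP: fire and forget -- no connection, no retransmission if it's lost
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
        udp.sendto(PAYLOAD, ADDR)     # one datagram on the wire, no delivery guarantee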
UDP does not guarantee packet delivery, meaning packets can be dropped or delayed when a switch gets congested. Adding to the dropout problem, some streaming setups use UDP multicast (or broadcast) delivery: the source doesn't know where in the network the receiver is, so a switch without multicast snooping floods those packets out every port, and only the "designated receiver" actually uses them. This means that with more than one streaming source in a small network there can be capacity problems. That flooding is also why this kind of traffic is mostly kept to a home environment (or a small subnet), to limit the congestion it creates.
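For what it's worth, here's roughly what a multicast receiver looks like at the socket level (the group address and port are placeholder values, not from any particular streaming product). The receiver tells the network stack to join a group, and anything sent to that group address gets handed to it:

    import socket
    import struct

    GROUP = "239.255.0.1"   # placeholder multicast group address
    PORT = 5004             # placeholder port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Ask the OS (and, via snooping, the switch) to deliver this group's traffic to us
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, src = sock.recvfrom(2048)   # blocks until a datagram for the group arrives
    print(f"got {len(data)} bytes from {src}")

A switch that doesn't snoop on those membership messages just forwards the group traffic to everyone, which is exactly the flooding I was describing.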
Capacity issues can only be mitigated by making sure the network path between all sources and all destinations can always handle the traffic. Unfortunately, some switch manufacturers oversubscribe the internal switching fabric, so that although each port on the switch is 1 Gb/s, an 8-port switch may only be able to carry 2-3 simultaneous full-rate connections before it starts dropping/delaying packets. Not a problem for TCP because of retries, but bad for real-time audio. Imagine that you're minding your own business listening to your system, your wife starts streaming from the iPad to a TV, and your son starts playing a multi-player game. Depending on your network topology/capacity, you may start to experience dropouts because your switches/routers can't handle the traffic. The only way to mitigate that is to change the network topology or capacity.
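A crude way to think about it (the fabric number and the traffic figures below are made-up illustrations, not specs for any real switch) is to add up what everyone is pushing through the box at once and compare it to what the fabric can actually move:

    # Hypothetical oversubscribed 8-port gigabit switch: the fabric can only
    # move 2.5 Gb/s total even though the ports add up to 8 Gb/s.
    FABRIC_CAPACITY_GBPS = 2.5

    # Assumed simultaneous traffic, in Gb/s (illustrative numbers only)
    traffic = {
        "audio stream to the DAC": 0.01,
        "4K video to the TV":      0.025,
        "multi-player game":       0.9,
        "NAS backup":              0.95,
        "PC-to-PC file copy":      0.95,
    }

    total = sum(traffic.values())
    print(f"aggregate demand: {total:.2f} Gb/s vs fabric capacity {FABRIC_CAPACITY_GBPS} Gb/s")
    if total > FABRIC_CAPACITY_GBPS:
        print("oversubscribed -- something gets dropped or delayed")

Notice that the audio stream is a tiny fraction of the total, but once the fabric is saturated its packets are just as likely to be the ones dropped or delayed.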
I've read a lot of posts on these forums where various users have "solved" sound issues with various cables, switches, etc. What I don't understand is what advantage a Cat 7 cable could have in a gigabit network. Similarly, I don't know why Cat6 would be better or worse than Cat6a with gigabit NICs. While Cat 7 has individually shielded pairs and is rated for 10 Gb/s, if the NIC at one or both ends is only gigabit, the best that can be hoped for is a 1 Gb/s switched connection. Since you've (rower30) engineered cables, perhaps you could provide some insight into the hazards, if any, of using cables that are mismatched with the NICs.