Any connection generates noise, and each connection standard makes tradeoffs as to which noises it minimizes. I2S is good in that it’s balanced and has the clocks separated from the data - on the other hand it was only designed for short connections (i.e., no cables). That doesn’t mean that I2S with cables is necessarily bad, but different choices might have been made if cables were the goal. EMI radiation and pickup (and hence the length of the cable), the quality of the connectors and of the interfaces between connector pins and wires, bulk impedance, signal levels, etc. all affect jitter and noise down a cable. With I2S those effects are easier to handle than with, say, S/PDIF, but they are still there.
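To make the “clocks separated from the data” contrast concrete, here’s a rough Python sketch of biphase-mark coding, the line code S/PDIF uses to merge clock and data onto a single wire. It’s purely illustrative (nothing from the DS or its FPGA); the point is that an S/PDIF receiver has to recover its clock from the data transitions, while an I2S receiver gets the bit clock and word clock on their own wires:

```python
# Illustrative only: biphase-mark coding, the S/PDIF line code that embeds
# the clock in the data stream. The receiver recovers its clock from the
# transitions, so any edge smearing picked up on the cable shows up as jitter.
def biphase_mark_encode(bits, level=0):
    """Return two half-cell levels per input bit.

    Every bit cell begins with a transition; a '1' adds a second
    transition mid-cell. That guaranteed edge density is what lets an
    S/PDIF receiver extract a clock from a single wire."""
    out = []
    for bit in bits:
        level ^= 1          # transition at the start of every cell
        out.append(level)
        if bit:
            level ^= 1      # extra mid-cell transition marks a '1'
        out.append(level)
    return out, level

if __name__ == "__main__":
    samples, _ = biphase_mark_encode([1, 0, 1, 1, 0])
    print(samples)  # [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
```

With I2S the same payload rides on a dedicated data line while the bit clock and word clock travel separately, so there’s no clock-recovery step for cable-induced edge noise to corrupt - the noise and jitter mechanisms are still there, they just enter differently.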
Is it possible to do MQA in the DS’s FPGA? Possibly, but ignoring all of the IP and company-to-company logistical issues, there’s still the fact that everything in an FPGA affects the noise of the whole: just the presence of the MQA code in the FPGA, even if it’s not being used, would cause more jitter and noise in the FPGA’s output. With a given amount of energy and time I can make more of a difference in the quality of the FPGA’s output by working on other features than by ameliorating MQA’s negative footprint (especially if the MQA code ever changes). It’s much more logical to put the MQA decoding upstream, where it only affects the streaming code. And FWIW we already know that some upstream MQA implementations (including those from MQA itself) won’t make the average DS customer happy. That doesn’t mean that we won’t find a good place to decode MQA, it just means that the obvious places don’t work well yet.
I can’t speculate about what other people like or don’t like - but I have a good idea of what things in the FPGA do to the average feedback about DS releases.