Is it a good idea to have the transformer outputs of the directstream driving another pair of input transformers?
At the moment I still have a preamp/buffer in the chain; functionally I’m also using it to do the XLR-to-RCA conversion, since (like many others) I prefer the XLR outputs of the DirectStream. My amplifiers are Audio Note Shinri, single-ended input only, 250 mV input sensitivity. I don’t need much gain; it gets plenty loud enough.
So, what if I go balanced out of the DirectStream and then use a 1:1 Lundahl input transformer (low gain) at the end of the XLR cable, directly into the amps? Or better yet, perhaps an 8:1 step-down (DirectStream on high gain) into the RCA inputs of my amps? I’m guessing the step-down will give me more dynamic punch and potentially drive my amps similarly to having an active buffer/preamp.
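To put rough numbers on that idea, here’s a quick sketch of what each step-down ratio would leave my amps with. The ~2.8 V RMS full-scale figure for the balanced outputs on high gain is my assumption, not a quoted spec:

```python
import math

# Back-of-envelope numbers only; ~2.8 V RMS full scale from the balanced
# outputs on high gain is an assumption, not a quoted spec.
v_source = 2.8          # assumed full-scale balanced output, V RMS
amp_sensitivity = 0.25  # Audio Note Shinri input sensitivity, V RMS

for ratio in (1, 4, 8):                      # 1:1, 4:1, 8:1 step-down
    v_out = v_source / ratio                 # a transformer divides voltage by its turns ratio
    attenuation_db = 20 * math.log10(ratio)  # the same step-down expressed in dB
    print(f"{ratio}:1 -> {v_out:.2f} V RMS ({attenuation_db:.1f} dB down); "
          f"amp reaches full output at {amp_sensitivity} V")
```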
My thinking is probably “no”, it’s not a good idea, because an input transformer is unlikely to present a high enough impedance to suit the DS output stage. You ideally want something of 10 kΩ or higher.
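To illustrate the loading side of it, here’s a quick sketch of how an N:1 step-down reflects the amp’s input impedance back to the primary. The 100 kΩ amp input impedance below is just an assumed figure for illustration:

```python
# An N:1 step-down reflects the downstream impedance back to its primary
# multiplied by N squared, and that reflected impedance sits in parallel
# with the transformer's own primary (magnetising) impedance.
def reflected_impedance(z_load_ohms: float, step_down_ratio: float) -> float:
    """Impedance seen at the primary of an N:1 transformer driving z_load."""
    return z_load_ohms * step_down_ratio ** 2

amp_input_z = 100_000  # assumed amp input impedance, ohms (illustrative only)
for ratio in (1, 4, 8):
    z = reflected_impedance(amp_input_z, ratio)
    print(f"{ratio}:1 into {amp_input_z / 1000:.0f}k -> {z / 1000:.0f}k reflected at the primary")

# The weak link is the primary impedance of the transformer itself at low
# frequencies, which is what can drag the load the DS sees below ~10k.
```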
250mV is an unusually low sensitivity! Sounds like a good pre-amplifier is called for if you really want to use that amp with your DS.
Unless you are running long (5 m+) cables to your amps and need the common-mode rejection of a balanced connection, I would simply run unbalanced using only the plus-phase output. If you are running long cables and need the noise rejection, then you want a 4:1 step-down transformer to maintain CMRR, such as the Jensen JT-10KB-D.
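Purely as a level check, the two options land roughly here (again assuming ~2.8 V RMS full scale from the balanced outputs, which is an assumption rather than a measured figure):

```python
# Rough comparison of the two hook-ups, assumed levels only:
v_balanced = 2.8               # assumed full-scale balanced output, V RMS
v_plus_phase = v_balanced / 2  # plus phase only: half the differential voltage
v_after_4to1 = v_balanced / 4  # full balanced signal through a 4:1 step-down

print(f"plus phase only, unbalanced : {v_plus_phase:.2f} V RMS")
print(f"4:1 step-down, balanced run : {v_after_4to1:.2f} V RMS (keeps the CMRR)")
```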
So I’ve tried some Ramm elite 8 cable - really good. It’s currently wired RCA-to-RCA, but the shielding is grounded at the source end, and it has significantly narrowed the performance gap between the XLR and RCA outputs on the DirectStream. So cabling does matter (of course it does).
Has anyone tried a direct comparison of pseudo-balanced (using just the positive phase from the XLR, as suggested above) vs RCA? Or, since I’ve grounded the shielding of the RCA cable at the source end, am I effectively there already?
Lower input sensitivity is a good thing, isn’t it? The DirectStream DAC doesn’t lose dynamic range with its volume control, and a feature I love is the ability to select the lower gain setting.
No, a blanket statement that “lower input sensitivity is good (or better)” would not be valid. Input sensitivity is essentially the maximum usable input voltage for the circuit: the level that drives it to full output. If you send it anything higher, the following circuit will depart from linearity (i.e. clip or distort) or, in extreme cases, be physically damaged.
You need to match the output level of your source to the input sensitivity of your amplifier so that the first doesn’t exceed the second. Most modern consumer audio products are designed around a 2 V RMS full-scale signal level, while pro audio gear runs up to around 12.3 V RMS in a balanced configuration.
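For reference, here’s how those two figures look expressed in dBu (a quick sketch, nothing more):

```python
import math

def dbu(volts_rms: float) -> float:
    """Level in dBu, referenced to 0.7746 V RMS."""
    return 20 * math.log10(volts_rms / 0.7746)

print(f"2 V RMS consumer full scale : {dbu(2.0):+.1f} dBu")
print(f"12.3 V RMS pro maximum      : {dbu(12.3):+.1f} dBu")  # i.e. about +24 dBu
```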
The downside of an amplifier having too high a sensitivity figure is that you may not get sufficient volume out of the business end. The downsides of it being too low are, first, that you have to do something to reduce input levels and, second, that it becomes even more critical to lower the noise floor on the input side.
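Putting numbers on that for this particular pairing (the source level is an assumption, as above):

```python
import math

# How much attenuation the gain matching works out to in this case.
v_source_max = 2.8        # assumed DirectStream balanced full scale, V RMS
v_amp_sensitivity = 0.25  # Shinri reaches full output at this level, V RMS

needed_db = 20 * math.log10(v_source_max / v_amp_sensitivity)
print(f"~{needed_db:.0f} dB of attenuation keeps digital full scale "
      f"at or below the amp's full-output point")
```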
That’s particularly true when connecting the DS DAC directly to an amplifier. In fact, you do lose dynamic range when using the digital volume control because you’re dropping the audio signal down towards the fixed noise floor of the sigma-delta modulator (SDM). This is why the additional analog attenuator exists: it sits right before the output sockets and drops the entire signal (including the noise floor) by 20dB… which is 90% in terms of voltage!
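And just to show the 20 dB / 90% arithmetic:

```python
import math

# The 20 dB analog attenuator expressed as a voltage ratio:
ratio = 10 ** (-20 / 20)   # -20 dB as a voltage ratio
print(f"voltage ratio : {ratio:.2f}  (a {100 * (1 - ratio):.0f}% reduction)")
print(f"check in dB   : {20 * math.log10(ratio):.0f} dB")
```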
Finally, the RCA sockets on the DS MkI are just connected to the positive phase of the XLR connectors. You can use the XLR connectors with single-phase wiring if that works for you in regard to shielding etc but there’s no difference at all in terms of the signal you’re taking.