Ted, I feel that is always the wisest decision. I think you personally might feel that way either due to being intelligent or due to being a seasoned engineer. Either is a compliment from me, of course.
I do not judge anything, really, unless it is done double blind.
I may think something sounds very good, or bad. Then, upon double-blind testing (DBT), I am rather surprised.
I am still wondering if anyone knows whether the MSB Select DAC II is in fact a ladder DAC. If so, color me stupid. I would really like to know this. I will just ask MSB Monday, I guess. I am so dumb I have no clue.
Okay, ELK. I understand why you often question my sanity. I am truly a fool. I always thought it was a custom chip DAC. You would think a man that spent that money would have known what he had purchased. Duh. Well, nonetheless I must say it sounds very good to me. Regardless, I must now go hide my head in the sand. I am a real idiot.
"When it comes to the business end of decoding digital audio, MSB don't do delta-sigma. Theirs is a preference for multi-bit solutions, not taken off the shelf but built in-house and from the ground up using high-precision, laser-trimmed resistors. Eight of MSB's own 'Hybrid' [R2R] ladder DAC modules can be found slotted side-by-side inside the Select DAC II."
I am honestly amazed at how inept I am. I cannot believe that I did not know this. That's all it is? Why so expensive then? I mean, yes, the sound does justify it. However, you would think Metrum could do this then?
Oh I remember the Young DAC. That one was nice too.
All that laser trimming of resistors; and the Audio Note one has some kind of servo circuitry to get down to 24-bit accuracy using the resistor ladders. The precision required for resistor-based D/A conversion is very difficult to achieve…
One thing I am not aware of is how MSB and other companies handle the issue (in their R2R implementations) that the resistors will invariably become non-linear at higher temperatures, and that will impact the sound quality.
dCS has said in a public forum that this problem is not solvable, and hence they think R2R is an impractical approach and that oversampling is the right way to go. Probably many other DAC manufacturers think the same way. Maybe even Ted @tedsmith.
Remember, building narrower PCM DACs (i.e. DACs with few input bits) is easier than building a 16-24 bit PCM DAC. Most 16-24 bit "PCM" DACs these days use (usually multiple) narrower PCM DACs and oversampling (e.g. most DAC chips, MSB, dCS, the DS, etc.)
Also, not all ladder DACs are R-2R DACs. Thermometer DACs use long strings of identical resistors arranged as a voltage divider, so drift of values over temperature doesn't matter (if they are all at approximately the same temperature…)
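A toy numerical sketch of that point (illustrative values only, not from any real DAC design): in a resistor-string (thermometer) DAC the output is set by a ratio of identical resistors, so a uniform temperature drift cancels out.

```python
# Toy model of a resistor-string (thermometer) DAC used as a voltage
# divider. All values are made up for illustration.

def string_dac(code, n_steps, vref, r_unit):
    """Output voltage when `code` of `n_steps` identical resistors sit
    below the tap. Only the RATIO of resistances matters."""
    total = n_steps * r_unit
    below = code * r_unit
    return vref * below / total

# Uniform warm-up: every resistor's value scales by the same factor,
# so the divider ratio (and thus the output) is unchanged.
drift = 1.002  # +0.2% resistance, identical for all resistors
v_cold = string_dac(37, 256, 5.0, 1000.0)
v_hot = string_dac(37, 256, 5.0, 1000.0 * drift)
print(abs(v_cold - v_hot) < 1e-9)  # True: common drift cancels
```

An R-2R ladder, by contrast, depends on precise 2:1 matching between different resistors, so any mismatch in how they drift shows up directly as bit-weight (linearity) error.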
The thing I wonder about R2R is how they implement non-oversampling filters without a lot of ugly problems. Everything I know says the higher the sample rate and filter bandwidth, the better the sound, à la DSD. 44.1kHz without oversampling needs pretty gnarly filters, especially on the analog side.
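Rough arithmetic behind the "gnarly filters" point (textbook numbers, not any specific product): at 44.1 kHz without oversampling, the analog reconstruction filter must go from flat at 20 kHz to strongly attenuating by about 22.05 kHz, while 8x digital oversampling pushes the remaining images out near 352.8 kHz, leaving the analog filter a huge transition band.

```python
# Transition-band arithmetic for a reconstruction filter (illustrative).
fs = 44_100          # CD sample rate, Hz
passband = 20_000    # audible band we want to keep flat, Hz
nyquist = fs / 2     # 22_050 Hz: NOS analog filter must attenuate by here

# With 8x oversampling, a digital filter removes the nearby images, so
# the analog filter only needs real attenuation near the new image region.
os_factor = 8
first_image = os_factor * fs - passband  # 332_800 Hz

print(nyquist / passband)      # ~1.10: very steep analog filter needed
print(first_image / passband)  # ~16.6: a gentle analog filter suffices
```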
But then, I used to have a Peachtree Audio DAC with an NOS/NAS mode, which I thought actually sounded better (more analog). But there was some loss of the highs and I could tell some distortion from the aliasing. I'm guessing there was an analog filter involved.
I simply looked at their web site and noted that they say
"We can operate our current DAC modules at up to 6 MHz for PCM and up to 50 MHz for DSD."
Perhaps I jumped to a conclusion or perhaps there's some different use of terms (e.g. perhaps they do upsampling outside of what they call their DAC.) But designing something to operate at many times the sample rate of any known source seems like it's designed for upsampling. I don't want to argue about it. I do agree with you that doing upsampling so you can use a simpler analog reconstruction filter is almost a no brainer. It's also the case that for lower rate PCM and without using upsampling it's impossible to build an analog reconstruction filter that both avoids aliasing and gives all of the frequencies represented in the input.
I don't honestly know. I think they use multiple R2R DACs inside of it, now that I know what it is. I do know that they boast about "many" DAC modules in it. Maybe that is how they do upsampling? Is it additive?
MSB says "We can operate our current DAC modules at up to 6 MHz for PCM and up to 50 MHz for DSD. This gives us lots of headroom for future sample rates and audio formats."
It seems their design is meant to be future-proof for whatever the future holds for sample rates. That seems different from it necessarily being designed for upsampling. Though obviously one's server could upsample prior to the MSB…
Out of curiosity, what was the process you went through to conclude that upsampling to DSD1024 produced the best sound? And how was "best" defined? Was "best" being the most like analog/vinyl, or was it different criteria? If different, what was it? And what part did measurements vs. listening tests play? And were those tests blind?
The DS upsamples to 56.448MHz, which is DSD1280. 56.448MHz was chosen simply because it's the LCM (least common multiple) of 352.8kHz and 384kHz. Any smaller sample rate would require a fractional sample rate conversion. It's simply less math to go to 147 × 384k and 160 × 352.8k. No listening or proof needed.
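That arithmetic checks out with nothing but the standard library (Python 3.9+ for `math.lcm`):

```python
import math

# Highest standard PCM input rates in the 44.1k and 48k families (8x each).
pcm_441 = 352_800
pcm_48 = 384_000

master = math.lcm(pcm_441, pcm_48)
print(master)             # 56448000 -> 56.448 MHz
print(master // pcm_48)   # 147, an integer upsampling ratio
print(master // pcm_441)  # 160, also an integer upsampling ratio
```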
Initially I went to a sample rate that was 10× the rate of DSD64, but we had the room to go to 20 times, so I did: it adds the feature of playing DXD (roughly 32 bit 352.8kHz PCM). No listening or proof needed.
After the upsampling to 20× (or 10×) DSD64 I downsample to quad-rate DSD (double-rate DSD in the 10× case.) The DS hardware uses a clock that supports up to DSD512, but the code in the FPGA to support DSD512 hasn't been written (and likely won't ever be on the DS.) Put another way, I convert to the highest rate I could with the software budget I had at the time. Higher output sample rates allow a gentler digital filter to remove more of the high frequency noise. Obviously a good thing: no proof or listening required.
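As a sanity check of the rates in the posts above (my arithmetic, not an official spec): DSD64 is 64 × 44.1 kHz, 20× that is the 56.448 MHz internal rate, and quad-rate DSD (DSD256) divides into it evenly.

```python
# Rate chain arithmetic for the DS as described above (illustrative).
DSD64 = 64 * 44_100       # 2_822_400 Hz
internal = 20 * DSD64     # 56_448_000 Hz: matches the 56.448 MHz figure
quad_dsd = 4 * DSD64      # 11_289_600 Hz (DSD256, quad-rate DSD)

print(internal)                  # 56448000
print(internal % quad_dsd == 0)  # True: whole-number downsampling
print(internal // quad_dsd)      # 5
```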
Before you ask: I chose the system clock simply because it was the lowest phase noise clock available. Lower phase noise directly translates to lower distortion in the output. No proof or listening required.
FWIW, customers almost universally praise the changes in sound quality brought by lowering noise. I kind of doubt that most do double blind tests. If they like a given release they use it; if not, they use the previous release. The proof is in the pudding, but it's also what simple math indicates.
Though we do listen to multiple versions of identical bit outputs of a potential new release for the one that fiddles with the sound the least, the FPGA code is never designed around any particular sound, tone or whatever. Instead it's the result of the most accurate (mathematically correct) implementation I can do at a point in time: if I find a bug I fix it; if I realize that I can use a new technique to get more accuracy, I do. No proof or listening required. I also find ways to lower noise with each release. Lower noise gives a more accurate rendition of what's encoded in the input bits. A proof shouldn't be required, nor is a listening test. More accuracy has always sounded better to the majority of customers.
FWIW I've released a couple of software releases without ever listening to them (my system was on the fritz.) It doesn't matter: I know they sound better because they have lower noise and better fidelity to the input bits.
The goal of the DS is simple: render the sounds described by the input bits as accurately as possible.
Guys, don't feel that MSB is so special. It is just expensive. I know, easy for me to say. Really, be happy with what you have. I honestly hope that the TSS is better; I would like to see the little guy win.
On that note, the DSS is simply incredible. It is this old and he is still supporting it. Try that with a phone or laptop. I think it sounds outstanding. If you clean up the USB or I2S, it sounds even better.