In a previous post, sfrounds1 reports unhappiness with his new DS Sr. with the Snowmass upgrade. The vast majority of responders cite insufficient break-in time as the cause, even though he already has 60 hours on it. Some responders indicate as much as 500 or even 1000 hours are required.
If such a lengthy break-in is necessary, should the component ship with a manual that cautions the owner that extended break-in is needed before the DAC will live up to its potential? Does the engineering team that designed the DAC acknowledge that so extensive a break-in is necessary?
There are two different break-ins being talked about. The DS requires a huge amount of time to break in initially - some hear something magic, or at least pleasantly different, right out of the box; others don’t enjoy the sound until the unit has hundreds of hours on it. With a new DS and a broken-in DS side by side for comparison, I’m still hearing noticeable changes 200 hours in. The changes out at 500 hours are smaller than my day-to-day hearing changes or the changes brought by power-quality variations over a day. I have no doubt that many have more sensitive ears and wait longer.
Each release has the potential for needing a little burn-in of its own (insofar as it’s using hardware that was never used by previously installed releases). That is a much smaller length of time in any case, and perhaps not noticeable at all.
Was wondering what’s meant by this. Are there literally circuits/components/devices on the board that are un-utilized by some builds of the FW but which ARE utilized by others?
To the general notion that a firmware update/upgrade requires “burn-in,” I would have to assume that’s just mistaken, even if for some strange reason the current FW uses different electronics inside the chassis. It might take our ears a while to get used to the changes, but I cannot imagine there is any physical (as in, related to physics) element to this… or am I totally missing something?
The FPGA is a sea of hardware that gets used differently with each release of the software - if a previous release wasn’t using, say, a multiplier or a block of RAM, then when a new release starts using it there can be a (small) “burn-in”. This kind of stuff is pretty twitchy and obviously depends on how long which releases were running. I suspect that for most people the difference isn’t striking, but after hearing the differences when using two identical compiles of the same source, it’s obvious that this stuff is real. (To be clear, the FPGA with either of the two compiles would generate the exact same bits given the same input bits, i.e. the only differences are which gates are being used for each particular function in the FPGA.)
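To make that last point concrete: this isn’t the DAC’s actual FPGA code (that lives in an HDL, and the details aren’t public), but here’s a toy Python sketch of the idea. Two structurally different implementations of the same logic function produce identical output bits on every input - they differ only in which “gates” do the work, which is analogous to two compiles of the same FPGA source placing the same function onto different hardware.

```python
from itertools import product

def majority_a(a: int, b: int, c: int) -> int:
    # "Compile A": sum-of-products form - one arrangement of AND/OR gates.
    return (a & b) | (a & c) | (b & c)

def majority_b(a: int, b: int, c: int) -> int:
    # "Compile B": factored form - the same function, built from a
    # different gate structure.
    return (a & (b | c)) | (b & c)

# Exhaustively check that the two "placements" are bit-identical
# for every possible input combination.
for a, b, c in product((0, 1), repeat=3):
    assert majority_a(a, b, c) == majority_b(a, b, c)

print("Identical output bits from two different gate arrangements.")
```

Functionally the two are indistinguishable - any claimed audible difference between such compiles would have to come from something other than the logic itself (which gates are exercised, and any “burn-in” of previously idle hardware, as described above).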