The next DirectStream update?


Super impressive room, and of course system. About your ceiling height, would it have been possible to lower, dig down, your floor maybe a foot? Low ceiling is my problem too.


Probably not; we’re one of the few houses in the neighborhood without springs or running groundwater in the basement. (We didn’t know about our neighbors’ issues when we bought the house, but we were lucky on that account. In any case we put French drains around the perimeter of the house, which all drain into a 55-gallon sump at the bottom of our lot, which then pumps any ground water, gutter water, runoff, etc. into the sewer in the front.)

On the other hand, the thought of lowering the floor to effectively raise the ceiling didn’t cross our minds.


We have a water problem too.


That’s funny :slight_smile:


Ted - and the other stones sufferers here - will, I’m sure, be thrilled to learn that, under a microscope, they’re beautiful…


So Ted,

Just curious (darn these forums! No peace!)…

As you work on the software for the next update… or your new DAC, what is it that you are targeting? Bug fixes? Known issues? Experimentation only? Known sound deficiencies?

I used to write software and I know I always had dead bodies stinking up pieces of buried code.

Bruce in Philly


Yes :slight_smile:

For the DS software update I’m trying to rewrite the PCM upsampler (I mentioned that above somewhere). That should both help PCM and lower FPGA generated jitter/noise overall, which should help the sound quality of everything. Like always, there may be a gotcha that stalls out that new feature. The code for 44.1k is limping along, but that means almost everything is working: the new data flow control is working, the new upsampler is close. But there’s still significant work to do to modify a few things to deal with the new data: fit in the new filter coefficients for 88.2/96k and 176.4/192k, correct the deemphasis filter, test, test, test… Since this is new code I’ll probably not be doing much archeological cleanup on other code, but:
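The structure Ted describes - get the samples to the higher rate, then low-pass FIR filter them - can be sketched in a few lines. This is a toy illustration only, not the DS’s FPGA code: the 2x ratio, the 2-tap filter, and the function name are all made up.

```python
# Toy sketch of FIR interpolation: zero-stuff by factor L, then low-pass
# filter to remove the images the stuffing creates. Illustrative only.

def upsample_fir(samples, L, taps):
    """Zero-stuff by L, then convolve with FIR taps (gain-compensated by L)."""
    stuffed = []
    for s in samples:
        stuffed.append(s)
        stuffed.extend([0.0] * (L - 1))   # L-1 zeros between real samples
    out = []
    for n in range(len(stuffed)):
        acc = 0.0
        for k, h in enumerate(taps):
            if 0 <= n - k < len(stuffed):
                acc += h * stuffed[n - k]
        out.append(acc * L)               # make up for the 1/L zero-stuff loss
    return out

# A trivial 2-tap averaging filter, 2x upsampling of a short ramp:
print(upsample_fir([1.0, 2.0, 3.0], 2, [0.5, 0.5]))
# [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
```

A real interpolation filter has hundreds of taps; the 2-tap one here just makes the data flow visible.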

Tho I don’t know if I can make a difference, I’ll revisit the pops going from PCM to DSD and vice versa. The new upsampler will have different timings for that transition, so the code that tracks the PCM/DSD boundaries needs to be revised. This won’t (directly) address the pops that some players add in DSD to DSD seeking, or that they add on PCM/DSD transitions or sample rate changes, tho I’ll be looking over the auto-mute-on-transitions code.

If all goes to plan, the PCM sound stage may deepen a little and transient timing may improve a little. There shouldn’t be any apparent change in frequency response, but PCM should be a little more cohesive. Insofar as I lower FPGA noise, you’ll get the expected blacker background and its effects (but there probably won’t be a measurable level change in the noise floor.)

Obviously the fix for blurbles in loud/dynamic music that Redcloud introduced will be included. I’m (once again) hoping to get a UI for setting the I2S signal polarities for each input.

At a slower rate I’m working on using the newer tools for the new DAC’s newer FPGA.


Very excited about an i2s polarity setting!


Ted, are these “pops” what some of us are hearing when we are using the DMP as a player through the DSD? I just commented on these in the forum topic on the DMP’s latest firmware.


The only pops I’m talking about there are those exactly at the transitions: a transition from DSD -> PCM, from PCM -> DSD, from one sample rate to another, when a clock starts or stops… They usually are quiet and in general they aren’t caused by the DAC, but by the way people might have processed their DSD files, the way a player sends bits as it changes from one sample rate to another, or players that go to PCM for a moment between DSD tracks. If the DSD’s FPGA detects a transition it ramps the volume down until the new stream is going and then ramps it back up. I’m pretty sure the pops that most people are talking about on the DMP are a separate issue.
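The ramp Ted describes can be sketched as a gain envelope applied around the transition sample. The linear shape, the lengths, and the names below are made up for illustration; the actual FPGA behavior isn’t public.

```python
# Toy gain envelope around a stream transition: ramp the volume down going
# into the transition and back up coming out of it. Illustrative only.

def mute_ramp(samples, transition_index, ramp_len):
    """Scale samples so gain falls going into the transition and recovers after."""
    out = []
    for i, s in enumerate(samples):
        if transition_index - ramp_len <= i < transition_index:
            gain = (transition_index - i) / ramp_len      # ramping down
        elif transition_index <= i < transition_index + ramp_len:
            gain = (i - transition_index + 1) / ramp_len  # ramping back up
        else:
            gain = 1.0                                    # steady state
        out.append(s * gain)
    return out

# Constant full-scale signal with a transition at sample 4, 2-sample ramps:
print(mute_ramp([1.0] * 8, 4, 2))
# [1.0, 1.0, 1.0, 0.5, 0.5, 1.0, 1.0, 1.0]
```

With realistic ramp lengths (many samples) the gain gets close enough to zero that whatever garbage sits at the stream boundary is inaudible.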


Hi @tedsmith
I have been reading some of the theory surrounding the conversion process/FPGAs/non linearities etc. and I would like to know your opinion on the following (if you can find the time of course):

  1. Would it be possible to eliminate some conversion steps from PCM to DSD?
  2. Could you further improve on linearity in the DSD to analog conversion?
  3. Could you use a software based NOS approach in the future for DSD or PCM sources (wild idea of course…)?
  4. Could you further distribute the FPGA load to prevent overload/non linearities? Or will a faster FPGA be required to achieve this objective?
  5. Could the meta stability of the entire system be improved by:
  • further distributing the FPGA load
  • a more stable / faster clock / multiple clocks more dedicated for their purpose
  • a more stable / faster FPGA
  6. Do you have full insight into critical FPGA loads that currently disrupt the conversion processes?
  7. Can you currently simulate overload issues with your C simulations sufficiently?
  8. Do you think that with more dedicated linear power supplies, non-linearities/overloads could be prevented?

Thanks for all the good work!


All in all I think I’ve missed the point of most of your questions, but I hope I give enough detail to indirectly answer them.

I’ve been doing that from release to release, and that’s exactly what the current effort is all about. The huge upsampling ratio of 256 from 44.1k to 11.2896MHz isn’t something the Xilinx tools expected so I’ve always had to use multiple approaches. Now that I’m writing my own upsampler I’m not constrained by the existing FIR filter compiler.

Single bit analog conversion is inherently linear. That’s among the biggest benefits of using a single bit analog conversion vs., say, the five or six bit conversions that most chip DACs use. That doesn’t mean the system as a whole is as linear as it could be; I’ve fixed a bug or two in the past and indeed Redcloud was the biggest step in that direction. Some of my other experiments that are in progress will also look ahead further in the sigma delta modulator to more accurately track the input, which will help system linearity overall.
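A toy first-order single-bit modulator shows the intuition: the output has only two levels, so there are no mismatched step sizes to create nonlinearity, yet the time average of the bits still tracks the input. This is a sketch only; the DS uses a much higher-order modulator.

```python
# Toy first-order single-bit sigma delta modulator (for intuition only).

def sdm1(samples):
    """First-order single-bit modulator; input samples in [-1, 1]."""
    integ, fb, out = 0.0, 0.0, []
    for x in samples:
        integ += x - fb                    # accumulate error vs. the fed-back bit
        fb = 1.0 if integ >= 0 else -1.0   # quantize to a single bit
        out.append(fb)
    return out

bits = sdm1([0.5] * 1000)
print(sum(bits) / len(bits))   # the +/-1 bitstream averages to the 0.5 input
```

Because the analog side only ever has to reproduce two levels, any error in those levels is a pure gain/offset error, not a distortion - that’s the inherent linearity.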

I’m always a little confused by what people mean when they say NOS - literally that means no over sampling, but many really are talking about no reconstruction filter. Since the hardware is designed around the passive output filter, that’s not going to change (and it would sound like crap without a filter: the aliasing from multiple MHz of loud noise into the audio band would swamp the whole signal.)
Similarly, not doing oversampling for PCM to single bit DSD (which is all the output hardware is capable of) doesn’t make sense. Perhaps if you tell me what you intend the benefit of an NOS approach to be, I could give a more constructive answer.
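The arithmetic behind “it would sound like crap without a filter” is simple: a tone at f Hz in 44.1k material leaves spectral images around every multiple of the original sample rate after upsampling, and only the reconstruction filter removes them. Illustrative numbers:

```python
# Image frequencies of a 1 kHz tone in 44.1k material after upsampling:
# images sit at k*fs +/- f for every integer k. Without a reconstruction
# filter all of that ultrasonic energy stays in the output.
fs, f = 44100, 1000
images = sorted(k * fs + s * f for k in (1, 2) for s in (-1, 1))
print(images)   # [43100, 45100, 87200, 89200]
```

With a 256x output rate there are hundreds of such image bands, which is the “multiple MHz of loud noise” the filter exists to kill.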

That’s a part of what I do with each release: the fewer resources I use in the FPGA the more the design spreads out (at least in a relative sense.) Having fewer transitions/unit area in the FPGA causes less FPGA generated noise, which is good. There aren’t “overload/non linearities” in the FPGA, the tools keep ground bounce, power draw, etc. from causing any errors, but they don’t really care about the noise generated as long as it doesn’t cause any bit errors. With each release I try to spread the work out smoother in time so that there will be less noise generated as well.

I’m not quite sure what you are asking about here. Any time there are multiple clocks interacting which aren’t directly derived from each other you need to worry about metastability. It’s a big part of FPGA software design. Most of the issues are resolved internally by design: everything runs synchronously off of the sample clock and simple multiples of it, so no synchronization of the data flow is needed. With the incoming bits from the digital inputs, the first thing I do is sample them with a very fast internal clock (with about a 6ns resolution), then they are handled synchronously thruout.
If instead you were alluding to the chaos of the system (the vast sensitivity of the output to small perturbations over time), that’s an inherent part of sigma delta modulation: quantizing to one bit means virtually infinite gain in a feedback loop (a single bit change in a 70 bit register can affect all downstream bit values.) Still, looking further into the future during sigma delta modulation will lessen the chaos a little.
Adding more clocks just causes more metastability issues (any time two clocks interact you can have problems.)
Perhaps you are talking about noise generated in the FPGA. The faster a given technology runs, the more noise it makes. Newer generation FPGAs lower noise by running on lower voltages and/or they can do more work while generating the same noise level. A newer/bigger FPGA should allow less “work” per unit area and hence allow lowering the noise. Also, as I mentioned above (and in earlier release “notes”) I’m trying to lower the amount of processing needed with each release to help too.
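The “virtually infinite gain” Ted mentions a couple of paragraphs up is easy to demonstrate: two copies of a toy first-order modulator whose internal state differs by one part in a trillion soon emit different bitstreams. (Toy code for intuition; the DS’s modulator is far more complex.)

```python
# Sensitivity demo: a one-part-in-10^12 change in the modulator's internal
# state changes the output bits. Toy first-order modulator, illustrative only.

def toy_sdm(samples, integ=0.0):
    """Toy single-bit modulator; `integ` seeds the internal state."""
    fb, out = 0.0, []
    for x in samples:
        integ += x - fb
        fb = 1.0 if integ >= 0 else -1.0
        out.append(fb)
    return out

a = toy_sdm([0.5] * 100)                  # nominal state
b = toy_sdm([0.5] * 100, integ=-1e-12)    # tiny perturbation
print(a == b)   # False: the perturbation reaches the output bits
```

Both streams still average to the same value - the chaos is in *which* bits flip, not in the recovered audio level, which is why the design can tolerate it.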

I’m not quite sure what disruptions you may be referring to. If you are referring to FPGA generated noise, I have a reasonably good mental model but only the tools know for sure and they aren’t telling :slight_smile: I’m learning more with each release and I’ve talked to Xilinx engineers who have been helpful (it took some work to find engineers who understood that I was really talking about hearing the real analog jitter and noise in the FPGA, but it turns out that that problem is very similar to keeping the noise down with newer bigger FPGAs so they can keep the clock rates up and the power down.)

Are you talking about the Redcloud bug where there was a register overflow with very loud signals? Then sort of: my C++ code measures the maximum values in each of the registers used in the simulations and reports the bit widths needed - of course that means I need to run the C++ simulations over all possible things we might listen to. The bug in Redcloud was caused by me changing a place that added four values together (and hence needed two more bits than any input used) into a place that added five values, and I forgot to accommodate the possibility of the (rare) carry. In general I can assume the worst case bit streams and calculate the bit growth well, I just flubbed that one.
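The forgotten carry is just bit-growth arithmetic: the worst-case sum of N values needs ceil(log2(N)) extra bits on top of the operand width. A quick check (the 16-bit width here is an example, not the DS’s actual register size):

```python
# Bit growth when summing N values: four addends need 2 extra bits,
# but a fifth addend pushes the requirement to 3 extra bits - the
# rare carry that caused the Redcloud register overflow.
from math import ceil, log2

def sum_width(n_terms, term_bits):
    """Bits needed to hold the worst-case sum of n_terms unsigned term_bits values."""
    return term_bits + ceil(log2(n_terms))

print(sum_width(4, 16))   # 18: four 16-bit values always fit in 18 bits
print(sum_width(5, 16))   # 19: the fifth addend costs a whole extra bit
```

The worst case confirms it: 5 × (2^16 − 1) = 327675, which needs 19 bits, while 4 × (2^16 − 1) = 262140 still fits in 18.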

Sorry, once again I’m not quite sure which non-linearities/overloads you might be talking about. The digital path on the inside of the FPGA always has an extra bit on the top to accommodate any small rounding errors in filters etc. and the analog hardware has a built-in factor of two of output range to give a sense of ease to the presentation. Allowing the DAC to go to a volume of 106 with the attenuator engaged allows the users to use up about half of that extra headroom and we’re running the digital part a little louder than the original hardware was designed for, but none of this is an issue if you aren’t running Alice in Chains at 106 all of the time (perhaps people like the tube like soft compression with that loud music :slight_smile: )



Ted gave us what we have…
His shirts (in various PS Audio YouTube videos) reminded me not to sweat the “small stuff”.
Just one of his gifts to the audio community.

We’re in good hands, guys.



You are the best, Ted. Brilliant !


I generally only understand 20% of the technical explanations Ted gives but I do enjoy reading them. We are all lucky he works for PS Audio. :grin:


What I understand is that the DSD is the best money in audio I’ve ever spent. There’s nothing even close!


It is the gift that keeps on giving! Ted keeps giving us a better sounding DAC with each update! It is so great to be able to get substantial improvements without having to spend big $. I have spent money with other companies for updates that resulted in very subtle improvements. This is so much more satisfactory!


Ted, I’m getting the feeling that you rock out to Alice in Chains?


I’ve definitely blown some circuit breakers with it (not at home but at a couple of audio stores) :slight_smile:
Porcupine Tree is good too. It’s just great to hear all of the voices and other things in the wall of sound, each in their own place and easily distinct from the others. I listen to a lot of other stuff as well, but those two groups are good for hearing each thing in its own space in a mess of music.

Here’s a list of items I use to test each release:


I find rock music is some of the easiest material for exposing problems: it either turns to mush or becomes harsh when a system has an issue.

When heavy, dirty guitar sounds really good, I typically find everything else sounds good too. The exception is some modern rock tracks that are DR4 and clipped. That said, I find the DR number by itself to be a poor indicator of SQ. I have been surprised by Chevelle’s last 2 albums, La Gargola and The North Corridor: DR5/6 material, but I had no idea up front till I scanned it later. They sound good because the recording chain, via producer Evil Joe, is really good, thanks to him tweaking mic placement and analog things before jumping to EQ.