The next DirectStream update?

Man, that’s great news Paul… can’t wait… and thanks guru Ted!!!

Thanks, Ted and Paul, I did not think it could get any better than Redcloud!

Thank you to Ted for keeping us moving forward! I can’t wait to hear his latest creation.

Wooo Hooo!!! Can’t wait!!

How exciting! We’re getting an upgrade for free. How often does that happen in one’s life? That’s one of the real virtues of PS Audio.


Hey Ron, I got a lovely upgrade when I married my second wife… not free though… :rofl:


What’s better than that, Mark? Lucky man!


bootzilla - I just went from Mconnect streaming to a Roon Nucleus+ w/ Sbooster plugged into the P20. Holy crap, is it amazing. It took 30 minutes to gather all the metadata (initially), and I only have 6,000 files on my NAS. I LOVE STREAMING! I am just blown away! Another upgrade and I would pass out…

In addition, I have not even loaded the “sweet” BHK tubes yet… I want to wait a year to get it all settled out and then hit it… To think I was going to add a Basis turntable to the system. I am so glad I don’t have one sentimental bone in my body… I dodged all those time-crushing components (turntable, cartridge, preamp, table isolation platform, cleaning brushes, cleaning machine… and endless time cleaning…). WOW, that would have been serious drama.

In addition, I have had my system (including the DMP) for 6 months and have bought (3) SACDs. I thought I would buy a lot of SACDs because the streaming sound quality would be degraded compared to the DMP, but I stream 99% of the time and on occasion 1% SACD or CD. Since I have just re-ripped (dBpoweramp) my 275 CDs, I can’t wait for this upgrade. I am so glad I have such a small collection of CDs and zero vinyl (sold 20 years ago). Ted is a genius and Paul is the man!

Also, I just ordered the Sbooster Ultra for the MKII and will install that next week. I thought I might have an issue with the MKII’s power output while the Nucleus+ was streaming with a DSP filter. After 15 hours of streaming with DSP engaged (parametric, not convolution filter), the Sbooster was not even the slightest bit warm.


I usually do the DS Sr first, then it takes only a day or so to build the Jr version (most of the time.) Most of the files are shared, but some have a few differences that I need to be careful with. Then there’s the time it takes to get the 20 builds and the day or two (or three?) that it takes for PS Audio to listen…


I am looking forward to this new release. Out of all my PSA kit the DS is my favourite. First PSA product I bought, and the catalyst!


The biggest new item is the new PCM upsampler: it uses fewer resources than the old PCM upsampler, which allows more freedom in doing everything else in the FPGA.

In the new upsampler I can get more accuracy in the coefficients (with fewer actual bits) because I know my data better than the Xilinx tools do. I am also using a more sophisticated algorithm to round the coefficients: it iteratively tests the performance of the filter with each possible rounding of each individual coefficient - this takes some time.
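To give a feel for that “test each possible rounding of each coefficient” idea, here is a minimal sketch in Python. Everything in it is an illustrative assumption (the 127-tap toy low-pass, the 18-bit word length, the stopband-peak metric, the greedy sweep); it is not the actual upsampler tooling, just the shape of the technique.

```python
# Greedy, iterative rounding of FIR coefficients to a fixed word length.
# All parameters below are made up for illustration.
import numpy as np
from scipy.signal import firwin, freqz

BITS = 18                          # hypothetical coefficient word length
SCALE = 2 ** (BITS - 1)

taps = firwin(127, 0.45)           # toy low-pass prototype (not the real upsampler)

def stopband_peak(coeffs):
    # Worst-case magnitude above a toy stopband edge at 0.55 * Nyquist.
    w, h = freqz(coeffs, worN=4096)
    return np.max(np.abs(h[w / np.pi > 0.55]))

# Start from plain nearest rounding...
quant = np.round(taps * SCALE) / SCALE
best = stopband_peak(quant)

# ...then sweep the coefficients a few times, nudging each one by one LSB toward
# its other rounding neighbor and keeping any change that improves the filter.
for _ in range(8):
    improved = False
    for i in range(len(quant)):
        trial = quant.copy()
        trial[i] += (1.0 if taps[i] > quant[i] else -1.0) / SCALE
        peak = stopband_peak(trial)
        if peak < best:
            quant, best, improved = trial, peak, True
    if not improved:
        break

print(f"stopband peak after greedy rounding: {20 * np.log10(best):.1f} dB")
```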

I’ve widened the data path from the PCM upsampler through the rest of the DSP chain to the SDM: I added another bit on the top (which should never be used, but…) and three more bits on the bottom (since I have them after the upsampling.) This doesn’t really add any accuracy (the input only has, at best, 24 bits of accuracy after all) but it’s better than rounding/dithering back to 24 bits.
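Here is a toy numeric comparison of why carrying a few extra bits through the chain beats rounding back to 24 bits at every stage. The gains, the signal, and the bit widths are made-up illustrative values, not anything from the actual DSP chain.

```python
# Compare rounding to 24 bits after each stage vs. keeping 3 extra LSBs
# through the intermediate stage. Assumed values throughout.
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

x = quantize(rng.uniform(-0.5, 0.5, 100_000), 24)   # a 24-bit "input"
g1, g2 = 0.73, 1.21                                  # two arbitrary gain stages

# Reference: full double precision through both stages.
ref = x * g1 * g2

# Path A: round back to 24 bits after every stage.
a = quantize(quantize(x * g1, 24) * g2, 24)

# Path B: carry 3 extra bits (a 27-bit inner path), round to 24 only once at the end.
b = quantize(quantize(x * g1, 27) * g2, 24)

print("round every stage :", np.sqrt(np.mean((a - ref) ** 2)))
print("wider inner path  :", np.sqrt(np.mean((b - ref) ** 2)))
```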

I’ve added a few general workarounds for ticks and pops: how much they help in any system depends on the source hardware/software.

  • The FPGA now accepts PCM 0’s as DSD 0’s in DoP; this can smooth out DSD seeking in some programs, like newer foobar2000 releases.
  • I explicitly clear all buffers in the DSP path from the time a transition is noticed until the new data reaches the sigma delta modulator.
  • I changed the ramp-up time after a transition to around a second and made the “ramp down” time instantaneous. This will also help you notice when the source/wires/network, etc. is causing skipping that previously might have just caused a little hash in the output. Now you’ll know you have something to fix (a rough sketch of the ramp behavior follows this list).
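As a rough sketch of that last item, here is what an “instant mute, ~1 second ramp back up” stage looks like in Python. The class name, the 48 kHz rate, and the block-based structure are illustrative assumptions, not the FPGA implementation (which also clears the DSP buffers on a transition).

```python
# Instant mute on a detected transition, then a slow linear gain ramp back up.
import numpy as np

class TransitionMute:
    def __init__(self, sample_rate=48_000, ramp_seconds=1.0):
        self.step = 1.0 / (sample_rate * ramp_seconds)  # gain increment per sample
        self.gain = 1.0

    def on_transition(self):
        # A format change / seek / dropout was noticed: mute immediately.
        self.gain = 0.0

    def process(self, block):
        out = np.empty_like(block)
        for i, sample in enumerate(block):
            out[i] = sample * self.gain
            self.gain = min(1.0, self.gain + self.step)  # ramp back up slowly
        return out

# Usage: steady audio passes at full gain; a transition mutes instantly, then fades in.
mute = TransitionMute()
steady = mute.process(np.ones(4))     # [1, 1, 1, 1]
mute.on_transition()
fading = mute.process(np.ones(4))     # tiny values climbing back toward 1 over ~1 s
print(steady, fading)
```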

The I2S input polarity UI would be too much of a distraction in PS Audio engineering at this time. I’ve decided to put it off another release… Sorry.

I rebalanced when some of the work is done in the FPGA to lower FPGA-generated noise.

Paul and Darren have only heard the first cut at the first item (which always filtered to 22k even for 192k inputs.) I suspect that the newer versions of the first item and the other things will further increase sound quality.


I’m enjoying my DSD more and DSD will improve. But I think the trend is more like “the lower the input rate, the more improvement”: 44.1k/48k will improve more than 88.2k/96k, both of which will improve more than 176.4k/192k…


@tedsmith

Would it make sense to split the signal by channel and run each channel to its own FPGA to reduce the load/noise generation of each chip?

Also, out of naïve curiosity, could you handle the internal data processing like a balanced cable does: duplicate the signal 180° out of phase, run all digital processing and analog conversion on both the original and out-of-phase signals, then do a difference comparison to reject the noise?


Interesting ideas, but at first thought:

I’d get a much better bang for the buck by using a newer generation FPGA, like I will be in the TSS: lower currents -> less noise. Also, optical isolation might be cheaper/easier than another FPGA. Another stage of re-clocking would be pretty effective too.

In deeper detail, some parts of the FPGA code are parallel between channels and would benefit from splitting to two chips, etc. But most of the code is either identical for both channels (e.g. the input processing, other bookkeeping, …) or serial for the channels (pipelined). That pipelined code doesn’t generate more noise per unit time, so it wouldn’t benefit from two FPGAs.

The weirder thing is that the sigma delta converter is chaotic: any differences in data or timing (no matter how small) can get magnified over time until the two sigma delta modulators are switching essentially independently. Also, metastability is a real issue in FPGAs; the code takes care to deal with it, but it can change the times things get processed (by a full clock cycle for each instance) even if the answer is the same later, e.g. two chips running the same code won’t generate the same noise pattern. Subtracting two almost identical patterns magnifies any differences from the norm.
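A toy demonstration of that “tiny differences get magnified” point: two copies of the same simple 1-bit sigma-delta modulator, fed inputs that differ by a single 24-bit LSB at one sample, end up with output bit patterns that disagree for a large fraction of the time afterwards. The modulator below is an illustrative textbook second-order structure, not the DAC’s actual SDM.

```python
# Two runs of the same second-order 1-bit sigma-delta modulator with a
# one-LSB input difference at a single sample; the bit streams diverge.
import numpy as np

def mod2(x):
    """Second-order 1-bit sigma-delta modulator (simple CIFB structure)."""
    s1 = s2 = 0.0
    out = np.empty(len(x))
    for i, xi in enumerate(x):
        y = 1.0 if s2 >= 0.0 else -1.0   # 1-bit quantizer
        s1 += xi - y                     # first integrator
        s2 += s1 - y                     # second integrator
        out[i] = y
    return out

n = 200_000
t = np.arange(n)
x = 0.4 * np.sin(2 * np.pi * 0.001 * t)   # keep well inside the stable input range

x_perturbed = x.copy()
x_perturbed[100] += 2.0 ** -24            # one 24-bit LSB, at a single sample

a = mod2(x)
b = mod2(x_perturbed)
first = np.argmax(a != b) if np.any(a != b) else None
print("first differing output bit:", first)
print("fraction of bits differing overall:", np.mean(a != b))
```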

Still, I’ll add the germ of the idea to my bag of possible tricks.


Handling the SI (signal integrity) of (2) channels is a lot easier done in one FPGA with increased resources than it would be to separate them onto an FR4 PWB. If these were very high frequencies, there would be no way to separate them. In addition, every FPGA is physically different with routing when instantiating an abstract design (Vivado).

If I take your response literally I don’t understand what you are saying. The fuse maps are identical for a given FPGA part number, identical inputs (and an identical “Starting Placer Cost Table”).

If you are saying that changing anything in the source (e.g. the polarity of any signal or a version number) changes the output drastically, you are correct. If anything in the source is different, the noise isn’t the same…

FWIW, we use twenty different seeds to generate twenty different FPGA placements (which would all give identical audio output bits from identical digital inputs) and listen for the placement that generates the least noise.
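Purely to sketch the shape of that workflow, here is a hedged orchestration loop. Every name in it (run_fpga_build, the builds directory, treating the seeds as literal integers) is a hypothetical stand-in; the real flow uses the Xilinx tool chain and human listening tests, neither of which is shown.

```python
# Build many placements of the same design, then audition them by ear.
from pathlib import Path

SEEDS = range(1, 21)                  # twenty placement seeds / cost tables (assumed)
OUT_DIR = Path("builds")
OUT_DIR.mkdir(exist_ok=True)

def run_fpga_build(seed: int) -> Path:
    # Placeholder: invoke the vendor tool chain with a different starting
    # placement seed and return the resulting bitstream path. The actual
    # command line is omitted because it depends on tool version and project setup.
    return OUT_DIR / f"ds_seed_{seed:02d}.bit"

bitstreams = [run_fpga_build(seed) for seed in SEEDS]

# Every one of these builds produces bit-identical audio data from identical
# digital inputs; the final pick is made by listening for the least noise.
for path in bitstreams:
    print(path)
```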

Ted - correct you are… by going to (2) FPGAs you have exponentially increased the complexity of the design, and for what payback… none, if you can scale up within the family (both resources & I/O)… If you use a floor planner you can get a little more creative with function placement.

Great news. Very excited for the update. Thanks!

Ted, a general question…

If it’s possible/meaningful to differentiate, what was/is your biggest challenge in improving a DAC toward more realistic sound compared to the previous state of the digital art:

  • Tonality
  • Soundstaging/ambiance/transparency
  • Pace/timing
  • Dynamics
  • Detail retrieval
  • Resolution
  • Top/low end extension
  • Airiness
  • etc.?

I certainly don’t expect you’d aim for or be able to choose just one of these aspects in development, but the question is more whether, during your efforts to improve things generally, some of those characteristics improved much more quickly/strongly while others are still on a lower level.

My impression is that some were very good already in quite early digital technology while others improved just recently.


For the difference comparison, I guess I was going for allowing through the signal that is common to both streams; that way the difference in noise between the streams wouldn’t matter. In that case I guess it wouldn’t be necessary to invert the phase.