DAC's computational power

Is it hard to draw an analogy between a DAC’s measured performance and raw computational power, as in… GHz?
Can we make a rough estimate of a top modern DAC’s raw computational performance, or would that take some wonky fuzzy logic?

Could you walk me through the architectures of DACs? I have a hard time “seeing inside them”…
Essentially, what is a DAC, thoroughly?

I know I sound stupid; I know what a DAC does, but I don’t understand the routing and architecture.

There are a couple of threads on here (related to Ted’s DAC) with this sort of info; maybe use Google to find them though, as the local search engine ain’t so hot.
I don’t think a straight comparison between general-purpose CPUs and a DAC is very easy to do. A CPU simulating a DAC might need lots of GHz, but that would be from running the simulation, which is not the same thing (or a good use of silicon).

If you are asking about the DirectStream DAC, then it’s a hard question to answer. The DS uses an FPGA, which can be configured in many ways with multiple clocks, and you can do as much in each clock tick as you want, for example doing 10 giga multiply-accumulates per second with the built-in DSP blocks; even then there are still over 14,000 cells left over for random logic… The not-yet-released TSS DAC will use a slightly larger FPGA that can do about 49 GMACs per second. The DS DAC doesn’t need to do that much each second, but the hardware could.
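As a back-of-envelope sketch of where figures like that come from: peak MAC throughput of an FPGA is roughly the number of DSP blocks times the clock rate. The block counts and clock rates below are illustrative placeholders, not the DS or TSS DAC’s actual configuration.

```python
# Rough peak-throughput estimate for an FPGA's DSP blocks.
# The dsp_blocks and clock_hz values used below are illustrative
# placeholders, not the DirectStream's actual configuration.
def gmacs_per_second(dsp_blocks: int, clock_hz: float) -> float:
    """Peak multiply-accumulates per second, in GMAC/s."""
    return dsp_blocks * clock_hz / 1e9

# e.g. 80 DSP blocks clocked at 125 MHz:
print(gmacs_per_second(80, 125e6))  # -> 10.0
```

Real designs rarely reach this peak, since routing, pipelining, and the actual algorithm all get in the way.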

I’m pretty sure that doesn’t answer your question. I’m not even sure if you are asking about the DS or some simpler DAC chips…


Interesting. Well, I guess my question is a bit broad, but you surely gave some intriguing info.
I certainly am asking about DACs in general, knowing that some architectures aren’t directly comparable in computational power; there are just so many forms of differing compromises…
So as I now understand it, modern PS Audio DACs are processor-based. Very clever; why isn’t it more common? May I ask how jitter is managed with all those circuits and clocks ticking? Also, at what point does a bigger and bigger toroid become superfluous in a DAC? It has to be oversized, right, but by how much?
And WHAT exactly is the benefit of monoblock DACs?
Please elaborate.

Reclocking is done after the FPGA has converted its inputs to single bits at a high rate. That clock also drives the FPGA, so it runs synchronously. The FPGA never looks at the clocks of the incoming data, so those clocks don’t pollute the output with jitter. The DS product page on the main PS Audio site has a deeper-level description and answers your explicit questions except for
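A toy software model of that reclocking idea (purely illustrative; the real thing happens in hardware with buffering, not in Python): samples may arrive with jitter on the source clock, but they are re-emitted on the clean local clock, so output timing depends only on that local clock.

```python
import random

def reclock(samples, clean_period):
    """Re-emit buffered samples at exact multiples of the local clock
    period. The (jittered) arrival times play no role in the output."""
    return [(i * clean_period, s) for i, s in enumerate(samples)]

random.seed(0)
period = 1.0 / 48000  # clean local clock period (illustrative rate)
# Input samples arrive with up to 1 us of timing jitter...
jittered_arrivals = [i * period + random.uniform(-1e-6, 1e-6)
                     for i in range(8)]
# ...but the reclocked output is spaced by exactly one clean period.
out = reclock(list(range(8)), period)
intervals = [t2 - t1 for (t1, _), (t2, _) in zip(out, out[1:])]
```

The point of the sketch is that `reclock` never even looks at `jittered_arrivals`, which mirrors how the FPGA never looks at the incoming data’s clocks.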

Things like sizing transformers, picking certain wiring, how boards are routed, etc. depend on many things in a product. Though many like big toroids, not all do; for example, I prefer split-bobbin power transformers. But that doesn’t really matter: there’s no way to explain in a small post why your question doesn’t quite make sense.

Monoblock DACs? They are a bad idea. Dual mono analog with common digital control/processing makes sense. Keep the two analog output channels as isolated from each other as possible, but keep them time synced in the digital domain to maintain soundstage stability.


Righto. Thanks Ted.

If we call a DAC a box with digital inputs and analogue outputs, then it seems like the computational power you are referring to is really the DSP that these boxes may contain.

In the case of the DirectStream DAC, the DSP is performed by an FPGA.

@tedsmith is the man who can best explain the differences in power between that FPGA and what a PC with both an i9-10900K CPU + NVIDIA 2080 Ti GPU can do.

I use the latter to do DSP on a PC with HQPlayer, upsampling to DSD256 with very intensive modulators, and then feed this to a DSD DAC which operates in NOS mode (converting DSD directly to analogue).
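For intuition about what those modulators do, here is a minimal first-order delta-sigma modulator, a toy version of the PCM-to-1-bit conversion step (real DSD modulators are much higher order, with far more aggressive noise shaping):

```python
def delta_sigma_1bit(samples):
    """First-order delta-sigma: map samples in [-1, 1] to a +/-1 bit
    stream whose local average tracks the input. Toy model only, not
    production DSP."""
    acc, out, bits = 0.0, 0.0, []
    for x in samples:
        acc += x - out                    # integrate error vs. last output bit
        out = 1.0 if acc >= 0 else -1.0   # 1-bit quantizer, fed back
        bits.append(out)
    return bits

bits = delta_sigma_1bit([0.5] * 1000)
print(sum(bits) / len(bits))  # close to the 0.5 input level
```

A constant input of 0.5 produces a bit pattern that is +1 three quarters of the time, so the average of the 1-bit stream recovers the input level; the “intensive” part of real modulators is pushing the quantization noise far above the audio band.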

I’ve also had a DirectStream (and I miss it!) and Rob Watts’ Chord DACs, which use FPGAs to do the heavy DSP lifting.

So can we create a function that translates an FPGA’s computing power into some generalized standard for representing computing power?
To be honest, I don’t exactly know what such a standard would be…

There’s no point to this. Each DAC is a separate balance of resources, compromises, etc. You could use a $1000 graphics card to compute for a DAC, but custom compute logic produced for a mass-market DAC chip is close to free. A general-purpose FPGA with about the power needed for a high-end DAC costs around $20–$50. An expensive FPGA easily outperforms the CPU and graphics card mentioned above in raw compute power, but isn’t well suited to the job without some serious work. For a DAC proper, that is, one making audio, compute power is the enemy: fast implies noisy, both electrically and in fan noise, etc.


Can FPGAs be used in clusters (in DACs)?

Sure, but why?


Well well, this is getting more and more interesting.

The thing that confuses me here is: how can I get my computer to perform a DAC’s processing and deliver it cleanly to an output stage?
Please explain to a novice.
Thank you.

A virtual DAC is an interesting thought, in that you could model different DAC technologies at the flip of a software switch.
I suspect it is not going to be as good in terms of SQ due to extra hardware and noisy digital chips…


Yes, this is where specially optimized isolation comes in. I have a Schiit Eitr USB->Coax converter/isolator that uses isolation transformers to purify the possibly quite unclean USB signal. And it works, obviously.
A virtual DAC processed on a PC… just add isolation transformers, oversized ones, before the output stage?
…and obviously this wouldn’t be a general “PC” but a dedicated unit with optimally selected components. Such units do exist on the market for audio setups, but they’re not doing a DAC’s work.

Okay, I’ll go idealistic. When quantum computing reaches a reasonable level, we’ll have qubits doing a DAC’s workload in a jiffy, right?

I certainly don’t know why; maybe I will some day, and FPGA clusters will already exist in DACs by then. I understand somewhat what they do on a general level. Why not have more?
I guess the modern ones available aren’t close to being saturated by a high-end DSD hypersampling DAC’s computational demands.
…Could you give an estimate of how little of your FPGA is in use with full conversion at play? Could it render the Mandelbrot set at a reasonable rate while doing the DSD, if told to?
Sorry. I’m an inquisitor of stupid questions; the answers just often happen to be good.

FPGAs aren’t general-purpose processors; you program them to do a specific task (which of course could be to render a Mandelbrot set and be a DAC). As with many real-world problems, there are multiple resources, each of which has its own constraints, any one of which could be the limit. Right now ROM is the limit in the current DS code. In the past it’s been things like compute power, clocking resources, density of routing resources, … I find ways of rebalancing resources to allow new things I want to do. I could always put a $70,000 FPGA in a DAC, but clearly that’s not needed; I chose an FPGA for the DS with about twice the power of the one I used in my prototype. I knew that would give me room for growth without adding too much to the price of the final DAC.
If you want to learn more about FPGAs you’ll find hundreds of megabytes (probably gigabytes) of documents here:
FPGAs & 3D ICs (xilinx.com)
Obviously there are many other makers of FPGAs out there with slightly different balances of resources.