How is Ted Coding the FPGA?


Is it optimization of the FPGA compiler settings that narrows it down?
Does simulated annealing come into play?

[Frode: Great thread. I edited the title from Cont. so that others will know what it is about. Elk]


Nope, in general I’m lessening the creation of noise and jitter in the FPGA by learning a few things: how to cause less noise in the first place and how to keep that noise away from what matters the most. By doing this the specific layout becomes less critical. Ever since the first releases, where fixing a bug lost us some of what the first users liked a lot about the DS, I’ve been looking for ways of consistently lessening the variability of the builds. For a few releases I just locked down the positions of some of the most critical items, but after a while this kept us from reaching the best sound rather than helping us. Since then I’ve been trying to find new coding techniques that generate fewer problems for us in the first place.





Maybe a wacky idea, but Ted, what do you think about an approach of making the FPGA “work hard” at times when it actually has less to do, so that it’s constantly loaded? It may sound wacky, but given that the worst noise is noise correlated with the signal, wouldn’t it be better to have constant noise all the time?

I don’t know much about FPGA coding (I have only a very high-level overview of it), so maybe it’s even complete nonsense; it’s just an idea that has been bugging me for some time already :)


You are right… FPGA programming is just about as you describe by default - everything is always running all of the time, but you use selectors or multiplexors to choose which of several things you want as your output, or whether to ignore the inputs entirely and keep the last output.

It’s sort of like doing both the “then” part of an if and the “else” part, and then using the predicate of the if to pick which output you really pay attention to. Similarly, state machines do all of the work of all states in parallel but only pay attention to the answers that come from the current state.
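That “do both branches and pick one” behavior can be sketched in ordinary software terms. This is only a Python analogy of the hardware description above (real FPGA code would be an HDL, and the branch logic here is made up for illustration):

```python
def mux(sel, a, b):
    # A 2:1 multiplexor: both inputs a and b already exist
    # ("were computed"); sel only chooses which one to pass on.
    return a if sel else b

def fpga_style_if(x):
    # A software "if" evaluates only one branch; hardware computes
    # both every clock cycle and a mux selects the result.
    then_result = x + 1      # "then" branch, always computed
    else_result = x - 1      # "else" branch, always computed
    predicate = x > 0
    return mux(predicate, then_result, else_result)

print(fpga_style_if(5))   # 6: predicate true, "then" output selected
print(fpga_style_if(-5))  # -6: predicate false, "else" output selected
```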

Newer FPGAs have special features like not clocking a region if none of its inputs are changing, to save power (in CMOS you only use power when a signal changes value), but typically I don’t use those features for exactly the reason you mention. However there’s a balance: if you can keep the power from sagging or the ground from bouncing you generate less noise, so you want to do enough to get the job done, but not too much, in every clock time.
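The “keep the last output” and clock-enable ideas from the two posts above can be sketched in the same Python-analogy style (again just an illustration, not HDL):

```python
class Register:
    """Analogy for a clocked flip-flop with a clock enable."""

    def __init__(self, initial=0):
        self.q = initial  # the stored ("last") output

    def tick(self, d, enable):
        # On each clock tick: if enabled, capture the new input;
        # otherwise ignore the input and keep the last output.
        # In hardware, suppressing the clock when inputs aren't
        # changing avoids the switching activity that burns power
        # in CMOS.
        if enable:
            self.q = d
        return self.q

r = Register()
r.tick(7, enable=True)    # capture 7
r.tick(99, enable=False)  # input ignored, still holds 7
```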


Will we see a more powerful FPGA in a future DS?

I know that this will have a high impact on the price, though.


It’s too soon to think too seriously about the DS Mk II.

There’s an FPGA with the exact same footprint and a little over 1.5 times the resources at a commensurate price. If we released a slightly upgraded DS that would be a logical choice. All other things being equal we’d need a price that’s about $100 higher just for that.

But probably a better choice is a new FPGA family, the Artix-7. At about that same higher price we’d get about 2 to 2.5 times the resources (and higher speed capability). It has a different footprint so there’d be more work, but if the hardware changed much it would be the way to go.

But as I said it’s way too early to be working on this. We are still quite a long ways from the capacity of the current hardware. I have plenty of work still left to do in software.


I vote for the Artix-7 FPGA and Jensen transformers in the output stage…


Ted, my own feeling is that if we’re talking $100-300 (let’s say) across the range of respin options, for the DS priced the way it is (let’s say not the cheapest thing on the block), then it’s “go for the best,” cost be d@mned (not saying that would be the most expensive). That would include the output stage.


No, no, no. I shouldn’t have said diddly squat about pricing. I took the question literally and answered it literally. We are far from needing more FPGA power at the moment, in fact all evidence points to using a bigger FPGA causing more sound quality issues.

When we get there we’ll have more experience with sound quality issues and will make the best choices we can at the time. Who knows, maybe a memristor would be the best output device :) (not likely.)


or tubes in the output stage… :)

Ted Smith said: “We are still quite a long ways from the capacity of the current hardware. I have plenty of work still left to do in software.”
Neat. Very exciting!


And good to hear!



Were all of the changes in the “operating” code, or were there tweaks to the conversion / filtering schemes too?


Greg in Mississippi


I’m not sure if the question is about PP vs Yale or Yale Beta vs Yale Final.

Yale evolved into a purposeful attempt at getting back what some of us thought was lost when going from 1.2.1 to PP. At first I was just trying out some new ideas on lowering internal noise and jitter in the FPGA. But a little into that work I realized that it enabled me to combine the best of the filtering of 1.2.1 and PP: I suspected that that would get back some of the involvement of 1.2.1 and its deeper soundstage, but also that it would keep the detail and wider soundstage of PP. I knew that the jitter/noise work would help with a blacker background, so I expected that to have a synergistic effect with the filtering work. A blacker background is always good since it helps with detail (if it’s there in the source) without being fatiguing.

From Yale Beta to Yale Final the actual changes were much smaller, but by moving the polarity inversion to earlier in the processing I got a bigger than expected bonus in lower noise and jitter. I knew something was better for my listening preferences, but I didn’t know how much better it might be for others.