Will there be an update to Windom or is the DS to be orphaned?
Ted ain’t done yet. DSD and DSJ will continue to get updates as long as Ted can. He’s been a busy dude, so I don’t want to give an ETA for when the next update could be.
Encouraging I guess.
Further to the original question: being a Linux and otherwise free-software user and supporter, is there a scenario in which PSA/TS releases the code and documents the hardware in such a way as to allow the community to make their own tweaks and modifications (invalidating any warranty, of course)?
Given that we as DSD owners purchased an FPGA-based DAC because of its upgradeability, contrasted against the fact that PS Audio has effectively one man with limited resources (time) assigned to the task, this seems like a reasonable concept.
I'd love to hear the PSA/TS thoughts/philosophy on this.
Ted will certainly have more to say, but this idea did come up a number of years ago. I brought it up to the president because I knew he would have an opinion. He mentioned the warranty side of things and said there’s no way we could offer a warranty and allow folks to adjust how the unit fundamentally works. He treated it like a HW change, and considered it voiding the warranty.
It’s not likely: the code is mine and contains trade secrets. I’ve always been open with PS Audio with everything I’m doing and guided them through the code. They have the rights to the code if I crump or for any reason can’t implement what they need. I’ve also talked freely about it with others. Still I’m not inclined to deal with the time (and potential hassle) of managing an open source effort. That could change, but probably not in the near future.
Also, from a practical point of view, I’m not sure that there are many audiophiles who have a digital signal processing background that write Verilog and have experience with real time programming. They aren’t closely related fields.
That would be a pretty small group; I'm imagining the Venn diagram now.
Open source projects need a wide group of devs to succeed, else they wither. Too many users and not enough devs can be a bad combination: OpenSSL suffered from this fate until some big users pledged cash recently.
Do you have any idea when the next mountain top will debut?
No. I’ve been working on quad-rate I2S input for a while now; the extra sampling speed required causes growth in the area consumed by that circuitry. That’s a pretty normal problem. I keep beating my head against the wall till I see the light.
I’d like to have something sooner than later, but there are a lot of currently very busy people who would need to be involved.
Well, I don’t understand much of that, but thanks. I won’t hold my breath during the wait.
I’ve been trying to envision what it means to “program” an FPGA. The best I can come up with is imagining a breadboard with many chips already mounted and some wires that go between various areas of the breadboard. The FPGA is then that breadboard with “sub-chips” (gates, multipliers, ands, ors, registers, etc.).
Before FPGA loading, none of the pins of any sub-chip are connected to any of the wires. Then during the FPGA chip-loading process, gates in the chip essentially “solder” specific sub-chips’ pins to specific wires, thus creating the desired circuit(s). In the process, presumably some of the “sub-chips” would be used and some not, depending on how many sub-chips the specification required vs. the number of “sub-chips” available. Different program versions presumably would not use the same pathways through the FPGA because the specs were different.
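On real parts, the “soldering” in that picture is a configuration memory bit driving a pass switch or multiplexer select. A toy Python sketch (assuming no particular real FPGA architecture; the `route` function and its arguments are invented for illustration) of configuration values choosing which wire feeds each block input:

```python
# Toy model of FPGA routing: each block input is fed by a multiplexer
# whose select value comes from configuration memory. Loading the
# bitstream just writes these select values; no physical soldering.

def route(config, wires):
    """config: for each block input, the index of the routing wire that
    the configuration memory connects it to.
    wires: the current logic values on the routing wires."""
    return [wires[wire_index] for wire_index in config]

wires = [0, 1, 1, 0]          # values on four routing wires
config = [2, 0, 3]            # three block inputs tap wires 2, 0, 3
print(route(config, wires))   # [1, 0, 0]
```

A different “program version” is then just a different set of select values, which is why two builds of the same design can use entirely different pathways through the chip.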
To create the chip-loading values, the FPGA programming tools would know the chip’s architecture and, starting at a (semi?)random part of the FPGA, start creating a circuit (according to the specs) by randomly gating pins to wires, and following the wires to the next part of the board, until either 1) the circuit was complete or 2) there were no wires available to connect, or no chips of the appropriate kind in that area of the board (a boundary was reached). If 1), the circuit would be considered complete and the “compilation” would stop. If 2), the compilation process would have to back up some number of steps and then either complete or back up farther until the circuit could be completed.

Assuming the above is correct, this would seem to explain why you give 20 different compilations to PS Audio for listening tests: in reality, you’re giving them 20 different ways to configure the FPGA “sub-chips” according to the specification. Some board layouts would presumably “sound better” because the layout was cleaner (less noise from crosstalk between the wires).
Is the above a reasonable way to think of FPGA programming? If so, I understand why you say FPGA programming is essentially a mechanism for an electrical engineer specializing in signal processing to realize a 1-bit DAC circuit rather than traditional programming in the sense of a stored program executing some series of commands sequentially. If the above is not correct, where did I go wrong?
If you bang your head against the wall until you see lights, it will take us longer to get new stuff outa you. Please cut it out.
Your model isn’t too far off. There are multiple levels of abstraction available to the programmer, from a “gate level” (flip-flops and blocks that can map any four (five, six… depending on the FPGA in question) boolean inputs to a 1-bit output), to little macros (e.g. a 4 × 8 memory or a wide shift register…), to tool-generated specializations of bigger blocks (e.g. a complex multiply, FFT, IIR or FIR filter…), to LabVIEW, Matlab, etc. models, to C (C++?), to implementing a general purpose processor and running, say, Linux on it, and adding lots of custom operation accelerators…
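The gate-level block described above (mapping any four boolean inputs to a 1-bit output) is commonly called a lookup table, or LUT. A minimal Python sketch (not tied to any real FPGA toolchain; `make_lut4` is an invented name) showing that such a block is nothing more than a 16-entry truth table, and that “programming” it means choosing the table:

```python
# A 4-input LUT is a 16-entry truth table: the four input bits form an
# index, and the stored bit at that index is the output. Configuring
# the LUT means writing the 16 table bits.

def make_lut4(truth_table):
    """Return a function computing the 4-input boolean function described
    by truth_table (a 16-bit integer; bit i is the output for input
    pattern i)."""
    assert 0 <= truth_table < 2 ** 16
    def lut(a, b, c, d):
        index = (d << 3) | (c << 2) | (b << 1) | a
        return (truth_table >> index) & 1
    return lut

# Example: configure the LUT to compute (a AND b) XOR (c OR d) by
# enumerating all 16 input patterns and recording the desired output.
table = 0
for i in range(16):
    a, b, c, d = i & 1, (i >> 1) & 1, (i >> 2) & 1, (i >> 3) & 1
    table |= ((a & b) ^ (c | d)) << i

f = make_lut4(table)
print(f(1, 1, 0, 0))  # (1 AND 1) XOR (0 OR 0) = 1
```

The 5- and 6-input variants work the same way, just with 32- or 64-entry tables.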
The mapping of higher-level (non-dataflow) languages is non-obvious without some experience.
Anyway, the higher-level options are only supported with newer FPGAs. I’ve always liked assembly language anyway, so Verilog is great for me. Though I also really like modern C++, it’s the wrong level of abstraction for FPGAs for me.
The tools for the FPGA that the DS uses rely on simulated annealing to place and route the design. I believe they also use rip-up and reroute, which is more structured than backtracking (rip-up can undo any previous route to free up some area/resource, whereas literal backtracking returns to a previous state by undoing work in a stack-like manner).
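For intuition on the simulated annealing mentioned above, here is a toy Python sketch (nothing like a real FPGA placer, which works on 2-D grids with much richer cost functions; `wirelength` and `anneal` are invented names). It places cells on a 1-D row of slots so that connected cells end up close together, accepting cost-worsening swaps with a probability that shrinks as the “temperature” cools:

```python
import math
import random

def wirelength(placement, nets):
    """Cost = sum over two-pin nets of the distance between the cells."""
    return sum(abs(placement[a] - placement[b]) for a, b in nets)

def anneal(num_cells, nets, steps=20000, t0=10.0, cooling=0.9995, seed=1):
    rng = random.Random(seed)
    slots = list(range(num_cells))
    rng.shuffle(slots)
    placement = {cell: slot for cell, slot in enumerate(slots)}
    cost = wirelength(placement, nets)
    t = t0
    for _ in range(steps):
        a, b = rng.sample(range(num_cells), 2)
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = wirelength(placement, nets)
        delta = new_cost - cost
        # Accept improvements always; accept regressions with probability
        # exp(-delta / t), so early (hot) passes can escape local minima
        # and late (cold) passes only refine.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cost = new_cost
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo
        t *= cooling
    return placement, cost

# A chain of 8 cells, each connected to the next; the optimum is any
# in-order (or reversed) placement, with total wirelength 7.
nets = [(i, i + 1) for i in range(7)]
placement, cost = anneal(8, nets)
```

Because each run (or each random seed) can land in a different low-cost layout of equal logical function, this is one way to picture how several equally “correct” compilations of the same source can differ physically.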