PS Audio Music Server In The Pipeline?

Thank you for taking the time to respond, Ted.

I think the theoretical posit I was making was essentially:

Since with the DSDAC the source clocks aren’t recovered, used, or otherwise referenced, if one could eliminate any sort of interference from upstream components (noise on the lines and/or ground loops created in the system, etc.) and yet still have those components deliver a bit-perfect stream to the DSDAC, then the DSDAC would perform optimally, and each bit-perfect source would sound the same.

If this is in essence correct, I think in practical terms it means that to improve the sound of one’s system (if it contains a DSDAC), the priority should be to eliminate noise on the lines and/or ground loops created in the system, etc. Making this the priority may fundamentally change how one structures the system that precedes the DAC.

Nice! Amazing tubes. I would love to install them in my mono blocks, but they take 16 in total!

16…ouch!

Luckily just the pair here, and boy do they sound good.

And pushing the envelope with BubbleUPnP, I’m currently listening to a playlist combining local library, Tidal, and Qobuz: 8840 tracks or 650 hours, at the moment.
It’s a really nice “radio station” of artists and genres that I like, and it’s a long time between repeats.

Can someone point me to a post about what exactly Air Gap is? I’d love to know more.

Very little has been shared so far except that it is supposed to be the cat’s meow for isolation.

Don’t think more has been shared than what Paul has posted here. New Tech, bruh! : )

I doubt it’s been described much, so I’ll try my best. The Air Gap Audio Interface is something we invented, so you won’t find much about it online. It will first show its face in the upcoming server.

The AGAI is a digital audio interface that connects two subsystems through the air. It’s one step removed from a fiber optic connection, where the audio data is transmitted as light through a fiber cable. The idea is isolation.

As I have mentioned before, the problem with digital audio systems is contamination. You have a noisy computer inside a server, and as it chugs away at its tasks it jitters and pollutes the output signal feeding the DAC. This is why, we believe, FLAC sounds different from WAV even though the bits are identical. A FLAC file requires far more bit crunching to extract the bits than a WAV file does. Those crunched bits contaminate the final output signal.
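To put a rough number on the extra decode work (not the electrical side effects, which is the part that matters here), a minimal Python sketch along these lines can compare the CPU time needed to pull the same PCM samples out of a WAV file versus a FLAC file. It assumes the third-party soundfile library and two hypothetical files, test.wav and test.flac, holding identical audio:

```python
# Minimal sketch: compare how long it takes to extract PCM samples from a WAV
# file versus a FLAC file containing the same audio. Assumes the third-party
# "soundfile" package (pip install soundfile) and two hypothetical files,
# test.wav and test.flac, with identical content.
import time
import soundfile as sf

def time_decode(path, runs=10):
    """Average wall-clock time to read the whole file into memory."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        data, samplerate = sf.read(path)  # WAV is a near-direct copy; FLAC must be decompressed
        total += time.perf_counter() - start
    return total / runs

wav_t = time_decode("test.wav")
flac_t = time_decode("test.flac")
print(f"WAV:  {wav_t * 1000:.1f} ms per decode")
print(f"FLAC: {flac_t * 1000:.1f} ms per decode")
print(f"FLAC takes roughly {flac_t / wav_t:.1f}x the CPU work here")
```

The bits that come out are identical either way; the question in this thread is whether the extra CPU activity during FLAC decompression couples into the output stage as noise and jitter.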

Imagine now that the noisy computer inside the server were not in the server but a mile away. Its noise would not be a problem as long as we took its output and regenerated it in a Digital Lens. Since a mile-long chassis might not sell too well, the next best thing is to physically isolate the two systems within a single chassis. To do that we need separate power supplies, separate physical boards, and, at the end of the proverbial day, a completely isolated connection between the computer and the output Digital Lens. That’s where the AGAI comes into play. By bridging the physical gap between the noisy internal computer and the rest of the server with light traveling through air, we get excellent isolation.
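Paul doesn’t describe the Digital Lens internals here, but as used in this thread the idea is to buffer the incoming, possibly jittery data and re-emit it on a clean local clock. A toy Python sketch of that buffer-and-reclock notion, with invented names and timings and no claim about the actual hardware:

```python
# Toy sketch of the "buffer and re-clock" idea behind a Digital Lens style
# regenerator: samples arrive with irregular timing, go into a FIFO, and are
# read back out on a steady local clock. All names and timings are illustrative.
import random
import collections

INCOMING_JITTER_S = 200e-9    # pretend arrival-time jitter of the noisy source
OUTPUT_PERIOD_S = 1 / 44_100  # steady local output clock (44.1 kHz frames)

fifo = collections.deque()

def receive(frame, nominal_time):
    """Frames arrive a little early or late; only their order and values matter."""
    arrival = nominal_time + random.uniform(-INCOMING_JITTER_S, INCOMING_JITTER_S)
    fifo.append((arrival, frame))

def output_clock_tick(tick_index):
    """The output side ignores arrival times and uses its own clean timebase."""
    if fifo:
        _, frame = fifo.popleft()
        emit_time = tick_index * OUTPUT_PERIOD_S  # jitter-free schedule
        return emit_time, frame
    return None  # buffer underrun: upstream fell behind

# Fill the buffer, then drain it on the local clock.
for i in range(16):
    receive(frame=i, nominal_time=i * OUTPUT_PERIOD_S)
for tick in range(16):
    print(output_clock_tick(tick))
```

The output timing depends only on the local clock, so whatever timing irregularities the noisy source had are discarded, provided the buffer never runs dry.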

In the upcoming Ted Smith Signature DAC he’s taken the problem of isolation one step further than we have in the server. In the TSS the noisy digital processing happens inside its own chassis and the quiet, clean analog happens in its own chassis. The two are again connected through light, but because the physical distance between the chassis is measured in inches rather than the fractions of an inch the air gap requires, we use a fiber optic cable between the two.

Of course, one of the perceived limitations of fiber optics is bandwidth: getting high-speed data through TOSLINK doesn’t work, but that isn’t a limitation of light or fiber optics, just of TOSLINK. Some of the highest-speed data in the world travels on beams of light.
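For a sense of scale, here is a back-of-the-envelope Python calculation of the raw S/PDIF payload rate (two 32-bit subframes per stereo sample frame, so 64 bits per sample period) at a few PCM rates. The numbers are tiny compared with what ordinary data-grade fiber carries, which is the point: the ceiling is the TOSLINK transceivers and protocol, not the light.

```python
# Back-of-the-envelope S/PDIF payload rates: each stereo sample frame is two
# 32-bit subframes, so the raw bit rate is 64 * sample_rate (before the
# biphase-mark line coding, which doubles the transition rate on the wire).
SUBFRAME_BITS = 32
CHANNELS = 2

def spdif_bitrate(sample_rate_hz):
    return SUBFRAME_BITS * CHANNELS * sample_rate_hz  # bits per second

for rate in (44_100, 96_000, 192_000, 352_800):
    print(f"{rate / 1000:>6.1f} kHz PCM -> {spdif_bitrate(rate) / 1e6:6.2f} Mbit/s")

# Typical consumer TOSLINK gear is only rated for some of these rates, while
# general-purpose fiber links run into the gigabits per second.
```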

Whichever method is used, AGAI or high-speed fiber, light transmission of digital data offers the possibility of getting noise and jitter out of the signal and gets us that much closer to perfection.


Hey Paul,

For those who have the DSD DAC with Bridge, how would that interface with the new AGAI? Would we be able to take the Bridge out of the loop, or what do you anticipate will work for all the DSD units out there?

Many thanks,
Jason


Paul, 3 questions concerning this:

Assumptions…
Setup 1:
Assume a normal Bridge installation consists of:

  1. a PC (to control the server and library, which are both on the PC or even directly on the NAS)
  2. a NAS (which holds the music) attached to the PC over a network connection
  3. an interface (in this case the Bridge card) that takes the LAN signal from the PC and renders the files to the DAC
  4. Assume in this setup that 1) and 2) are isolated from the audio power circuit by a separate power circuit, and that the LAN signal is galvanically isolated from the Bridge card.

Setup 2:
Given that the Octave server combines 1) and 3), and maybe even 2) (in the case of an internal SSD drive), in one chassis and therefore on a common power circuit, the measures described under 4) would have to be implemented in a more sophisticated way here, since the same isolation has to be achieved within a single chassis that has one common power feed for the network/PC and the audio components. Separate, isolated power supplies inside the unit are necessary, as well as some kind of galvanic isolation of the LAN, as under Setup 1.

Question 1: is it correct that you call this separation/isolation inside Octave the “AGAI” and that you call the interface between the renderer (e.g. Bridge or Octave) and the DAC “digital lens”?

Question 2: is it correct that then “AGAI” replicates the measures taken in “Setup 1”, to accomplish something similar within one chassis (whatever is assumed to be better)?

Question 3: is it correct that then the pure “digital lens” interface of Bridge III and Octave will be quite the same?

I hope this is understandable…

I think there is some confusion over the terminology. From what Paul has said in the past, Octave will be the server software. It will run on what we’ve referred to as Bridge III (which would replace Bridge 1 or 2 inside the DS or DS Jr) and on standalone servers that are more capable than Bridge III. I don’t think the external server has been named yet, and Paul has said there will likely be different versions over time (e.g., a Stellar version eventually). The external units should have the usual complement of outputs to accommodate PSA and other DACs. I would expect owners of PSA DACs to connect to the external servers using I2S. At least that is my understanding. Paul’s description of the AGAI interface would seem to relate to the external box. I doubt it would fit onto the Bridge III card (there is only so much hardware that can be squeezed onto something that small), but only PSA could answer that.

Sure, AGAI won’t make it into Bridge III (though probably the new Digital Lens will, if my theory above is valid), and given the measures mentioned it is possibly not even necessary there…but I understood the server hardware is currently also named Octave.

My understanding is that Octave is the whole enchilada: both the hardware (server in a box) and software (GUI).

Apologies for my incessant beating of the same drum in this forum, but for an expenditure of about $100 it is possible to assess the gains from ‘light’ isolation (aka a fibre-optic link) by inserting a couple of TP-Link wire-to-fibre ethernet converters, connected back to back with a few feet of fibre optic cable, in the last wired ethernet leg to the renderer/DAC. (Preferably with a linear power supply for the ethernet converter closest to the renderer/DAC.)
I’ve been benefitting from the associated improvement in SQ since 2015.


Thank you Paul for the explanation of AGAI. I’m looking forward to hearing it in my own system.

Regarding isolation from the computer/router, just yesterday I put a Baaske MI 1005 5 kV passive medical Ethernet isolator just before the BII. I am shocked at how relaxed and natural the system now sounds. A true revelation for my beryllium drivers, which were obviously picking up the hash from the Mac Pro 2013 and router in another room, on its own Quartet. I can only imagine how much the server will contribute to a highly resolving system. I’m hoping it surprises everyone in the same way the BHK 300’s balanced topology proved to surpass the 250 in more ways than expected.

Your explanation of why WAV or AIFF sounds better than FLAC is the best answer I’ve read. I’ve always wondered exactly why that is, even on 6 cores and 64 GB of RAM. Are these subtleties of isolation and decoding even measurable?

One aspect it would be great if you could address as you finalize the designs is the other end of the product’s life cycle, say 8 or 10 years from now, when the hard drive interfaces that exist today will be considered outdated. Is there any way to design a modular input board so that, as the dominant drive interface evolves, PS could just offer a new input module?

If this is possible, I think it would go a long way towards future proofing the server and giving people confidence that the product will be worth the investment for the long haul.

Also, have you thought about experimenting with a battery power supply that bypasses the AC for critical listening? Luckily the server shouldn’t draw much power, and with the massive isolation you’ve designed, the last vestige of hash could be what leaks from the power supply back into the power conditioner.

A separate battery supply could be sold to power an external drive, automatically connecting to AC when needed to recharge.

Also, if WiFi is included to make it easy to add new music to the server (or stream music to it?), it would be great to be able to power that entire circuit down, so you have the convenience when desired with no sonic penalty otherwise. It might be advantageous to have it as a module itself, which could eventually be replaced as a new wireless standard emerges, giving the unit several more years of life in a modern home network.

I imagine there will be no way to pull music from the server for non-critical listening setups or to a Sprout with an adaptor. I believe it was stated that UPnP or DLNA is not included in the server, but I’m wondering if it could be implemented later in software or by some other method.

AGAI is built into the server, not external. The server will connect with your DS through all the standard means: I2S, XLR, RCA, USB.

Let me see if I can help. In the first situation you have successfully separated the server from the player and the storage. You have also eliminated any jitter from the server because of the Ethernet connection, but you haven’t solved the problem.

Let me explain. In the Bridge scenario the four components are the server (the PC), the storage (which could be on the PC or a NAS), the renderer or player (the Bridge), and the controller (the iPad or whatever).

If you send a command on the iPad to play something, the server responds by connecting the Bridge with the memory location of the track, and it begins streaming to the Bridge. The Bridge is now the computer I was referring to, and it is chugging away decoding FLAC to WAV, and you get all the noise and hash I mentioned. We use a Digital Lens on the Bridge to lower jitter, but we cannot do anything about the power supply and ground plane noise the Bridge is polluting the DAC with.

So yes, the LAN offers some isolation but not where it’s actually needed.
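To make the division of labor concrete, here is a schematic Python sketch of the four roles Paul lists (controller, server, storage, renderer) in a UPnP-style flow. All class and method names are invented for illustration; this is not PS Audio’s software:

```python
# Schematic sketch of the four roles in the Bridge scenario. All names are
# invented for illustration; the point is only who does what work.
class Storage:                      # NAS or local disk: just holds files
    def __init__(self, files):
        self.files = files
    def read(self, uri):
        return self.files[uri]      # returns compressed FLAC bytes

class Server:                       # the PC: knows the library, hands out locations
    def __init__(self, storage):
        self.storage = storage
    def locate(self, track):
        return f"flac://{track}"    # tells the renderer where the track lives

class Renderer:                     # the Bridge: fetches, decodes, feeds the DAC
    def __init__(self, storage):
        self.storage = storage
    def play(self, uri):
        flac_bytes = self.storage.read(uri)
        pcm = self.decode_flac(flac_bytes)  # the CPU-heavy step Paul describes
        self.send_to_dac(pcm)
    def decode_flac(self, data):
        return data[::-1]           # stand-in for real FLAC decompression
    def send_to_dac(self, pcm):
        print(f"streaming {len(pcm)} decoded bytes to the DAC")

class Controller:                   # the iPad: only issues commands
    def __init__(self, server, renderer):
        self.server, self.renderer = server, renderer
    def tap_play(self, track):
        self.renderer.play(self.server.locate(track))

storage = Storage({"flac://song": b"compressed-audio-bytes"})
Controller(Server(storage), Renderer(storage)).tap_play("song")
```

The decode step, and whatever electrical hash it generates, happens in the renderer sitting right next to the DAC, which is why Ethernet isolation further upstream doesn’t address the problem.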

That said:

Question 1: is it correct that you call this separation/isolation inside Octave the “AGAI” and that you call the interface between the renderer (e.g. Bridge or Octave) and the DAC “digital lens”?

I am not sure what this all means. If you’re using an Octave server you won’t need/use the Bridge. You will plug the Octave server’s output into the DAC’s input as if it were a transport. The Digital Lens inside the Octave server is what cleans up and dejitters the final output signal to the DAC.

Question 2: is it correct that then “AGAI” replicates the measures taken in “Setup 1”, to accomplish something similar within one chassis (whatever is assumed to be better)?

No, or sort of. Reread the detailed list I first posted.

Question 3: is it correct that then the pure “digital lens” interface of Bridge III and Octave will be quite the same?

The same in principle, not in actual operation.

Thanks much Paul!

You misunderstood me on question 1, but that’s not so important now, as I understood that with “PC” you were not (only) referring to the PC holding the library software in a Bridge scenario, but also to the Bridge itself rendering the track from memory. Although in a Bridge scenario one can put the desktop PC and NAS on a separate power circuit, the power supply of the Bridge itself currently can’t be isolated from the DAC’s power supply (you could only do so in a new DAC design).

In the Octave server I understood you also isolate the audio circuits from the complete rendering circuit. I agree that this must be an additional main benefit. If I’m correct, this is also the main technical and price difference between Bridge III and the Octave server, as all other functions are quite comparable, I assume.

Given this background, I just wonder why the Bridge performs so well compared to an external third-party streamer scenario, as the latter could, in terms of power supply, possibly be better isolated from the DAC than the Bridge.

Lots of great ideas and thoughts. Thanks. Battery power is always a double-edged sword on any number of levels, but still, great ideas.

Likely because they don’t use the Digital Lens technology at all or in the same way we do. It’s been a constant key to our success.