Atomic clocks address a problem that’s irrelevant to audio: long-term clock accuracy. It doesn’t really matter for music listening whether the clock is accurate over a span of years. What matters is how much the clock jumps around over a second or so. It’s that short-term jitter that frequency-modulates the audio.
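A quick numerical sketch of that effect. Sampling a tone at jittered instants while believing the instants were ideal turns the timing error into a signal error, scaled by the signal’s slew rate. The numbers here (1 kHz tone, 48 kHz rate, 1 ns RMS jitter) are purely illustrative:

```python
import numpy as np

fs = 48_000          # nominal sample rate (Hz)
f0 = 1_000           # test tone (Hz)
n = np.arange(4096)

# Ideal sample instants vs. instants with 1 ns RMS random jitter
# (1 ns is a hypothetical figure, just for illustration).
rng = np.random.default_rng(0)
t_ideal = n / fs
t_jitter = t_ideal + rng.normal(0, 1e-9, n.size)

# The converter *believes* it sampled at t_ideal, so the timing error
# shows up as an error in the captured waveform:
err = np.sin(2 * np.pi * f0 * t_jitter) - np.sin(2 * np.pi * f0 * t_ideal)

# Worst-case error scales with the tone's slew rate: about 2*pi*f0*jitter
print(np.max(np.abs(err)))
```

Because the error scales with 2πf0, the same clock jitter does more damage to high-frequency content than to low.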
Atomic clocks actually aren’t designed for low jitter (or at least that’s not their primary design goal). In fact they don’t use the atomic reference at all in the short term: they have a free-running oscillator and steer it based on periodic reads of the atomic reference. It’s the quality of that free-running oscillator that matters for audio, and whether varying its rate adds phase noise that will affect the audio… If an atomic clock sounds better in a system at all, it’s because it’s better built overall than the clock it replaced, not because it’s better for audio.
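Here’s a toy model of that disciplining scheme: a free-running clock with a small rate offset, steered once per “second” toward a reference. The gain and the 2 ppm offset are illustrative, not from any real product. The long-term error shrinks, but every correction is a small rate change, i.e. phase modulation of the short-term output:

```python
# Toy disciplined oscillator: free-running clock steered toward a
# reference once per interval. All numbers are illustrative.
true_rate = 1.0            # reference advances 1.0 unit per interval
osc_rate = 1.0 + 2e-6      # free-running clock runs 2 ppm fast
gain = 0.1                 # steering loop gain

osc_phase = ref_phase = 0.0
prev_err = 0.0
for second in range(200):
    osc_phase += osc_rate
    ref_phase += true_rate
    err = osc_phase - ref_phase
    # Steer the rate by the frequency error seen over the last interval.
    # Each correction is a small rate change -- i.e. phase modulation of
    # the short-term output. That is the jitter cost of discipline.
    osc_rate -= gain * (err - prev_err)
    prev_err = err

print(f"rate offset: {osc_rate - true_rate:+.1e}, phase error: {err:+.1e}")
```

The rate offset converges toward zero (long-term accuracy), but only by repeatedly nudging the clock, which is exactly the short-term disturbance audio cares about.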
The ability to change the frequency of a clock to match an external source is directly at odds with low phase noise. That’s the primary reason PLLs have a bad reputation in audio: the very act of controlling a clock’s frequency adds jitter to it.
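That trade-off is easy to show with a first-order phase-tracking loop (a sketch, not any specific PLL design): the tighter the loop tracks a jittery reference, the more of the reference’s noise it copies onto its own output. The noise level and gains are illustrative:

```python
import numpy as np

# Trade-off sketch: a loop that steers a clock's phase toward a noisy
# reference transfers some of that noise onto the clock. This is a
# first-order phase loop with illustrative numbers, not a real PLL.
rng = np.random.default_rng(2)

def track(gain, steps=20_000):
    ref_noise = rng.normal(0, 1.0, steps)   # jittery reference edges
    phase = 0.0
    out = np.empty(steps)
    for k in range(steps):
        # Phase detector sees (noisy reference - our phase);
        # nudge our phase by gain * error each cycle.
        phase += gain * (ref_noise[k] - phase)
        out[k] = phase
    return out.std()

print(track(0.01), track(0.5))
```

A low gain barely tracks the reference but stays quiet; a high gain locks hard and inherits most of the reference jitter. There is no gain setting that gives both tight tracking and low phase noise.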
Conversely, running a clock over any distance adds jitter: passing through impedance discontinuities (e.g. cables and their connectors), being subject to ground loops and other interference, going through conversions to optical and back, and so on. There’s no method of distributing a clock that doesn’t add phase noise in the frequencies that matter to audio.
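For roughly independent noise sources, those contributions combine in quadrature, so every added stage pushes the total up. The per-stage figures below are hypothetical, just to show the arithmetic:

```python
import math

# Independent jitter contributions combine roughly in quadrature.
# Hypothetical RMS figures (ps) for cable reflections, an optical
# conversion, and the receiver's clock recovery:
sources_ps = [3.0, 5.0, 8.0]
total = math.sqrt(sum(j ** 2 for j in sources_ps))
print(round(total, 1))   # sqrt(9 + 25 + 64) ~= 9.9 ps RMS
```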
Then there’s the issue of having multiple clocks in a system. If a new clock is added to a DAC, what is the DAC supposed to do when that clock runs at a slightly different rate than the incoming data? Asynchronous sample rate conversion is the standard answer, but what it really does is encode the clock-rate difference into the output audio, making that jitter impossible to separate downstream… Not a good design for audio at all.
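A sketch of why that’s irreversible. An ASRC reads the input at positions derived from its estimated clock ratio; any wander in that estimate becomes phase modulation of the output sample values. The ±10 ppm, 1 Hz wobble here is a hypothetical tracking error, and interpolation is assumed ideal to isolate the effect:

```python
import numpy as np

# Toy model of ASRC baking clock drift into the audio. The resampler
# reads the input at positions from an estimated clock ratio; error in
# that estimate becomes phase modulation of the rendered samples.
fs = 48_000
f0 = 1_000
n = np.arange(48_000)

# Hypothetical ratio-tracking error: +/-10 ppm wandering at ~1 Hz.
ratio_err = 1e-5 * np.sin(2 * np.pi * 1.0 * n / fs)
read_pos = np.cumsum(1.0 + ratio_err)          # fractional input position
y = np.sin(2 * np.pi * f0 * read_pos / fs)     # ideally interpolated output

ideal = np.sin(2 * np.pi * f0 * (n + 1) / fs)  # drift-free output
print(np.max(np.abs(y - ideal)))               # nonzero: drift is in the samples
```

Once the wobble is in the sample values themselves, no downstream clock, however good, can tell it apart from the music.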