There are multiple things going on. When JA did the measurements he was using DS software that was measurably noisier than the current software and that had some low-level distortions. (He remeasured it later: https://www.stereophile.com/content/new-firmware-measurements)
Anyway, in his original analysis he was confusing noise with resolution, which is ironic because he also complained about signals well below the noise floor (e.g. the low-level linearity tests and the tests on signals at -120dB). Most of his tests aren't easy to do correctly in the presence of noise; for example, noise could easily mask the width of the bottom of the jitter test.
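For context on those sub-noise-floor tests: an FFT concentrates a coherent tone into a single bin while spreading broadband noise over all of the bins, so a tone can sit well below the time-domain noise floor and still stand clear of the per-bin floor. A toy sketch of the effect (the levels here are made up for the demo, not measurements of anything):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 65536
# White noise at -90 dBFS RMS and a sine at -120 dBFS RMS: the tone is
# 30 dB below the broadband noise, but a 65536-point FFT spreads the
# noise over 32768 bins (~45 dB of processing gain), so the tone still
# rises well above the per-bin noise floor.
noise = rng.normal(0.0, 10 ** (-90 / 20), N)
k = 8192                                    # tone placed exactly on a bin
tone = 10 ** (-120 / 20) * np.sqrt(2) * np.sin(2 * np.pi * k * np.arange(N) / N)
spec = np.abs(np.fft.rfft((noise + tone) * np.hanning(N)))
# The biggest bin in the spectrum is the -120 dB tone, not a noise bin.
```

That's how a -120dB signal can be measured at all; it's also why noise in the measurement chain muddies anything, like the skirts of the jitter test, that isn't a single coherent tone.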
Here’s the theoretically perfect test tone I generated with Dunn’s original formula:
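For anyone who wants to reproduce it, here's a sketch of one common construction of Dunn's J-Test signal: a sine at fs/4 plus a square wave at fs/192 toggling one LSB. The level and the exact construction here are my assumptions for illustration, not necessarily what generated the plot:

```python
import numpy as np

def jtest(fs=44100, bits=16, seconds=1.0, level=0.5):
    """Sketch of a Dunn J-Test generator: an fs/4 sine plus a
    one-LSB square wave at fs/192 (16-bit LSB by default).
    Level and details are illustrative assumptions."""
    n = np.arange(int(fs * seconds))
    lsb = 1.0 / (1 << (bits - 1))        # one LSB with full scale = 1.0
    tone = level * np.sin(2 * np.pi * (fs / 4) * n / fs)
    square = np.where(np.sin(2 * np.pi * (fs / 192) * n / fs) >= 0, lsb, -lsb)
    return tone + square
```

With a capture length of a whole number of seconds the fs/4 component lands exactly on an FFT bin, which is why the theoretical spectrum shows one clean peak with sidebands only from the LSB square wave.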
Here’s the output of the DS:
They match within a fraction of a dB (I don't know why JA's weren't level matched.) The theoretical plot I gave used 65536 points in its FFT and the scope I'm using uses 1048576 points for its FFT, so my peaks should be 16 times narrower. There are no extra peaks within the measurement granularity of my scope:
Hence there is no measurable jitter within the resolution of my tools (the plot is about 4.8Hz per bin.)
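The bin-width arithmetic is simple: an FFT's frequency resolution is the capture sample rate divided by the number of points, so at a fixed rate the 1048576-point FFT has bins 16 times narrower than the 65536-point one. (The ~5 MS/s figure below is just what 4.8 Hz/bin times 1048576 points implies; it's inferred, not a scope spec.)

```python
def fft_bin_width(fs_hz, n_points):
    # Frequency resolution (Hz per bin) of an n-point FFT captured at fs_hz.
    return fs_hz / n_points

# The ratio of resolutions depends only on the point counts:
ratio = 1048576 / 65536          # = 16, hence peaks 16x narrower

# Working backwards from ~4.8 Hz/bin and 1048576 points gives the
# implied capture rate of the scope:
implied_fs = 4.8 * 1048576       # ~5.03e6 samples/s
```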
All of this is beside the point: the J-Test can't measure the jitter of the DS. It assumes that jitter comes from anomalies in the AES3 datastream and that those anomalies will affect the AES decoder (the clock and data separator.) The DS doesn't use the recovered clock from any input in any manner; it just pattern matches the data and dumps the decoded data into a buffer. Measuring the jitter properly would require a crystal with lower near-term phase noise than the one the DS uses (pretty hard to find), and the DS would need to be probed right at the reclocker.
Put simply: there is no measurable transfer of input jitter to the analog output. The jitter I talk about is the jitter and noise generated by the FPGA, power supplies, etc., that leaks through the reclocker or into the analog output. That noise swamps any jitter influence.