Time Domain/Extension Defined

I’ve heard about this concept of time domain or extension, which suggests that inaudible spectral content contributes to how and what we perceive. It’s supposedly the thing that creates “air” and other such qualities. I’ve been searching unsuccessfully for anything that might explain it and finally came across a couple of references I thought might be of interest. I’ve only started to digest them, but it’s already starting to make sense.
Rupert Neve Life Beyond Measure


This book has some good definitions of the critical time and space issues involving music playback and the perception of same.


Thanks for the link. These perspectives hit home with me. @rower30 helps us address these timing differences with his iconoclast cables where he supplies Vp data based on frequency. It makes recorded and reproduced music sound closer to live music.

What has also stood out to me is that reducing the noise floor caused by RF or EMF increases the spatial sense in the music too.

Good stuff. Time and phase: the final frontier. Or maybe just the current one.

Though I’m not so much a fan of DSP on the reproduction end of things as on the recording end, I would be interested to play with a Trinnov system, as it is one of the few (if not the only) monitoring DSPs I know of that addresses phase and timing along with the usual frequency and amplitude.

And of course if you can do it somehow without having to fix it after the fact, that would be preferable.

Acourate and Audiolense do a much better job. I know Acourate, and it is wonderful. In the past I posted some time domain measurements showing before and after. Mitch works as a consultant and can help anyone get the optimal correction from either software.

Everyone who listens to digital should give serious thought to DSP. It does wonders.

This is the link to my measurements here at the forum.


Better than what?

Isn’t this one of the more major claims of Wilson Audio starting with the Alexia and up the model line from there?

This subject is well understood in the SONAR and RADAR world.


They can’t fix the room, unfortunately🤷🏻‍♂️


No, but you can predict how the “room” in any location is going to act by taking multiple measurements of the current ocean conditions including bottom depth, bottom type, temperature, wave height, wind speed, etc…


Right, which is the concept with DSP at the end. Rupert Neve is coming at it from the recording and studio monitoring perspective, saying that unless you have at least 50 kHz response in the recording gear, you’re introducing time/phase/frequency distortions that you can’t do anything about once they are baked into the signal.
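A rough way to see why bandwidth beyond 20 kHz matters even for in-band signals: a single-pole rolloff at a corner frequency fc still bends the phase of frequencies well below fc. The sketch below is a simplified stand-in (a generic first-order low-pass, not Neve’s actual analysis), showing the in-band phase lag at 20 kHz for a few assumed recording-chain bandwidths:

```python
import math

def phase_lag_deg(f, fc):
    """In-band phase lag (degrees) of a single-pole low-pass with
    corner frequency fc -- a simplified stand-in for band-limited gear."""
    return math.degrees(math.atan(f / fc))

# Phase lag at 20 kHz for different assumed recording-chain bandwidths:
for fc in (22e3, 50e3, 200e3):
    print(f"fc = {fc/1e3:5.0f} kHz -> lag at 20 kHz: {phase_lag_deg(20e3, fc):5.1f} deg")
```

With a corner barely above the audio band, the lag at 20 kHz is tens of degrees; pushing the bandwidth out an order of magnitude shrinks it to a few degrees, which is the gist of the “at least 50 kHz” argument.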


Exactly, measure what you have and adjust accordingly.

That’s the theory at any rate.


DSP has made the biggest single difference to date in the quality of playback in my room. REW didn’t work for me, but Acourate has worked surprisingly well. And it costs less than a mid-range speaker or interconnect wire.

We have to be careful with all this. It is TIME for sure, but be cognizant that it is a percentage of the “c” value of about 3.00E8 m/s. That’s FAST, even if we see just 5-10% of that in the bass region.

Yes, we can change the slope of the data lines somewhat to be better in the audio range, but the total differences in the numbers are still small even if the percentage differences seem large. This is why longer cables aren’t a total disaster!

We have two separate issues. The PHASE is the alignment of all the frequencies at the very start of the test. At RF the phase is zero with frequency. At audio it moves from near zero at 20 kHz to -45 degrees at about 20 Hz. All the frequencies aren’t lined up “straight” at the starting line.
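For anyone curious where that -45 degrees comes from: the phase angle of a cable’s characteristic impedance, Z0 = sqrt((R + jωL)/(G + jωC)), swings from about -45° at low audio frequencies (where the conductor resistance R dominates ωL) toward 0° at RF. Here is a minimal Python sketch using assumed per-meter constants, not measurements of any actual cable:

```python
import numpy as np

def z0_phase_deg(f, R=0.02, L=6e-7, C=5e-11, G=1e-12):
    """Phase angle (degrees) of the characteristic impedance
    Z0 = sqrt((R + jwL) / (G + jwC)).
    R, L, C, G are assumed per-meter cable constants, not real data."""
    w = 2 * np.pi * f
    z0 = np.sqrt((R + 1j * w * L) / (G + 1j * w * C))
    return np.degrees(np.angle(z0))

# Low frequencies: R >> wL, so Z0 ~ sqrt(R / jwC) and the angle nears -45 deg.
# At RF: Z0 ~ sqrt(L / C), essentially resistive, so the angle nears 0 deg.
for f in (20, 20e3, 1e6):
    print(f"{f:>9.0f} Hz: {z0_phase_deg(f):6.1f} deg")
```

With these assumed constants the angle sits near -45° at 20 Hz and approaches 0° by RF, matching the trend described above.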

The second part is that once the frequencies move down the cable, they travel at different Vp (group velocity) until we get to RF, where the RF Vp is very close to sqrt(1/dielectric constant) and can reach 95% of the value of “c”.

Both superimpose on top of one another. It is really happening, but it is hard to pin a “value” on what we hear, even if our hearing is more time-based than amplitude-based. Does aligning the Vp improve things? The numbers, like so many of the numbers in audio, seem too small a change to matter. Yet cables sound different, so SOMETHING(s) matters.
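To put rough numbers on “seem too small”: solving the full transmission-line propagation constant γ = sqrt((R + jωL)(G + jωC)) for Vp = ω/Im(γ) shows Vp collapsing to a few percent of c in the bass, yet the resulting arrival-time skew over a few meters of cable is still well under a microsecond. A sketch using the same kind of assumed (hypothetical) per-meter constants:

```python
import numpy as np

def vp(f, R=0.02, L=6e-7, C=5e-11, G=1e-12):
    """Phase velocity (m/s) from the transmission-line propagation
    constant gamma = sqrt((R + jwL)(G + jwC)); Vp = w / Im(gamma).
    R, L, C, G are assumed per-meter values, not measured cable data."""
    w = 2 * np.pi * f
    gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))
    return w / gamma.imag

C_LIGHT = 3.0e8
length = 3.0  # meters of speaker cable (assumed run length)

v20, v20k = vp(20), vp(20e3)
print(f"Vp at 20 Hz : {v20 / C_LIGHT:5.1%} of c")   # collapses in the bass
print(f"Vp at 20 kHz: {v20k / C_LIGHT:5.1%} of c")
# Arrival-time skew between 20 Hz and 20 kHz over the cable run:
print(f"skew over {length} m: {(length / v20 - length / v20k) * 1e9:.0f} ns")
```

With these assumed constants the 20 Hz Vp lands around 5% of c, consistent with the 5-10% figure mentioned above, while the skew over 3 m is on the order of a couple hundred nanoseconds, compared with the 50 ms period of a 20 Hz tone. That is the “too small to matter, on paper” tension.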

My experience seems to say timing changes in a cable move what I want to hear in the right direction. Or is the better design for analog frequencies moving other unknown variable(s) that we hear “more” than the time-based improvements? I can’t design to that, though!

The Series I and II ICONOCLAST speaker cables and ICs are definitely different in the time domain based on their designs. Most report that the Series II seem to be the better-sounding cables. I can report what the designs do that we know of, but that’s as far as I can take it. What we hear is ALL of the rest added in, too.



Fascinating stuff, and a humbling example of every lesson shedding light on how much there is to learn.

In beginning to read up a bit on the topic of “life beyond measurement” I’ve come to notice some manufacturers will spec two amplifier frequency response ranges. One is the audible 20 Hz–20 kHz; the other, for those that I’ve looked at anyway, is anywhere between 3 Hz–45 kHz (M1200) and 3 Hz–500 kHz (Schiit Aegir).
I have no idea if these extended frequency ranges are related to the topic of “life beyond…” but I am curious what they represent. I’ve not been able to find anything. Ideas?

The easy answer is that if an amplifier is loaded incorrectly with capacitance or negative feedback, it changes into an oscillator. The two, amps and oscillators, are actually close cousins. The BW of an amplifier affects where the oscillations will start and how easily they can affect the audible range.

Great care must be taken in designing the properties of an amplifier so that its normal operating frequency range isn’t hijacked by unanticipated high frequencies.
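One way to see the amp/oscillator kinship is through phase margin: how far the feedback loop’s phase is from -180° at the frequency where loop gain falls to unity. When the margin reaches zero, the feedback is effectively positive at that frequency and the amplifier is ready to oscillate. A toy sketch (generic numbers, not any particular amplifier) showing how an extra pole, such as one introduced by a capacitive load, eats into the margin:

```python
import numpy as np

def phase_margin(a0, poles_hz, f=np.logspace(1, 8, 400000)):
    """Phase margin (degrees) of a loop gain with DC gain a0 and the
    given real pole frequencies (a simplified model). A margin near
    zero or negative means the 'amplifier' is ready to oscillate."""
    t = a0 * np.ones_like(f, dtype=complex)
    for p in poles_hz:
        t /= 1 + 1j * f / p
    idx = np.argmin(np.abs(np.abs(t) - 1.0))  # unity-gain crossover
    return 180 + np.degrees(np.angle(t[idx]))

# Single-pole loop: about 90 deg of margin, unconditionally stable.
print(phase_margin(1000, [1e3]))
# Add a second pole (e.g. from a capacitive load): margin shrinks fast.
print(phase_margin(1000, [1e3, 2e5]))
```

The second case still has positive margin here, but it illustrates the mechanism: the wider the amplifier’s bandwidth relative to its poles, the more room stray capacitance has to push that margin toward zero.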



Many thanks, Galen, for opening another door. I’m experiencing a new sense of empathy with those lacking a rudimentary understanding of all the many systems and subsystems that must operate seamlessly for vehicles, ICE or otherwise powered, to do what they do: deliver people and stuff to their destinations in the same condition they left.
This topic certainly is not rudimentary but I think, for me anyway, that it opens up paths to understanding why other components have the effect on SQ that they do.
I found this AudioKarma thread discussing oscillation troubleshooting that helps understand the topic without delving too deep in the weeds. It’s interesting that the final post echoes your “close cousins” comment.
If I understand the implications correctly, undesired oscillation somewhat demystifies the overall effect of power conditioning. If noisy power is audio GIGO, it makes sense that clean input waveforms would eliminate the crap propagating through and affecting outputs. One of several comments in that AudioKarma thread that struck me concerned the possible effects of non-audible frequencies’ subharmonics in the audible range.
Lesson learned: Stop reading if you want to save money on your audio system 🙂