New PS Audio speakers?

Even in an anechoic chamber, I can’t conceive of a situation where any kind of DSP can ameliorate the dips/peaks/phase issues caused by the crossover alone - unless the output was coming from a single (crossoverless) driver, which would be impossible with that kind of bandwidth. Plus the chart indicates that a Rythmik F18 is in the mix. I like Rythmik subwoofers - almost picked one up until I got a deal on a JL Audio - but I think even they would say their subs aren’t capable of what is in the graph.

2 Likes

You’re more than welcome to come for a listen. Orlando, FL

That’s known as the Harman curve, which has a lot of science behind it.

Ha, no, that’s the in room measurements taken at the listening position.

There are four subs in the mix as the graph shows. (2) FM8 and (2) F18 all Rythmik.

Dirac will never do what Audiolense does.

Watch this video for more info.

1 Like

But it’s not? Where did you find this was supposed to be the Harman curve? Inquiring minds want to know™.

1 Like

Floyd Toole documented this extensively during his time with Harman.

Room curve targets

Every so often it is good to review what we know about room curves, targets, etc.

Almost 50 years of double-blind listening tests have shown persuasively that listeners like loudspeakers with flat, smooth, anechoic on-axis and listening-window frequency responses. Those with smoothly changing or relatively constant directivity do best. When such loudspeakers are measured in typically reflective listening rooms the resulting steady-state room curves exhibit a smooth downward tilt. It is caused by the frequency dependent directivity of standard loudspeakers - they are omnidirectional at low bass frequencies, becoming progressively more directional as frequency rises. More energy is radiated at low than at high frequencies. Cone/dome loudspeakers tend to show a gently rising directivity index (DI) with frequency, and well designed horn loudspeakers (like the M2) exhibit quite constant DI over their operating frequency range. There is no evidence that either is advantageous - both are highly rated by listeners.

Figure 12.4 in the third edition of my book shows the evolution of a steady-state “room curve” using very highly rated loudspeakers as a guide. The population includes several cone/dome products and the cone/horn M2. The result is a tightly grouped collection of room curves, from which an average curve is easily determined. It is a gently downward tilted line with a slight depression around 2 kHz - the consequence of the nearly universal directivity discontinuity at the woofer/midrange-to-tweeter crossover. I took the liberty of removing that small dip and creating an “idealized” room curve which I attach. The small dip should not be equalized because it alters the perceptually dominant direct sound.
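Toole’s idealized room curve can be approximated as a gentle, constant downward tilt on a log-frequency axis. A minimal sketch of such a target, assuming a slope of roughly -1 dB/octave referenced to 1 kHz (the slope and reference frequency here are illustrative assumptions, not values taken from the text):

```python
import math

def idealized_room_curve(freq_hz, slope_db_per_octave=-1.0, ref_hz=1000.0):
    """Relative target level (dB) at freq_hz for a steady-state room curve
    that tilts gently downward with frequency. The slope is an assumed
    illustrative value, not Toole's published figure."""
    octaves_from_ref = math.log2(freq_hz / ref_hz)
    return slope_db_per_octave * octaves_from_ref

# Sample the target at a few frequencies (0 dB at the 1 kHz reference)
for f in (100, 1000, 10000):
    print(f"{f:>6} Hz: {idealized_room_curve(f):+.1f} dB")
```

The point of the sketch is only the shape: more energy below the reference, less above, falling smoothly rather than stepping at the crossover.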

It is essential to note that this is the room curve that would result from subjectively highly-rated loudspeakers. It is predictable from comprehensive anechoic data (the "early reflections" curve in a spinorama). If you measure such a curve in your room, you can take credit for selecting excellent loudspeakers. If not, it is likely that your loudspeakers have frequency response or directivity irregularities. Equalization can address frequency response issues, but cannot fix directivity issues. Consider getting better loudspeakers. Equalizing flawed loudspeakers to match this room curve does not guarantee anything in terms of sound quality.

When we talk about a “flat” frequency response, we should be talking about anechoic on-axis or listening window data, not steady-state room curves. A flat room curve sounds too bright.

Conclusion: the evidence we need to assess potential sound quality is in comprehensive anechoic data, not in a steady-state room curve. It’s in the book.

The curve is truncated at low frequencies because the in-situ performance is dominated by the room, including loudspeaker and listener locations. With multiple subwoofers it is possible to achieve smoothish responses at very low frequencies for multiple listening locations - see Chapters 8 and 9 in my book. Otherwise there are likely to be strong peaks and dips. Peaks can be attenuated by EQ, but narrow dips should be left alone - fortunately they are difficult to hear: an absence of sound is less obvious than an excess. Once the curve is smoothed there is the decision of what the bass target should be. Experience has shown that one size does not fit all. Music recordings can vary enormously in bass level, especially older recordings - it is the “circle of confusion” discussed in the book. Modern movies are less variable, but music concerts exhibit wide variations. The upshot is that we need a bass tone control and the final setting may vary with what is being listened to, and perhaps also personal preference. In general too much bass is a “forgivable sin” but too little is not pleasant.
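The “attenuate peaks, leave narrow dips alone” advice maps directly onto a parametric (peaking) biquad, using the standard Audio EQ Cookbook coefficient formulas. The 45 Hz mode frequency, -6 dB cut, and Q of 5 below are made-up values for illustration; the dip simply gets no filter at all:

```python
import cmath, math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """RBJ Audio EQ Cookbook peaking-EQ biquad coefficients (normalized so a0=1)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude_db(b, a, fs, f):
    """|H| in dB at frequency f, evaluating the biquad on the unit circle."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)  # z here plays the role of z^-1
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# Cut an assumed 45 Hz room-mode peak by 6 dB (Q = 5); narrow dips get no filter.
b, a = peaking_eq_coeffs(fs=48000, f0=45.0, gain_db=-6.0, q=5.0)
print(f"{magnitude_db(b, a, 48000, 45.0):.1f} dB at 45 Hz")  # -6.0 dB at the mode
```

Because the cut is narrow (bandwidth ≈ f0/Q = 9 Hz), the response a decade away is essentially untouched, which is the point: fix the audible excess, leave everything else alone.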

Idealized room curve

2 Likes

Yes… you show the anechoic, on-axis frequency response in your response to me, but it matches your in-room, off-axis response? They are not supposed to be the same.

1 Like

This is not the Harman curve, and the Harman curve is for headphones.

More importantly, a heavily corrected speaker response has little to do with a speaker’s actual native sound. Nearly any speaker can be bludgeoned into submission with digital processing.

6 Likes

(Diatribe Warning) :cowboy_hat_face:

Agreed. If you don’t like the sound of your speakers or your room, change the speakers or room. I’ve been through a lot of room correction over the years, and while it has gotten better, I always end up preferring no DSP in the end (koff, koff) subjectively. Speakers and rooms are inherently not “perfect”. Like life. Like Music.

Room correction has gotten even more popular with the rise of the home studio, where the engineer is using headphones, and/or poor monitoring, a compromised room, etc. - and as a tool, and an inexpensive solution to that problem, it is arguably necessary if you want results that translate to good systems as well as cell phones. However it is not what anyone would consider an ideal reference.

As far as what we as audiophiles would consider “good” recordings, altering the painstakingly created, performed, recorded, mixed and mastered program material coming out of the speaker at the last moment to “fix” a speaker/room issue is an insult to the artist and engineers. It does not make it more like what was intended any more than MQA does.

For that matter, DSP’ing your speakers is saying to the speaker manufacturer, “I don’t think you did all that great of a job, because it doesn’t measure ideally”. Build your own perfect speaker, with or without DSP and try and sell them. Let us know how that turns out. DSP on the output is a compromise solution to problems either imagined, or if real, that should be addressed otherwise. IMO. My two cents.

I actually suspect it is something more or less than two cents, however I don’t have the proper measuring gear to give you a more accurate assessment.:man_shrugging:t2:

10 Likes

For 35 years I have resisted the temptation of various equalizers and DSP mechanisms to correct the sound in any of my listening rooms. I have used component selection, speaker placement, and minor room treatments to coax better sound out of the room. For the most part I have been happy with the results. To my ear many global corrective methods get in the way of my enjoyment of the listening experience. Then there are the corrective measures that introduce their own set of problems, and the temptation to overdo it with adjustments. In the end I leave it up to the final mix and the mastering/mixing engineer’s choices. I can understand why some may wish to pursue DSP, but it is not a path I wish to pursue.

5 Likes

Well… it may be possible, but I have yet to achieve a better result in my room with my kit without room correction. (I use Anthem Room Correction via my Anthem Pre/Pro.)

I am on a journey of continuous improvement with my room and system like many here on the forum. To date, changes to room configuration, speaker placement, kit, and the addition or placement of some room treatment can and has resulted in better performance. However, re-running ARC after such changes always results in an improvement over bypassing ARC.

All that said, ARC seems to focus mostly on lower frequency smoothing and does not take a “heavy-handed” approach, if you will. Also, I would agree that DSP should be a tweak; i.e., second to the room treatment and set up for ensuring a great speaker response in a given listening room.

2 Likes

Yes. I think anything that results in something one prefers subjectively is by definition good. Not telling anyone not to do what they prefer. Typically I have found that processing the entire signal - or even just the bass, when delay is introduced - always started out sounding better. You listen for the thing on the graph that is now smoothed, and there is huge mental reinforcement in that. “I KNOW it is better. See?” (points at graph)

Then I’d always start scratching my head at some point, wondering what was off. Take out DSP - and while the “problem” I was addressing with it would come back, it would sound more natural to me. And I’d then go after other, more traditional means of addressing it.

And I’m talking including a $4k DEQX. With an external processor, there is of course the issue of an added stage of electronics and cables and A/D, D/A, etc. vs. a fully digital signal chain where DSP doesn’t have to go through more stuff. Even so, it does tend to generate noise (at least) due to the bit crunching.

I now use the DEQX to delay and EQ a pair of 18" subs flanking the listening position (12’s up by the speakers, not run through anything aside from their own knobs, though no phase adjustment/delay).
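Time-aligning subs that sit closer to the listener than the mains, as described above, comes down to delaying whichever source’s sound arrives first. A back-of-the-envelope sketch (the distances and the speed of sound are assumed example values, not measurements from this system):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound at ~20 °C

def sub_delay_ms(main_dist_m, sub_dist_m):
    """Delay (ms) to apply so both arrivals coincide at the listening position.
    Positive result = delay the subwoofer; negative = delay the mains instead."""
    return (main_dist_m - sub_dist_m) / SPEED_OF_SOUND_M_S * 1000.0

# Example: mains 4.0 m from the listener, flanking subs 1.0 m away
print(f"Delay subs by {sub_delay_ms(4.0, 1.0):.2f} ms")
```

Processors like the DEQX do this (plus EQ) in one box; the arithmetic above is just the alignment part, and in practice you would fine-tune by measurement or ear rather than tape measure alone.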

3 Likes

A sound approach, if you will. :slight_smile:

I think we are in (somewhat) vehement agreement.

I don’t (at least so far) do graphs. I just do comparative listening and try to hang on to the system changes I like and leave the others behind.

I have thought about stripping ALL of the various tweaks from the basic components of the system and starting over on speaker and kit placement and tweaking the (only three) placement of absorption panels I have deployed; and then systematically adding the tweaks back in one at a time to make sure I have the best possible system synergy with what I currently own.

But the thought of undertaking this task and the time commitment to it has prevented me from taking such drastic action. At least so far!

Cheers.

3 Likes

A high quality diatribe. :+1:

I agree; every system I have heard with DSP sounds better with DSP off, even if the frequency response is not as “good” and the measurement graph tells me the DSP version is “better.” I do not know what it is, but something is off with DSP no matter how much money is spent.

I have one friend in particular with spectacular gear which sounds wonderful without DSP. But when he switches in DSP the music dies, but he lights up. :slight_smile: A great example of listen to what you like. Me, no DSP; him, lots of DSP.

I have heard a good number of systems with DSP below 100Hz. This can be an improvement to my ears. It can often however still sound odd in some way.

7 Likes

Digital processing sounds digitally processed.

Not that that is a bad thing… unless you don’t like the sound of digital processing :slight_smile:

9 Likes

You are vastly more succinct than us. :slight_smile:

5 Likes

I love it when logic meets pragmatism! Makes life easier and avoids unnecessary discussion :wink:

2 Likes

Exactly.

Jason, this was an informative presentation. I’d like to see you interview Mitch Barrett.

1 Like