MQA Controversy

Ted Smith said 24/48k sounds a lot better than 16/96k or even 16/192k.
Now that's interesting.

I have never done the tests myself, but I know of those who have, and they say the opposite. In fact they claim they really can’t detect bit depth until it goes below 14 bits.

Mind you, these same people claim to have taken a master tape, used a DSD analog-to-digital converter on the output of the master tape machine (it was one of those from the Tape Project), then Saracon to convert it to 16/44.1, and found it very close. The DAC used was a heavily tweaked DAC using one of those old Philips double crown chips. I have one myself and it does sound good - but compared to the DS a bit ‘euphonic’, for want of a better word - the DS is more neutral.

I have zero idea why the disparity.

Thanks

Bill

Multiple times over the years I’ve lopped a bit off at a time (with and without dither) to listen to what changes. As I get older some of the changes are less obvious, and there’s plenty of material where the differences seem minor to me, but averaged over a lot of material the missing bits seem pretty obvious. I don’t think I can easily hear the difference between 18/96 and 20/96 any more on average material, but 16 vs 18, especially at 44.1, is still quite a difference to me.
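For anyone who wants to try this at home, the test is easy to reproduce. Here’s a rough Python sketch of the idea - the function and the TPDF dither details are my own illustration, not Ted’s actual test code:

```python
import numpy as np

def truncate_bits(samples_24bit, n_bits, dither=True):
    """Reduce 24-bit integer PCM samples to n_bits, optionally with TPDF dither."""
    shift = 24 - n_bits                 # bits being thrown away
    step = 1 << shift                   # size of one coarse quantization step
    x = samples_24bit.astype(np.int64)
    if dither:
        # TPDF dither: sum of two uniform randoms, one step peak-to-peak
        d = (np.random.uniform(-0.5, 0.5, x.shape) +
             np.random.uniform(-0.5, 0.5, x.shape)) * step
        x = x + np.round(d).astype(np.int64)
    # round to the nearest coarse step, then clip back into 24-bit range
    x = np.round(x / step).astype(np.int64) * step
    return np.clip(x, -(1 << 23), (1 << 23) - 1)

# e.g. compare 16-bit and 18-bit versions of the same 24-bit file:
# y16 = truncate_bits(y, 16); y18 = truncate_bits(y, 18)
```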

The differences are much more apparent with dynamic recordings with real ambient noise/room tones, etc., especially in the “cracks”. Compressed material or completely artificial material hides the differences to me, probably because it never gets quiet enough to let you listen to, say, piano decays, cymbal decays, echoes in a cathedral, etc. To me it’s the micro-dynamics and realism that suffer with too low an S/N ratio.

With the DACs I had access to years ago, it seemed like 44.1 sucked and higher rates were much better; but with the much better DACs I’ve heard in the last 15 years, and the proper selection of filters, 16 bits doesn’t seem to cut it on a lot of music I like.

dvorak & timm: (with apologies to everyone else for the continued side thread) -

That’s cool! As we exchanged before, I understand the chain, but have not played with the app that allows higher digital output, and what follows is why. My statement/question still stands, though maybe on one leg ; )

Ordinarily, the SB makes a 705 kbps FLAC out of anything higher-res. I assume you’re playing back a .dsf file, which I haven’t tried. While playing back, for example, the 24/192 AIFF of Kind of Blue, under More Info it says:

Volume Adjustment: 1.56 dB (1.10 dB to prevent clipping)

Bitrate: 9216kbps CBR (Converted to 705kbps FLAC)

Sample Rate: 192.0 kHz

Bit Rate: 24Bits

So, a couple of things here go into my use of the term “unmolested” (read: “for real bit perfect”). First, it is making a significantly lower-bitrate stream and playing it as FLAC. This is the same rate it shows when playing back a Redbook file, and is the Redbook data rate. Plus (and I’ve not delved into this) it is digitally reducing the volume. This is not the same as the volume setting you make with your remote, for example.
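To illustrate the volume point: any non-zero digital gain rescales and re-quantizes every sample, so the output words can no longer match the source bits. A toy example (mine - I don’t know exactly where in its chain the SB applies this gain):

```python
import numpy as np

def apply_gain_db(samples, gain_db):
    """Scale integer PCM samples by a dB gain and re-quantize."""
    gain = 10.0 ** (gain_db / 20.0)     # e.g. -1.10 dB -> ~0.881 linear
    return np.round(samples * gain).astype(np.int64)

original = np.array([1000000, -2345678, 42])
adjusted = apply_gain_db(original, -1.10)
print(np.array_equal(original, adjusted))   # False: the bits have changed
```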

When you play this stream out to a DAC, the DAC will typically see the 24/192 “wrapper” and that will be displayed - but I’m still wondering if that’s what has been reconstructed, bit perfect, in the DAC. I guess if I’m going to believe displayed info, I have tended to believe the “More Info” display. My understanding of the architecture of the SBs is that this is how they work. Perhaps the app fixes that, beyond simply enabling the chip’s full spec.

Beef, that’s definitely not happening in my system. The SBT has no knowledge of DSD, and DSD-over-PCM requires a bit-perfect (no SRC, no gain adjustments) 24/176.4 PCM container to transport the 2.8224 MHz 1-bit PDM samples to the DAC. The fact that the DirectStream DAC shows “DSD64 1-bit” on screen when I play DSD64 content in Roon confirms that the SBT is doing bit-perfect transport of 24/176.4 PCM. There’s no reason to think it would be doing something different with 24/192, but I’ll doubly confirm with a test file from PS Audio sometime this weekend.
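For anyone following along, here’s roughly how DoP framing works and why it tolerates no SRC or gain changes. This is a simplified mono sketch of the general scheme, not any particular player’s code:

```python
def dop_frames(dsd_bytes):
    """Pack a mono stream of DSD bytes (8 bits each) into 24-bit DoP words."""
    markers = (0x05, 0xFA)             # alternating DoP marker byte
    frames = []
    for i in range(0, len(dsd_bytes) - 1, 2):
        # top 8 bits: marker; lower 16 bits: two DSD bytes (16 DSD samples)
        word = (markers[(i // 2) % 2] << 16) | (dsd_bytes[i] << 8) | dsd_bytes[i + 1]
        frames.append(word)
    return frames

# 176.4 kHz x 16 DSD bits per word = 2.8224 Mbit/s, exactly the DSD64 rate.
# Any SRC or gain change corrupts the 0x05/0xFA markers, and the DAC falls
# back to treating the stream as PCM (i.e. loud noise instead of music).
```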

bhobba said

Now that’s interesting.

I have never done the tests myself, but I know of those who have, and they say the opposite. In fact they claim they really can’t detect bit depth until it goes below 14 bits.

Mind you, these same people claim to have taken a master tape, used a DSD analog-to-digital converter on the output of the master tape machine (it was one of those from the Tape Project), then Saracon to convert it to 16/44.1, and found it very close. The DAC used was a heavily tweaked DAC using one of those old Philips double crown chips. I have one myself and it does sound good - but compared to the DS a bit ‘euphonic’, for want of a better word - the DS is more neutral.

I have zero idea why the disparity.

Thanks

Bill


I’ve enjoyed a DirectStream DAC now for about 2 1/2 years! When I first read about MQA in The Absolute Sound, it seemed a bit “over the top” to me, but it piqued my interest. When Tidal started streaming MQA, I first bought a Meridian Explorer2 DAC ($199 on Amazon) and was pleasantly surprised at how good MQA sounds. I later replaced the Explorer2 with a Mytek Brooklyn DAC; I use it for MQA streaming and the DirectStream DAC for everything else.

I find that some albums sound just a little better in MQA, but many (to me) do sound significantly better. I find MQA offers more controlled and “rounded” bass; highs are a bit more defined; cymbals that are well recorded have the visceral “timbre” I remember from my rock and roll drummer days; and voices, stringed instruments, and especially acoustic pianos sound more “realistic” - sometimes I’d describe them as “analog sounding”. In general the MQA versions sound a bit more “musical” and more “dynamic”.

Having said that, I find that “normal” 16/44.1 through 24/192 FLAC and AIFF files sound better using the DirectStream DAC vs. the Brooklyn.

With Universal Music joining Tidal and Warner Music, I think MQA has “arrived”.

Ted Smith said

Before swallowing any technical information from MQA at face value read, for example: https://www.xivero.com/blog/hypothesis-paper-to-support-a-deeper-technical-analysis-of-mqa-by-mqa-limited/

Interesting paper. A few points:
  1. As described it will not work. The second folding destroys the last 7 bits of the first folding, so it can’t be decoded.

  2. From what I have read, it does a variant of sparse coding - it convolves the signal with some Bessel function. For simplicity, articles I have read used a triangle function. It can be applied to any rate that is 2x, 4x, etc. When it unfolds, it simply linearly interpolates; other convolving functions, of course, lead to different ways to upsample. It also automatically does the shallow aliasing filtering. Then it uses the folding method described in the paper.
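For those who haven’t seen those articles, the triangle-function version is easy to sketch: downsample by convolving with a triangle kernel (which also gives the gentle, shallow filtering), and unfold by linear interpolation, which is itself convolution with a triangle. A rough illustration of the general idea, not MQA’s actual filters:

```python
import numpy as np

def downsample_2x_triangle(x):
    """Smooth with triangle weights [1/4, 1/2, 1/4], then keep every 2nd sample."""
    padded = np.pad(x, 1, mode='edge')
    smoothed = 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]
    return smoothed[::2]

def upsample_2x_linear(x):
    """Linear interpolation: insert the midpoint between neighbouring samples."""
    out = np.empty(2 * len(x) - 1)
    out[0::2] = x                       # original samples
    out[1::2] = (x[:-1] + x[1:]) / 2    # interpolated midpoints
    return out
```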

Personally I would not do it the way they describe. I like the idea of sampling at 176k using a convolving function and zeroing out the bits below the noise floor, but using a more modern compression algorithm - OptimFROG. I did a few experiments using 16 bits as a sort of averaged number of bits, and you generally get something like 60-70 MB. You can then upsample it to 352 or even higher.
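As a rough illustration of that idea (my own sketch - the fixed keep_bits here stands in for whatever per-track noise-floor analysis a real encoder would do):

```python
import numpy as np

def zero_below_noise_floor(samples_24bit, keep_bits=16):
    """Keep only the top keep_bits of each 24-bit sample; zero the rest."""
    mask = ~((1 << (24 - keep_bits)) - 1)    # keep_bits=16 clears the low 8 bits
    return samples_24bit.astype(np.int64) & mask

# The zeroed samples are still valid 24-bit PCM any player can use, but a
# lossless codec (FLAC, OptimFROG, ...) never has to spend bits coding the
# sub-noise-floor detail, so the files shrink dramatically.
```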

Thanks

Bill

bhobba said
  1. As described it will not work. The second folding destroys the last 7 bits of the first folding, so it can’t be decoded.

  2. From what I have read, it does a variant of sparse coding - it convolves the signal with some Bessel function. For simplicity, articles I have read used a triangle function. It can be applied to any rate that is 2x, 4x, etc. When it unfolds, it simply linearly interpolates; other convolving functions, of course, lead to different ways to upsample. It also automatically does the shallow aliasing filtering. Then it uses the folding method described in the paper.

Personally I would not do it the way they describe. I like the idea of sampling at …

Without specific references I don't necessarily know what it is that you say won't work - you have too many pronouns with ambiguous referents which I can't resolve.

If in your point 1 you are referring to figure 2 in the paper: the bits of the 48-96k band are encoded into the low bits in the 0-48k band. This isn’t hard because they assume that the level of the input signals falls with frequency. This assumption is also convenient because it implicitly preferentially preserves the folded information from the last stage.

If that’s not what you are referring to, but you are still talking about the hypothesized MQA encoding process, the claim in the paper is exactly that MQA destroys the low bits of the original signal (but preserves some of their info via dithering), i.e. that MQA can’t be lossless. But remember it has the 2nd band of the low bits available to use in decoding the next band at each unfolding, i.e. they assume that, say, 1/2 of 1/3 (~7/24) of the bits is enough to represent what matters in the 2nd band at each unfolding. Since they assume that there aren’t significant levels in the 2nd band at each stage, it’s not hard to get that compression.
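To make the Figure 2 idea concrete, here is a toy single-stage fold/unfold. It is only an illustration of the paper’s hypothesis - the quantization, dithering, and compression in the hypothesized (or real) scheme are certainly more sophisticated:

```python
import numpy as np

def fold(base_24bit, top_band, hidden_bits=7):
    """Hide a coarsely quantized top band in the low bits of the baseband."""
    levels = 1 << hidden_bits
    # scale the (assumed quiet) top band, roughly -1..1, into unsigned levels
    q = np.clip(np.round(top_band * (levels / 2) + levels / 2), 0, levels - 1)
    return (base_24bit & ~(levels - 1)) | q.astype(np.int64)

def unfold(folded, hidden_bits=7):
    """Recover the coarse top band and the baseband minus its low bits."""
    levels = 1 << hidden_bits
    top_band = ((folded & (levels - 1)) - levels / 2) / (levels / 2)
    base = folded & ~(levels - 1)      # the original low bits are gone (lossy)
    return base, top_band
```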

If I missed your point and/or my descriptions aren’t clear and if you think that the paper’s hypothetical MQA isn’t correct, you might want to annotate their figures to show what you think fails or …

For your point 2, indeed the paper does mention sparse sampling theory as a likely method for compression. (e.g. the text next to the “Lossy Compression” rectangles in Figure 2.)

For your third paragraph I don’t know if you are talking about your alternative to MQA or your alternative to the paper’s strawman alternative to MQA. I presume that the paper’s strawman chooses FLAC just because it’s sufficient to achieve the required compression and it’s ubiquitous, so they don’t need to justify some other alternative. I think many of us could come up with MQA alternatives that achieve their stated compression goals and their goals of a nested hierarchy of resolutions that can be dynamically selected based on available bandwidth, but without using proprietary technologies.

In any case, the points of the paper are that MQA (or any proprietary format/process, etc.) isn’t needed to achieve the compression and other stated goals of MQA, and that MQA as described by MQA is a lossy process.

Dvorak - Ohhhh, I get it now - it’s not using the DAC in the Touch or Squeezebox Server. Kind of an ethernet to optical converter.

I may have mentioned this before, way too many posts to find mine.

I don’t know if this is possible, but I think an ideal solution, if we are stuck with MQA (which does sound better partially unfolded than the 16/44 files on Tidal), would be for Meridian to make a box - it could be the same case as the Explorer DAC - that would do all the MQA processing and then output the fully unfolded digital signal. Design it so it just passes non-MQA untouched, and maybe have it convert from USB to S/PDIF optical or coax.

If that were possible, we could then send it to our DAC of choice. I think if they can sell the Explorer for - what is it now, $199 or $299? - they would sell lots of them at that price. I would like to get the full MQA, but I would rather stick with partial decoding than change DACs. And if I do upgrade my DAC, it is about 95% certain that it would be a DirectStream, and that is not going to become an MQA DAC.

Since they do the partial unfold in Tidal’s software, I think a total software/firmware solution is possible. They have backpedaled both on their claim of having to know which ADC was used (when they started doing batch encoding) and on needing access to each brand of DAC for the partial unfold, so I am suspicious about the final steps.

As for streaming, I don’t see that going away, so we are going to have to live with MQA if we want to get the most out of Tidal. And finally, Tidal is no longer putting all Master files under What’s New. They have added an “M” in a box next to the titles, like they do with the “E” box for explicit content.

I just listened to Lou Reed’s “Magic And Loss” partially unfolded, and it sounded very good. I can’t say I have ever heard it in hi-rez as a comparison. I find Tidal’s 16/44 very inconsistent - some good, others that can’t compete with a CD played on the PWT. They only post provenance when it is included in the titles, like with the Jethro Tull files that state they were remastered by (I forget the guy’s name, and I have a 75 lb dog in my lap, so I can’t get up to go look). But you get the idea.

Just thought that I’d add my 2 cents to what I’ve found to be a very interesting and thought provoking discussion.

Prior to the availability of MQA on TIDAL, all of my listening at home had been at the Redbook level (via TIDAL, or my own CDs stored as uncompressed FLACs). In my main rig, a first-generation Bluesound Node handles the streaming chores. Typically I only use the Bluesound in my main rig as a renderer; I have it hooked up by Toslink to a Bryston BDA-1. Via its analog outputs the Bluesound’s DAC does the full monty (i.e. the complete unfold); via its Toslink I only get the benefit of the first unfold. However, although the Bluesound has never sounded better to my ears through its analog outputs than when playing MQA recordings, it is still no match for listening to the same MQA recording using the Bryston DAC, which I tend to use with upsampling engaged. These MQA recordings via TIDAL have been my first taste of higher-than-Redbook resolutions in my own system. I can’t say that all of the MQA albums have knocked my socks off sonically, but there are some gems to be found … Brad Mehldau’s “Blues and Ballads” is the best thing I’ve listened to so far.

I totally get Paul’s concerns, especially with respect to reservations related to the proprietary nature of MQA as well as its inherently lossy nature. On the other hand, to the extent that I can enjoy the sonic benefits streaming at higher than Redbook resolutions (especially without having to add any new kit to my system) I’m a happy camper. So as long as we neither cripple ourselves nor limit other options, I think we should be careful not to throw the baby out with the bath water nor to let the perfect be the enemy of the good, but understanding that as audiophiles it is in our nature to always be seeking something better.

Ted Smith said
Without specific references I don't necessarily know what it is that you say won't work - you have too many pronouns with ambiguous referents which I can't resolve.
The way it describes folding: first you do frequency splitting - say, in the first case, 0 to 48k and 48k to 96k. You compress the 48k-96k band into the last 7 bits of the 0-48k band. I will call this the first folding.

This reduces it to a 96k sampled stream. Then you do exactly the same thing again - I will call this the second folding. Split it into 0-24k and 24k-48k. But when you put the compressed 24k-48k band into the last 7 bits, you overwrite the last 7 bits of the first folding.

When you unfold it you get a 17-bit dithered version of the first folding - but the last 7 bits are missing, so it can’t be used to reconstruct the original 192k.
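The objection in miniature (a toy illustration of the objection as stated, not of what MQA actually does):

```python
# If each folding stage naively writes its payload into the same low 7 bits,
# the second fold clobbers the first, and those bits are gone at unfold time.
LOW7 = (1 << 7) - 1

def naive_fold(sample_24bit, payload_7bit):
    """Overwrite the low 7 bits of a 24-bit sample with a 7-bit payload."""
    return (sample_24bit & ~LOW7) | (payload_7bit & LOW7)

s = 0b101010101010101010101010      # some 24-bit sample
s = naive_fold(s, 0b0110011)        # first fold: compressed 48-96k band
s = naive_fold(s, 0b1011101)        # second fold: compressed 24-48k band
print(bin(s & LOW7))                # 0b1011101 - only the second payload survives
```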

You can get around it, of course, by doing what you say - but I don’t see how you can encode the already-encoded bits plus the 24-48k band into 7 bits. Maybe they use fewer bits in the first encode.

I think it’s a small point, because they do a sort of version of sparse coding in the first folding that doesn’t touch the bottom 7 bits, so there is no issue unfolding it. It has the advantage of doing the shallow roll-off they like at the same time, thus avoiding brick-wall filters.

For a triangle convolving function they explain it here:

I personally don’t like this second folding process, i.e. the frequency-splitting one - I much prefer simply to compress directly, using the idea of zeroing out bits below the noise floor. My experiments show that with a more modern compression algorithm, namely OptimFROG, you get files as small as or smaller than MQA anyway. Yes, it’s just a ‘straw-man’, off-the-top-of-my-head possibility to show MQA is not the only way, like the one proposed in the article.

Your final point is IMHO the key and very important one - it’s simply not necessary to achieve its aims, at least for streaming.

Thanks

Bill

I guess everyone here has different catalogs … different methods of listening etc… which leads to different views.

For myself? Practically 90% of my listening is with hi-res stored locally, so you can see why this doesn’t interest me. Taking it a step further: if MQA does at some point become the de facto standard, it essentially messes with my world. Why do I think that? Because if I want something new in a high-res format, I believe things will just move to this ‘new’ format as the ‘quality’/‘masters’ format, decreasing my ability to purchase new music in my format of choice. Why will my format of choice go away? Competition from a new masters format.

Personally, I think it will fail as MLP did before it. I just don’t think they are very good at marketing this type of initiative, and this go-round seems worse than the last, in my opinion.

How long have we been talking about ‘bit perfect’ here?

Time will tell I guess.

A little MQA weirdness:

I have a Bluesound Node2 that has an unexplained reaction to some MQA files downloaded from 2L. Before the purchase of the Node2 I had downloaded several of the “Free Testbench” MQA tracks from 2L and, as I do with most downloads, changed the track names and tagged them. In the process of renaming the tracks I inadvertently removed the .mqa extension. Since, at the time of download, I had no way to take advantage of MQA, the tracks remained idle.

After some time with the Node2, I decided to try those 2L MQA tracks to see how they sounded compared to 2L’s high-res versions. When the MQA tracks were played, the BluOS app did NOT show the MQA logo and the PWDII reported the resolution to be 44.1. To find out why, I downloaded another MQA track from 2L and did not change its name (leaving the .mqa extension intact). That track did show the MQA logo and the PWDII reported 88.2. Weird, I thought.

Then I downloaded a couple of David Elias MQA tracks from Bandcamp, noticing that by default there was no .mqa extension, and sure enough they acted just like the 2L tracks with no extension. Once the .mqa extension was added to David’s tracks they behaved as expected. David had no idea why this should be the case and figured it’s a Bluesound issue, as the same tracks sent to him worked fine through an MQA DAC even without the .mqa extension (of course, that’s a different setup, not including a Node2).

Why I like MQA:

I don’t care about bits lost or not lost, about lossy issues, or any other techie stuff concerning the MQA compression scheme. I do know that, in 90% of the comparisons, the MQA version sounds smoother. Downloads were costing me close to $60 per month, so Tidal was a cost-effective solution. The fact that Tidal just happens to have MQA is just a bonus for me at no extra cost. Most of the MQA stuff is not to my taste, so my Tidal playlists consist of maybe 2% MQA tracks. Maybe that will change in the future.

CD or Downloads?

Paul: Maybe CD sounds better but CD is not an option for me for 2 reasons:

  1. Got tired of the drive mechanisms of expensive transports/players failing, only to find the repairs expensive or not even doable.

  2. I never play entire albums. My listening is to playlists such as “Jazz”, “Symphonies”, “Violin Concertos” etc… Try that with a CD player.

If we play Tidal, it’s a no-brainer that we want or need MQA. I have no intention of buying MQA as a replacement for my library; same with DSD. Further, if I have the option to buy a download in three formats - PCM, DSD, or MQA - DSD is the choice, and it will stay that way.

Now, why any DAC maker would decide not to go full MQA - I get that part, and as PS Audio rolls out firmware it may be too complex, and this too is fine.

What I don’t get is putting it down as a bad thing to have come to us, period. Even with just software partial unfolding it’s very nice - but it does not polish a turd, and we have plenty of that music, sadly.

Playing Tidal on a Select II in partial software mode is still way beyond their normal 16/44. This is true on any DAC I’ve tried or own.

The only music I can say is beyond full MQA is some music I own that is just stellar - rare for most of us. Tidal has given us a jewel, and as they add more albums - and now a much better way of knowing what is MQA or not - that $25 a month is an amazing offer.

The final unfold is on par with going from HiFi to MQA, which again means it’s a no-brainer to have it. For anyone debating whether it’s good or bad, or whether a technical paper should keep anyone from having better sound: just play an album you own on CD, then find it on Tidal and play it in MQA, and smile at the next level of separation and increase in quality. Then post that PS Audio should find a way to make it happen. This is the same stance PS Audio had with DSD. In short, it only has to be better for the users, not for a technical paper. I get that MQA is yet one more thing to design, and a long wait to have it implemented - about a year or more, right? If I may give PS Audio some advice: make it happen for your owners and let them decide if it’s better. Sorry for my post, but this is the same story as with DSD.

As a semi-literate tech person, I've been enjoying the deep dive into the inner workings of the digits and bits of MQA. Many thanks to all participants.

However, perhaps more relevant to MQA's future than the missing bits is the missing $$ for the streaming industry. Streaming revenues reportedly overtook sales revenues of physical media last year, but streaming is not yet a profitable business model. Here are two fairly recent articles that explore this conundrum:

https://www.forbes.com/sites/quora/2016/09/06/the-streaming-music-industry-has-some-serious-financial-puzzles-to-solve-to-become-profitable/#1fb4069a6dfd

and

http://pitchfork.com/features/lists-and-guides/9986-the-year-in-streaming-2016/

From the Pitchfork.com article:

“Profitability is still a challenge for the streaming business, to put it politely: Pandora has not yet managed to be consistently profitable; Spotify has never turned an annual profit; Tidal lost twice as much money in 2015 as they did the previous year; and it’s no coincidence that players like Apple, Amazon, and Alphabet, the corporate parent of Google and YouTube, have other, more lucrative businesses. And there aren’t many other companies left to write a check for a music streaming service. The labels, too, have to hope that streaming can grow fast enough to offset an ongoing decline in downloads and physical sales.”

So far, streaming services have been relying on exclusive releases (and not SQ) to boost subscriptions. Perhaps, the streaming services plan to use lossless and hi-res streams to justify higher subscription costs down the road. That may work, but it won't be easy. The industry basically poisoned that well by setting the value bar too low at the get-go with free/cheap mp3 files.

Excellent post, Howard. I have long said that streaming is an obscene free ride (as I hypocritically enjoy it) and can’t figure out why anyone would say that $19 a month is too expensive for access to nearly all of the music in the world. But what do we expect? Nobody likes to be asked to pay for something they’ve gotten used to enjoying for free. That said, now that so many people have come to enjoy streaming, I think all the services should suspend their free offerings completely. Show people how important streaming is to them and then say, “You want it? Pay for it.” It would also be nice if musicians got their fair share. Before streaming there wasn’t a music lover alive who didn’t spend $200 a year on music (adjusted for inflation, of course). Suddenly that’s too much? People have become too greedy.

I pay Tidal $25 per month and never say a word but thanks. And now that MQA is on the scene, why would I buy music, unless they don’t have it or it’s a keepsake?

timm said

if MQA does at some point become the de facto standard, it essentially messes with my world. Why do I think that? Because if I want something new in a high-res format, I believe things will just move to this ‘new’ format as the ‘quality’/‘masters’ format, decreasing my ability to purchase new music in my format of choice.


This concerns me as well, especially because MQA also wants to get into the studio and wants its process employed in ADCs as new recordings are made.

1234 said

If we play Tidal, it’s a no-brainer that we want or need MQA.


I disagree. We have plenty of bandwidth and thus do not need a lossy format to stream better than Redbook. I would vastly prefer streaming real, actual high-resolution music than an ersatz pretender.

Howard said

However, perhaps more relevant to MQA's future than the missing bits is the missing $$ for the streaming industry. Streaming revenues reportedly overtook sales revenues of physical media last year, but streaming is not yet a profitable business model.

This is a fascinating aspect of a lot of tech companies, particularly those that are Internet-based. Many make no profit, or very little, in light of their market penetration and revenues. The irony here is that streaming outsells CDs, but CD sales actually make money; streaming does not.

vhiner1 said

Before streaming there wasn’t a music lover alive who didn’t spend $200 a year on music (adjusted for inflation, of course). Suddenly that’s too much? People have become too greedy.


We always need to remember that not everyone behaves the same. Many full-fledged, card-carrying music lovers spend less than $200 a year on music. They already have a library they are happy with, listen to radio and other free sources, add only recordings that are important to them, and do not spend willy-nilly.

Money is tight due to health issues: Social Security didn’t go up in 2016, and this year I get only $2.50 more per month, while my Part D deductible went from $0 in 2015 to $360 in 2016, and now $400. Then 3 of my 5 meds were moved up a tier, so what cost me $22 a month after the deductible last year now costs $55 a month.

The COLA, which tracks the cost of living, didn’t go up, mostly due to lower gas prices. The thing is, seniors and the disabled don’t drive very far.

I think I spent less than $100 last year on music. I was paying around $30 every six months for Sirius, but cancelled that. I have cut every corner I can and am barely surviving. That is why I am worried that Tidal will go from around $20 to $40 for MQA.

My whole financial plan for the future is winning the lottery. So I am basically screwed. I’m lucky I put together a good system, and have a nice sized music collection, when I could.

So, not everyone here is well off. I love steak, but I eat a lot of toaster waffles. My oven works, but the range top is out - I had a coil go bad. Luckily I had a big pot of water on there; 5 minutes earlier I had been cooking Italian sausages, and that would have blown up into my face. As it was, it looked like an arc welder - bang, then burn. The big stock pot, a nice heavy stainless steel pot, ended up with a quarter-inch hole burned right through the bottom. Then there are my teeth: I have 2 loose teeth I can’t afford to get fixed. It never stops. So, I doubt I will buy any music this year beyond Tidal. I did buy the Jerry Garcia Band Vol. 8 download. It was from Milwaukee 1991 - I was there, of course ;-)

Elk said

This is a fascinating aspect of a lot of tech companies, particularly those that are Internet-based. Many make no profit, or very little, in light of their market penetration and revenues. The irony here is that streaming outsells CDs, but CD sales actually make money; streaming does not.

Here's a “path of least resistance to profitability” thought experiment to stimulate some conversation.

Back in the day there was a way to “stream” high-quality (for the day and age, and bandwidth-limited to 30 Hz-15 kHz) stereo analog for free, called FM radio. Advertising paid the bills. In the internet age, ads still work as a way to “pay” for free content. Google (ahem, “Alphabet”) seems to do OK with this model.

So, for consideration: would those who care about streamed MQA put up with commercials coming along with their streamed content?

Or, alternatively, how much would they pay to have the ads removed (as is the case with many “free” apps)?

In the long run, someone has to pay or streaming goes away as a stand-alone business model. If that comes to pass, then I suspect the streaming content provider choices will narrow to Apple, Amazon, and Google who can afford to subsidize the costs. And would they consider supporting lossless streaming?