FLAC file 1411kbps vs 870kbps

I don’t understand the bit rate associated with FLAC files. I have downloaded audio files in FLAC format with big variations in bit rate: some were very high (6000-9000kbps), some were 1411kbps, and some were low (e.g., 580kbps or 870kbps). What do they mean regarding sound quality? Since they are all in FLAC format, can I assume that they are all lossless from the original? Thanks for the advice.

The bit rate of the same audio signal will vary depending on the level of compression used in the FLAC file. 1411kbps is an uncompressed 16-bit/44.1kHz stereo signal, i.e., zero compression. As the compression level of the file increases, the bits per second played (as calculated from the FLAC file size) decrease. It’s somewhat of a silly measure to use for FLACs, in my opinion.

The very high rates you saw (6000-9000kbps) are almost certainly high-resolution files, e.g. 24-bit/96kHz or 24-bit/192kHz, which start from a much larger uncompressed rate than CD audio.
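Where the 1411kbps figure comes from is just arithmetic: sample rate times bit depth times channel count. A quick sketch:

```python
# Raw PCM bit rate of CD audio = sample rate x bit depth x channels.
sample_rate = 44_100   # samples per second
bit_depth = 16         # bits per sample
channels = 2           # stereo

bitrate_kbps = sample_rate * bit_depth * channels / 1000
print(bitrate_kbps)  # 1411.2 -> the familiar "1411kbps"
```

Plug in 24-bit/192kHz and you get 9216kbps, which is why hi-res files show such large numbers before compression.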

While there should be no difference in the uncompressed data (this is lossless, after all), the processor in the player must work harder to play FLACs encoded at higher compression levels. This may cause degradation of sound quality in some instances and is why some people choose to use 0 compression or simply store the data in WAV format.


Another thing that confuses me is that some FLAC albums have all the songs at the same bit rate, e.g., 1411kbps, while others have every song in the same album at a different bit rate. How does that come about?

I’m sure @tedsmith or others can chime in with a much more thorough explanation, but I’ll take a shot.

1411kbps is uncompressed 44.1/16 audio (aka CD quality). So basically it’s encoded losslessly and uncompressed as FLAC.

When the kbps rate is lower, it’s the same lossless audio but it is compressed, as @Peanut_Butter mentioned.

Maybe a different way to think about things… When the audio file is uncompressed, FLAC says, "I know that, no matter what, a 1411kbps bucket can fit all the data I need for CD-quality audio." The bucket might be completely full, or it might have a lot of empty space, depending on the track. Since FLAC isn’t compressing anything, it doesn’t care about the empty space.

Now when FLAC is compressing the audio file, it takes the audio and fits it into the exact right size bucket. So if one song can fit into a 870 kbps bucket after your desired level of compression, that’s how FLAC will encode it. If another song off the same album fits into a 540 kbps bucket, that’s what FLAC will use.

I’m sure this isn’t a perfect analogy, though hopefully it helps you understand a bit better.


The variation comes from the way compression works. At its simplest, compression stores the relative change between values in a stream of data rather than storing the actual data values themselves.

For instance, let’s say you have a data stream that looks like this:
50, 70, 40, 100, 90.
A compression routine might store the first value, 50, and then store this: 20, -30, 60, -10. This isn’t a lot of savings in storage space, but it is smaller.

If the data stream had less variation, like 50, 51, 51, 50, 52, then the compressed stream would be 50, 1, 0, -1, 2. That’s a lot less to store.
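The delta idea above can be sketched in a few lines of Python (a toy illustration of the principle, not the actual FLAC algorithm, which uses linear prediction plus entropy coding):

```python
def delta_encode(samples):
    # Store the first value, then only the change from each value to the next.
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    # Rebuild the original stream by accumulating the changes.
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

print(delta_encode([50, 70, 40, 100, 90]))  # [50, 20, -30, 60, -10]
print(delta_encode([50, 51, 51, 50, 52]))   # [50, 1, 0, -1, 2]
```

Decoding recovers the original values exactly, which is what makes the scheme lossless: the smaller numbers simply need fewer bits to write down.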

FLAC is a more complex algorithm, but works much the same in principle. If you encode a very dynamic two minute passage of music, the compressed file will be substantially larger than a two minute recording of very subdued music, even if you use the same compression level when encoding. This accounts for the different bit rates you’re seeing.
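And that compressed file size is where the per-track kbps number your player displays comes from: file size divided by track length. A quick sketch (the file size and duration below are made-up numbers for illustration):

```python
def bitrate_kbps(file_size_bytes, duration_seconds):
    # bits in the file divided by playing time, in thousands of bits per second
    return file_size_bytes * 8 / duration_seconds / 1000

# e.g. a 120-second track whose FLAC file compressed down to ~13 MB
print(round(bitrate_kbps(13_050_000, 120)))  # 870
```

Two lossless tracks from the same album can therefore report very different bit rates simply because one compressed better than the other.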