Old 07 July 2006, 12:13 pm   #4
Shy

As for samples to use, I don't suppose you'll get results any better than those of your 128kbps test if you again choose a set of samples that aren't very problematic.
Demanding samples reach bitrates well above 192kbps with any codec at standard and even low quality settings. Which brings us to the flaw in testing codecs for transparency by the average bitrate of the encoded audio segment, as opposed to the average bitrate of the entire musical piece that segment is extracted from.

Naturally, audio segments that are truly demanding, and which expose a codec's capabilities, produce a bitrate higher than the usually much lower bitrate of non-demanding audio at the same quality setting. An entire encoded song can average 180kbps while a demanding segment from it requires 280kbps.
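To make the arithmetic concrete, here's a toy sketch (the per-frame bitrates are made up for illustration, not real codec output) showing how a short demanding passage can sit far above the whole-song average:

```python
# Hypothetical per-frame VBR bitrates for a 100-frame song:
# mostly easy material, with 10 demanding frames in the middle.
frame_kbps = [160] * 70 + [280] * 10 + [160] * 20

whole_song_avg = sum(frame_kbps) / len(frame_kbps)
demanding_avg = sum(frame_kbps[70:80]) / 10  # just the demanding passage

print(f"whole song: {whole_song_avg:.0f} kbps")        # 172 kbps
print(f"demanding segment: {demanding_avg:.0f} kbps")  # 280 kbps
```

Judging the encode by the 280kbps segment alone would badly misrepresent what the quality setting actually costs over the full song.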

There are cases where one codec, at its standard quality setting, allocates a much higher bitrate to a problematic audio segment than another codec at a comparable setting, and thus achieves much better quality. Sometimes the codec with the lower bitrate achieves quality similar to the one with the higher bitrate, and sometimes a codec allocates a much lower bitrate than another and still achieves higher quality.

The issue is not the bitrate itself; it's how efficiently and "smartly" a codec allocates its bitrate across varying degrees of audio complexity. The bitrate which needs to be taken into account is the average bitrate of the entire musical piece, not of a problematic segment.
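That whole-piece figure is easy to compute from the encoded file: total bits over total playing time. A minimal sketch (the function name and the example numbers are mine, for illustration):

```python
def piece_average_kbps(encoded_bytes: int, duration_seconds: float) -> float:
    """Average bitrate of an entire encoded piece: total bits / total time.

    This is the number a fair efficiency comparison should use, rather
    than the local bitrate of a hand-picked problem segment.
    """
    return encoded_bytes * 8 / duration_seconds / 1000

# e.g. a 4,500,000-byte encode of a 200-second song:
print(f"{piece_average_kbps(4_500_000, 200):.0f} kbps")  # 180 kbps
```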

So to summarize, a test limited to a single average bitrate for the encoded audio segments can't expose how efficiently codecs reach transparency. Instead, all codecs would get similar ratings, considering that the audio being encoded isn't problematic enough to actually expose deficiencies, so personally I can't see the point in such a result.