FK> Frank Klemm <[EMAIL PROTECTED]> and
RL> Rob Leslie <[EMAIL PROTECTED]> wrote:

RL> I've been doing some tests on the accuracy of decoder implementations.

FK> A full quality-optimized decoder will always fail the test.
FK> The "round to the nearest integer" is the worst thing you can do.

FK> If the output is an AIFF floating-point PCM file, there is no rounding to
FK> the nearest integer (in the first approach). Otherwise you have to
FK> quantize the result of the MP3 decoding process. The most commonly used
FK> (and worst) operator for that is the "round-to-nearest-integer"
FK> quantization operator. It generates the best technical SNR and the worst
FK> audible SNR.

RL> So, if I understand you correctly, you claim a quality-optimized decoder
RL> which does something other than round-to-the-nearest-integer as the final
RL> decoding step will sound better than one that does, but consequently also
RL> won't pass the ISO/IEC 11172-4 compliance tests for computational
RL> accuracy?

RL> Can you give an example of a decoder which does this?

FK> Maybe mpg123 in October. I am trying to convince Michael Hipp to use my
FK> ultra-fast round-to-nearest-integer routines. If he uses these routines,
FK> it becomes very easy to try other quantization routines, simply by
FK> replacing two routines and recompiling. Currently the quantization is
FK> spread throughout the whole program.
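(Just so we are picturing the same thing, here is roughly what I take "two
interchangeable quantization routines" to mean. This is only a sketch of my
own, with made-up names, not code from mpg123:)

  #include <stdlib.h>

  /* plain round-to-nearest-integer quantization to 16 bits;
     the input sample is assumed to lie in [-1.0, +1.0) */
  static short quantize_round(double sample)
  {
      long v = (long) (sample * 32768.0 + (sample >= 0 ? 0.5 : -0.5));

      if (v >  32767) v =  32767;   /* clip */
      if (v < -32768) v = -32768;

      return (short) v;
  }

  /* a possible alternative: the same quantizer with TPDF dither added
     before rounding, trading a slightly worse numerical SNR for
     decorrelated, less audible quantization error */
  static short quantize_dither(double sample)
  {
      double dither = ((double) rand() / RAND_MAX)
                    - ((double) rand() / RAND_MAX);  /* +/- 1 LSB, triangular */
      long v = (long) (sample * 32768.0 + dither
                       + (sample >= 0 ? 0.5 : -0.5));

      if (v >  32767) v =  32767;
      if (v < -32768) v = -32768;

      return (short) v;
  }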

I guess I don't get your point. I understand your concern about quantization
noise in the last step of the decoding process, but this seems unavoidable, no
matter how you go about it.

The amount of quantization noise is directly related to the precision of the
output, since, as you noted, floating-point output incurs no rounding. But
even floating-point output has only finite precision, so some quantization
has already taken place.
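(Concretely, with full scale taken as +/- 1.0, the best case error bound is
just half of one output step. A quick sketch of my own, nothing more:)

  #include <stdio.h>

  /* worst-case round-to-nearest error for b-bit output,
     with full scale normalized to +/- 1.0 */
  static double worst_case_error(int bits)
  {
      /* one step is 2 / 2^bits; the worst rounding error is half of that */
      return 1.0 / (double) (1L << bits);
  }

  int main(void)
  {
      printf("16-bit: +/- %.3e\n", worst_case_error(16));  /* 1.526e-05 */
      printf("24-bit: +/- %.3e\n", worst_case_error(24));  /* 5.960e-08 */
      return 0;
  }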

My purpose was to measure the computational accuracy of audio decoders in the
manner described by ISO/IEC 11172-4. To do so requires obtaining the PCM
output from the decoder, in whatever precision. You'll notice I was only able
to obtain 16-bit precision from most of the decoders, so for these decoders
the quantization noise is presumably +/- 1.526e-5 at best (half of one
16-bit step relative to full scale), and indeed this shows in the results.
The compliance tests, however, allow the difference to be as large as
+/- 6.104e-5 while still considering the decoder compliant.
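(The measurement itself is straightforward. What follows is only my own
sketch of the kind of comparison involved; the exact pass/fail criteria are
those of ISO/IEC 11172-4, not of this sketch:)

  #include <math.h>
  #include <stddef.h>

  /* Given decoder output and reference output, both normalized to full
     scale = 1.0 and at the same sampling frequency, compute the RMS and
     maximum absolute difference.  The +/- 6.104e-5 figure above is 2^-14
     of full scale. */
  static void compare(const double *decoded, const double *reference,
                      size_t n, double *rms, double *max_abs)
  {
      double sum = 0.0, peak = 0.0;
      size_t i;

      for (i = 0; i < n; ++i) {
          double diff = decoded[i] - reference[i];

          sum += diff * diff;
          if (fabs(diff) > peak)
              peak = fabs(diff);
      }

      *rms     = sqrt(sum / n);
      *max_abs = peak;
  }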

I think it's clear that output with more precision can reduce the resulting
quantization noise, but that doesn't negate the test results for decoders with
less precision, nor should the quantization strategy have any bearing on the
tests. The point is to measure the difference between the decoder's output
signal and the reference output signal, both of which have the same sampling
frequency. If you want to optimize the output quality by resampling to a
higher frequency based on internally higher precision samples, fine, but this
doesn't change the fundamental computational accuracy of the decoder as
measured by the compliance test.

Do you disagree, or do you consider the compliance test results invalid for
that reason?

FYI, the strategy I took with MAD was to provide full (28-bit) internal
precision output from the decoder library API and let the application decide
how to scale or resample it to however many bits of precision it wants. So
your idea of replacing the round-to-nearest-integer routine with something
else would be simple with MAD.
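(For example, an application using MAD might reduce the 28-bit fraction to
16-bit PCM with something along these lines; swapping in a dithered or
noise-shaped quantizer would mean replacing just this one routine:)

  #include "mad.h"   /* mad_fixed_t, MAD_F_FRACBITS (28), MAD_F_ONE */

  /* round to nearest, clip to [-1.0, +1.0), then truncate to 16 bits */
  static signed int scale_to_16(mad_fixed_t sample)
  {
      /* round */
      sample += (1L << (MAD_F_FRACBITS - 16));

      /* clip */
      if (sample >= MAD_F_ONE)
          sample = MAD_F_ONE - 1;
      else if (sample < -MAD_F_ONE)
          sample = -MAD_F_ONE;

      /* quantize */
      return sample >> (MAD_F_FRACBITS + 1 - 16);
  }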

Cheers.

--
Rob Leslie
[EMAIL PROTECTED]
--
MP3 ENCODER mailing list ( http://geek.rcc.se/mp3encoder/ )
