I think it is certain that this problem is caused by noise
concentrated on pure tones.
The noise on a single MDCT coefficient increases as its
amplitude increases. This is because the values that are
quantized are actually the coefficient values raised to the
power 3/4, so the reconstruction levels become coarser at
larger amplitudes.
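To illustrate, here is a minimal sketch (not LAME code; "step" stands
in for the step size that a real encoder derives from the global gain)
showing that the spacing between reconstruction levels of the
3/4-power quantizer, and therefore the worst-case quantization error,
grows with the coefficient amplitude:

#include <math.h>
#include <stdio.h>

/* MP3-style nonuniform quantization: |xr|^(3/4) is rounded */
static double quantize(double xr, double step)
{
    return floor(pow(fabs(xr) / step, 0.75) + 0.5);
}

/* inverse: ix^(4/3), scaled back by the step size */
static double dequantize(double ix, double step)
{
    return pow(ix, 4.0 / 3.0) * step;
}

int main(void)
{
    double step = 1.0;
    double amps[] = { 10.0, 100.0, 1000.0, 10000.0 };
    int i;

    for (i = 0; i < 4; i++) {
        double ix = quantize(amps[i], step);
        /* spacing between adjacent reconstruction levels = the
         * worst-case error bound; it grows with the amplitude */
        double spacing = dequantize(ix + 1.0, step) - dequantize(ix, step);
        printf("amplitude %8.1f -> level spacing %g\n", amps[i], spacing);
    }
    return 0;
}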

According to psychoacoustic theory, it is puzzling why using
the average noise rather than MAXNOISE succeeds.
Perhaps this is the answer to that question:
the reason the vbrtest problem is not apparent in most cases
is that if the calculated noise of an MDCT coefficient exceeds
the masking threshold of its SFB, then the amplitude of that
coefficient must be large, and so it produces more masking
itself. Thus the real masking at its frequency is larger than
the calculated masking of its SFB.
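A small example of the difference between the two measures
(hypothetical helper functions, not LAME's actual noise calculation):
with one loud tonal coefficient in an otherwise quiet band, the
average noise stays below a per-band threshold while the maximum
does not.

#include <stdio.h>

/* average noise energy over the band */
static double avg_noise(const double *err, int n)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; i++)
        sum += err[i];
    return sum / n;
}

/* MAXNOISE-style measure: the single worst coefficient in the band */
static double max_noise(const double *err, int n)
{
    double m = err[0];
    int i;
    for (i = 1; i < n; i++)
        if (err[i] > m)
            m = err[i];
    return m;
}

int main(void)
{
    /* one loud tonal coefficient in an otherwise quiet band */
    double err[8] = { 0.01, 0.01, 0.01, 9.0, 0.01, 0.01, 0.01, 0.01 };
    double thr = 2.0;  /* hypothetical per-band masking threshold */

    printf("average noise = %g (passes the band threshold %g)\n",
           avg_noise(err, 8), thr);
    printf("max noise     = %g (fails the band threshold %g)\n",
           max_noise(err, 8), thr);
    return 0;
}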

Perhaps the clearest solution is to calculate the masking of
all MDCT coefficients and use MAXNOISE, but this is too
expensive. A more reasonable solution is perhaps to detect
peaks and calculate their maskings separately.
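For example, something along these lines could mark the tonal
coefficients so that a separate, tighter threshold could be applied to
them (purely hypothetical code, including the PEAK_RATIO factor; it is
a sketch of the idea, not a tested proposal):

#include <stdio.h>

#define PEAK_RATIO 8.0   /* hypothetical "how much louder than the
                            neighbourhood" factor */

/* flag coefficient i if its energy stands far above the average
 * energy of a few neighbouring coefficients */
static int is_peak(const double *xr, int n, int i)
{
    double local = 0.0;
    int j, count = 0;

    for (j = i - 3; j <= i + 3; j++) {
        if (j < 0 || j >= n || j == i)
            continue;
        local += xr[j] * xr[j];
        count++;
    }
    local /= count;
    return xr[i] * xr[i] > PEAK_RATIO * local;
}

int main(void)
{
    double xr[16] = { 0.1, 0.2, 0.1, 0.1, 5.0, 0.2, 0.1, 0.1,
                      0.1, 0.1, 0.2, 0.1, 0.1, 0.2, 0.1, 0.1 };
    int i;

    for (i = 0; i < 16; i++)
        if (is_peak(xr, 16, i))
            printf("peak at coefficient %d\n", i);
    return 0;
}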

--
Naoki Shibata   e-mail: [EMAIL PROTECTED]

--
MP3 ENCODER mailing list ( http://geek.rcc.se/mp3encoder/ )
