Just a thought:

The way I understand it, the compression relies heavily on the
quantization. The quantizer step size is selected against two
constraints:

- The compressed bitstream with the quantized values should not be more
  than the selected bitrate allows.
- The maximum of the quantized values can't be over (8191 + 14).

 The evaluation is done in this order.

 I can't see WHY the latter has to be done by iteration.

 Based on the original in quantize(), simplified:
...
 step = pow(2.0, quantizerStepSize * 0.25);
 dest[counter] = nint(pow(fabs(source[counter]) / step, 0.75) - 0.0946);

 Why iterate over the buffer with different quantizerStepSize -> step
values? Just compute the maximum value of the source and use the
inverse function to calculate quantizerStepSize directly.

 The bitstream size check is done with a binary search: a function takes
TOP / BOTTOM values and halves the range after calculating the
bitstream. A better way to implement this is to start from an estimated
value (based on quant_init, or better still, the result from the
previous frame) and keep adding a step in the given direction; when the
sign changes, halve the step. The current method takes around 9
iterations of Huffman compression; using the previous value as the
start and 8.0 as the first step, the iterations drop to 4 - 5. (I tried
this one, at least it works ;-)
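A sketch of that search, under my own assumptions: count_bits() stands in for the real Huffman bit-count pass (bit count decreasing as quantizerStepSize grows), and search_step_size / mock_bits are names I made up for illustration:

```c
#include <assert.h>

/* Step from an estimate in one direction; when the comparison against
   the bit budget changes sign, reverse direction and halve the step. */
static int search_step_size(int start, double firstStep, int maxBits,
                            int (*count_bits)(int qss))
{
    double step = firstStep;
    int qss = start;
    /* larger step size -> coarser quantization -> fewer bits */
    int dir = (count_bits(qss) > maxBits) ? 1 : -1;

    while (step >= 1.0) {
        qss += (int)(dir * step);
        int over = (count_bits(qss) > maxBits);
        if ((over && dir < 0) || (!over && dir > 0)) {
            dir = -dir;      /* sign changed: reverse and halve */
            step *= 0.5;
        }
    }
    /* land on the feasible side of the budget */
    while (count_bits(qss) > maxBits)
        qss++;
    return qss;
}

/* Toy monotone bit-count model, for trying the search out. */
static int mock_bits(int qss) { return 20000 - 100 * qss; }
```

With mock_bits, a 12000-bit budget, start 70 and first step 8.0, the search converges on step size 80 (the smallest value fitting the budget) in a handful of bit-count calls, and a good starting estimate shrinks the walk further.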

 This idea could also be used for better quality: quantizerStepSize is a
double value, but with the current implementation the iterations are
done in integer steps..

 ..Or have I misunderstood something critical?

- Juha Laukala
