In the full command-line switch reference it says that encoding with -h is about 
20% slower than encoding with the default setting (I'm strictly talking about CBR 
encoding). When I try this with the command-line binary, that is indeed the case. 
But when I use lame.dll in the various rippers, the encoding time for 
normal and high quality is the same.
I therefore conclude that in these cases the quality setting has no effect. What do 
you think about that?
Some time ago Mark Taylor kindly answered this question, saying that the frontend 
should use the "LHV1" interface to handle HQ/LQ. But apart from that: if the 
encoding time is the same, then the quality is the same as well, isn't it?

Best regards,
Wim Speekenbrink

--
MP3 ENCODER mailing list ( http://geek.rcc.se/mp3encoder/ )