> On 04/11/2023 12:38 PM CDT Nigel Redmon <earle...@earlevel.com> wrote:
> 
> 
> FWIW, I set a “global” value (an instance variable in my plugin object) to 
> something like 1e-15 (-300 dBFS) or 1e-18. For most samples it’s simply 
> quantized out, and only changes zero samples.
> 
> I negate it periodically—every time my plugin process is called with a new 
> set of buffers is convenient. Toggling the sign keeps highpass filters from 
> removing the offset.
> 
> Add it to the input sample for further processing.
> 
> For typical processing that’s it. Toggling gets it through DC blockers; 
> lowpass filters don’t decay to denormals. But if you’re doing something 
> unusual, you can inject the value again at any point as needed.
> 
> If you skip toggling, you’ll have to be more aware of things like highpass 
> filters, and likely need to inject in more places. Testing a plugin is pretty 
> easy if you have a DAW that has processing meters for plugins. Run it with 
> audio, for a baseline, then stop the transport. If you’ve got a denormals 
> problem, the meter for the plugin will rise.
> 
> For the excessively paranoid (“but…how can I be certain it’s not 
> happening?”), it’s not that you need to guard against a denormal ever 
> happening, but against decaying into a long string of consecutive denormals 
> at the sample rate.
> 
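
(For anyone who wants it spelled out, here is a minimal sketch of the trick 
Nigel describes, assuming a typical per-block float processing callback.  The 
struct layout and the name process_block() are made up for illustration; only 
the add-and-toggle idea is his.)

#include <stddef.h>

typedef struct {
    float anti_denorm;   /* e.g. 1e-15f or 1e-18f, sign flipped every block */
    float lp_state;      /* recursive state that would otherwise decay into
                            denormals once the input goes silent */
} Plugin;

static void process_block(Plugin *p, const float *in, float *out, size_t n)
{
    p->anti_denorm = -p->anti_denorm;       /* toggle once per block */

    for (size_t i = 0; i < n; i++) {
        float x = in[i] + p->anti_denorm;   /* inject at the input */

        /* example recursive stage: a one-pole lowpass whose tail would
           otherwise decay toward denormals during silence */
        p->lp_state += 0.01f * (x - p->lp_state);

        out[i] = p->lp_state;
    }
}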

So how big are the processing blocks?  Like 32 or 64 samples?  So, if *one* 
denorm happens during that block and there's an exception or something goofy, 
the cost of that exception is not too bad, amortized over those 32 samples?

What you're doing, Nigel, makes a lotta sense (seems to me that it might be 
better to toggle the sign *every* sample and put it up at the Nyquist 
frequency, but then maybe some simple LPFs, with a zero at z=-1, will kill 
that).
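
(Quick check on that worry: take the two-point average y[n] = (x[n] + x[n-1])/2 
as the "simple LPF".  Its transfer function is H(z) = (1 + z^-1)/2, which is 
exactly zero at z = -1, i.e. at Nyquist.  A per-sample toggle is 
x[n] = (-1)^n * eps, so

    y[n] = ( (-1)^n * eps + (-1)^(n-1) * eps )/2  =  0

and any filter with a null at Nyquist does indeed eat the offset right where 
you put it.)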

But Steffan got my sentiment right.  We just shouldn't have to think about it.  
If we do our math correctly, we can make sure our algorithm never divides by 
zero nor takes the square root of a negative value (assuming it's not complex 
arithmetic), and we should *never* have to deal with NaNs nor INFs.  But normal 
processing might land in denorm territory and we *should* *not* have to be 
worrying about that.
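
(To put a number on "normal processing might land in denorm territory", here 
is a throwaway check in C, just counting how many samples an ordinary decaying 
one-pole tail takes to go subnormal in single precision.  The coefficient is 
arbitrary; nothing here is anybody's real code.)

#include <math.h>
#include <stdio.h>

int main(void)
{
    float y = 1.0f;    /* a filter/reverb tail decaying after the input stops */
    long  n = 0;

    while (fpclassify(y) != FP_SUBNORMAL && y != 0.0f) {
        y *= 0.99f;    /* about -0.087 dB per sample */
        n++;
    }

    /* with IEEE-754 singles this goes subnormal after roughly 8700 samples,
       i.e. about 0.2 seconds of silence at 44.1 kHz */
    printf("subnormal after %ld samples (y = %g)\n", n, y);
    return 0;
}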

If it's a 64-bit processor and we're just doing math (not worrying about or 
using SIMD), we should not have to be thinking about quantization or 
overflow/saturation or scaling until we are computing a fixed-point output word 
that is going to a DAC or getting written to a fixed-point wav file or getting 
sent to a codec (that's expecting fixed point samples).  Otherwise we just 
shouldn't have to think about it.  We should just do the math like the textbook 
equations say.
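
(And that last conversion is the one place the quantization work belongs.  A 
bare-bones sketch, assuming doubles in [-1, 1) headed for a 16-bit wav or DAC; 
dither is left out to keep it short, and sample_to_pcm16() is just a name I 
made up.)

#include <stdint.h>
#include <math.h>

/* Scale, saturate and round one double-precision sample to a 16-bit output
   word.  Everything upstream just does the textbook math in doubles. */
static int16_t sample_to_pcm16(double x)
{
    double scaled = x * 32768.0;

    if (scaled >  32767.0) scaled =  32767.0;   /* saturate, don't wrap */
    if (scaled < -32768.0) scaled = -32768.0;

    return (int16_t)lrint(scaled);              /* round to nearest */
}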

So the fact that, to be responsible DSPers, we have to tickle the samples to 
keep them away from denormals just makes me grumpy.  And I am still skeptical 
about Sampo's defense of the hardware decisions not to simply fix this.  It 
really *isn't* that much logic to fix denorms going in and out of the ALU.  It 
shouldn't be a problem.  I just cannot fathom why, in 2023, denorms cannot be 
dealt with accurately and routinely, with no MIPS penalty, in 
a modern general-purpose FPU.
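
(For what it's worth, the closest thing the hardware hands us today is the 
FTZ/DAZ bits in the SSE control register, which sidestep the speed penalty by 
flushing denormals to zero, i.e. by giving up on doing it accurately.  Setting 
them per audio thread on x86 looks roughly like this; it's the standard 
workaround, not the fix I'm asking for.)

#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

/* Call once at the start of the audio thread.  FTZ flushes denormal results
   to zero, DAZ treats denormal inputs as zero.  Both change the arithmetic,
   which is exactly the complaint above, but they do remove the cycle cost. */
static void disable_denormals(void)
{
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}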

--

r b-j . _ . _ . _ . _ r...@audioimagination.com

"Imagination is more important than knowledge."
