> On 10. Apr 2023, at 12:25, Julien Brulé <julien.br...@obspm.fr> wrote:
> 
> very instructive interesting debate
> is it about reproducible legacy code ?

More or less. In the past (let’s say the computer medieval age), you needed a 
strategy for filters running in floating point to avoid running into denormals 
through ordinary underflow. This could be accomplished by adding a very small DC 
or noise component to the input, or by injecting it somewhere in the filter 
structure. This keeps the numerical range above the underflow condition. With 
fixed-point arithmetic, you’ll enjoy limit cycles instead, which can create some 
funky noises in reverb tanks. 
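A minimal sketch of the DC-injection trick, using a hypothetical one-pole lowpass (the constant `ANTI_DENORMAL` and the filter itself are illustrative, not anyone's actual code):

```c
#include <stdio.h>

/* Tiny DC offset, far below audibility but large enough that the
 * feedback state settles well above the float denormal range
 * (~1.2e-38 for normals). Any later DC-blocking stage removes it. */
#define ANTI_DENORMAL 1e-18f

typedef struct { float z1; } OnePole;

/* One-pole lowpass: y[n] = (1-a)*x[n] + a*y[n-1], with the offset
 * injected at the input so the state never decays into denormals. */
static float onepole_process(OnePole *f, float in, float a)
{
    f->z1 = (in + ANTI_DENORMAL) * (1.0f - a) + f->z1 * a;
    return f->z1;
}
```

Feeding an impulse followed by silence, the state decays toward the steady-state floor `ANTI_DENORMAL * (1-a) / (1-a) = ANTI_DENORMAL` instead of underflowing.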

I’ll try to sum up the state of the discussion:

- RBJ's POV is that today’s CPUs should be able to switch seamlessly into 
denormal processing. As it seems (thank you for mentioning the DDI01001.pdf), 
that’s not the case. 

- Strategies like setting FPU flags to flush to zero or suppressing the 
underflow exception (FTZ / UFE) don’t seem to cut it, since they might be 
time-consuming or might affect the processing of your dear neighbors in the DAW 
session, or the host itself.
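For reference, this is how the flush-to-zero / denormals-are-zero modes are set on x86 via the SSE control register (real intrinsics, but a sketch: MXCSR is per-thread, which is exactly why a plugin flipping it can leak into the host or neighboring plugins on the same thread):

```c
#include <xmmintrin.h>  /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>  /* _MM_SET_DENORMALS_ZERO_MODE */

/* Enable FTZ (denormal results flushed to 0) and DAZ (denormal
 * inputs treated as 0) for the calling thread. */
void enable_ftz_daz(void)
{
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}
```

After this, a multiply whose mathematical result lands in the denormal range (e.g. 1e-38f * 1e-3f) simply returns 0.0f, at the cost of changing behavior for everything else on that thread.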

- Stefan Stenzel proposed gaining up the input by some hundred dB and gaining 
down the output accordingly, to push the likelihood of an underflow out of 
range, which leads to an interesting compander scheme. 
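A sketch of that compander idea as I understand it (the inner `some_filter` is a hypothetical stand-in for the real processing; the gain figure is illustrative). Using a power of two for the gain keeps the scaling bit-exact, since it only shifts the float exponent:

```c
/* Stand-in for the real guts, e.g. a reverb tank: one-pole lowpass
 * with internal state that decays toward zero on silence. */
static float some_filter(float x)
{
    static float z1 = 0.0f;
    z1 = 0.01f * x + 0.99f * z1;
    return z1;
}

/* 2^64 is roughly +385 dB of headroom shift; internal decays hit
 * zero-signal territory long before reaching the denormal range. */
#define PRE_GAIN (0x1p64f)

float process_companded(float in)
{
    float y = some_filter(in * PRE_GAIN);  /* gain up into the filter */
    return y * 0x1p-64f;                   /* exact inverse scaling */
}
```

Because both scalings are exact, the compander is transparent for the signal itself; only the absolute position of the internal state in the float range changes.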

- Another strategy is to switch back to fixed-point arithmetic. 

> maybe i m too funky ...
> DDI01001.pdf

OK, here it is: underflow is still an exception. 


> ps: how you deal with the discovery of the None or Nan ?

NaNs are very rare, since numerically they are mostly produced by dividing 0. 
by 0.. So, avoiding a division by zero, or replacing a division by a 
multiplication with the precomputed inverse in case of a constant divisor, 
resolves that. For denormals, we’re still stuck in the medieval age, see above. 
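Both tricks in a minimal sketch (the epsilon floor and the sample-rate example are my own illustrative choices, not anything from the thread):

```c
#include <math.h>

/* 1) Division by a run-time value: floor the divisor away from zero
 *    so 0./0. can never happen. */
float safe_div(float num, float den)
{
    const float eps = 1e-20f;  /* illustrative floor */
    if (fabsf(den) < eps)
        den = (den < 0.0f) ? -eps : eps;
    return num / den;
}

/* 2) Division by a constant: precompute the reciprocal once and
 *    multiply, so no division appears in the signal path at all. */
#define SAMPLE_RATE 48000.0f
#define INV_SR      (1.0f / SAMPLE_RATE)  /* folded at compile time */

float samples_to_seconds(float n)
{
    return n * INV_SR;
}
```

The multiply-by-inverse form is also typically faster than a divide on most FPUs, which is why it is worth doing even when NaNs are not a concern.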

Best, 

Steffan 


