If you turn the dial on the time machine back far enough, the only option was 
fixed point. In that era, designers were forced to consider overflow and 
underflow. Instructions that saturate might take care of overflow, if you don't 
mind the signal getting destroyed. Underflow would lose the signal entirely, so 
gain structure needed to be carefully designed to avoid having the signal 
disappear right before it would get amplified. Denormals weren't an issue 
before floating point.
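
To illustrate, here is a rough sketch of what a saturating fixed-point add 
does - the Q15 format and clamp limits here are my assumption, not any 
particular DSP's instruction:

#include <stdint.h>

/* Saturating add for Q15 (16-bit signed) samples: clamp instead of wrapping.
   This is the behavior a hardware saturating-add instruction provides. */
static int16_t sat_add_q15(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;   /* widen so overflow is visible */
    if (sum >  32767) sum =  32767;          /* clamp positive overflow */
    if (sum < -32768) sum = -32768;          /* clamp negative overflow */
    return (int16_t)sum;
}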

Software could handle floating point by using instructions to align the 
mantissas of two numbers so that they could be added or subtracted. 
Multiplication and division simply combined the exponents on input and output, 
with any gained or lost significance essentially ignored. But software 
handling of floating point is slow, because a single operation requires 
multiple opcodes.
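
Here is a rough sketch of that alignment step, using a toy format of my own 
(unsigned mantissa plus exponent) rather than any historical library:

#include <stdint.h>

/* Toy format: value = mant * 2^exp.  Adding two values means shifting the
   smaller-exponent mantissa right until the exponents match, which is where
   low-order bits (and eventually the whole operand) fall away. */
typedef struct { uint32_t mant; int exp; } toyfloat;

static toyfloat toy_add(toyfloat a, toyfloat b)
{
    if (a.exp < b.exp) { toyfloat t = a; a = b; b = t; }  /* larger exp in a */
    int shift = a.exp - b.exp;
    uint32_t bm = (shift < 32) ? (b.mant >> shift) : 0;   /* align b's mantissa */
    toyfloat r = { a.mant + bm, a.exp };
    /* A real implementation would renormalize and round here. */
    return r;
}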

Eventually, hardware was designed to handle all the automatic shifting of 
mantissa bits, and the related adjustments of exponents. However, no matter how 
much you handle in hardware, there is still a limit to precision, even with 
floating point.
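
To make that limit concrete, here is a tiny single-precision example where an 
addition is silently lost:

#include <stdio.h>

int main(void)
{
    float x = 1.0f;
    float y = x + 1.0e-8f;   /* well below single-precision resolution near 1.0 */
    printf("%d\n", x == y);  /* prints 1: the addition changed nothing */
    return 0;
}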

Hybrid hardware plus software systems would handle as much of the floating 
point calculations in hardware as possible, and use interrupts to trigger 
software implementations to deal with abnormal conditions.
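
Even today, C exposes the sticky exception flags that such hardware raises, so 
software can at least detect the abnormal condition; here is a sketch of 
polling the flags (the actual trap/interrupt plumbing is platform-specific and 
not shown):

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON   /* may be needed for reliable flag access */

int main(void)
{
    feclearexcept(FE_ALL_EXCEPT);
    volatile float a = 1.0e-30f;
    volatile float r = a * 1.0e-10f;   /* result is ~1e-40, a denormal */
    if (fetestexcept(FE_UNDERFLOW))
        printf("underflow flagged, result = %g\n", r);
    return 0;
}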

TANSTAAFL

My understanding is that you have two choices: handle only normalized numbers 
and live within that particular limited range, or extend the handling to cover 
denormalized numbers, but suffer speed costs and still end up with a range 
that is eventually limited. Here's where the details escape me, but handling 
of denormals isn't free - it generally requires more steps. I'm not even aware 
of whether it's possible to handle denormals in a fixed time slot.

Whether interrupts are involved or not is separate from the consideration of 
whether handling of denormals takes more time.
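
I can't speak for current chips, but a crude timing sketch like this is how I 
would check whether denormal operands cost extra time on a given machine (the 
loop and constants are arbitrary test values, not a rigorous benchmark):

#include <stdio.h>
#include <time.h>

/* Same multiply-add loop run at a normal and at a denormal magnitude;
   compare the two wall-clock times on the machine at hand. */
static double run(float scale, int iters)
{
    volatile float acc = scale;
    clock_t t0 = clock();
    for (int i = 0; i < iters; i++)
        acc = acc * 0.5f + scale;   /* stays near 'scale' in magnitude */
    clock_t t1 = clock();
    (void)acc;
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    const int n = 50000000;
    printf("normal   range: %.3f s\n", run(1.0e-10f, n));
    printf("denormal range: %.3f s\n", run(1.0e-40f, n));
    return 0;
}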

Given the limitations of the mathematics, the floating point hardware offers 
options - or at least it did in a certain era. You have the tradeoff between 
time and precision. For audio processing, we reached the state of the art where 
real-time processing was possible in my lifetime - what little math I learned 
in school (convolution, FFT, etc) was never calculable in real time on 
processors when I was in college, but by the time I had a job and could afford 
digital audio gear it had become possible to perform convolution in real time. 
Once you implement real-time signal processing, that tradeoff between time and 
precision becomes an important decision.

Although generic CPUs are used for many applications, once you start processing 
audio in real time, you want to avoid missing the real-time deadlines. 
Designers of operating systems could not necessarily make these tradeoff 
decisions for their customers, because they didn't know how the customer was 
going to use the processor. Thus all of these settings.
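
On x86, for example, those settings include the FTZ and DAZ bits, and it is 
the application rather than the OS that gets to pick them. A minimal sketch, 
assuming SSE-era intrinsics are available:

#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

/* Trade precision for predictable speed on this thread: flush denormal
   results to zero (FTZ) and treat denormal inputs as zero (DAZ).
   Typically called once at the start of a real-time audio thread. */
static void enable_ftz_daz(void)
{
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}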

So, yeah, RBJ asks whether we can finally stop thinking about this and expect 
the tradeoff to be a thing of the past. In my estimation, the answer requires 
detailed analysis of how floating point operations are handled in hardware. 
Without reviewing the state of the art, all I can say is that infinite 
precision is not possible in real time. Whether denormal processing is possible 
in real time would require a few more details than I have at the moment.

Brian Willoughby


On Apr 11, 2023, at 3:27 AM, STEFFAN DIEDRICHSEN wrote:
> On 10. Apr 2023, at 12:25, Julien Brulé wrote:
>> very instructive interesting debate
>> is it about reproducible legacy code ?
> 
> More or less. In the past (let’s say computer medieval age), you needed a 
> strategy for filters running in floating point to avoid running into 
> denormals by an ordinary underflow. This could be accomplished by adding a 
> very small DC or noise component to the input or injecting it somewhere in 
> the filter structure. This keeps the numerical range above the underflow 
> condition. With fixed-point arithmetic, you’ll enjoy limit cycles, which can 
> create some funky noises in reverb tanks. 
> 
> I’ll try to sum up the state of the discussion:
> 
> - RBJ's POV is that today's CPUs should be able to seamlessly switch into 
> denormal processing. As it seems (thank you for mentioning the 
> DDI01001.pdf), that’s not the case. 
> 
> - Strategies like setting FPU flags to flush to zero or suppressing the 
> underflow exception (FTZ / UFE) don't seem to cut it, since they might be 
> time-consuming or might affect the processing of your dear neighbors in the 
> DAW session or the host itself.
> 
> - Stefan Stenzel proposed to gain up the input by some hundred dBs and gain 
> down the output accordingly to push out the likelihood of an underflow, which 
> leads to an interesting compander scheme. 
> 
> - Another strategy is to switch back to fixed-point arithmetic. 
> 
>> maybe i m too funky ...
>> DDI01001.pdf
> 
> OK, here it is, underflow is still an exception. 
> 
> 
>> ps: how you deal with the discovery of the None or Nan ?
> 
> NaNs are very rare, since they can be produced numerically by dividing 0.0 
> by 0.0. So, avoiding a division by zero, or replacing a division by a 
> multiplication with the inverse in the case of a constant, resolves that. 
> For denormals, we’re still stuck in the medieval age - see above. 
> 
> Best, 
> 
> Steffan
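
For what it's worth, the tiny-DC trick Steffan describes above might look 
roughly like this in a one-pole filter; the coefficient and offset are 
arbitrary illustrative values:

/* One-pole lowpass with a tiny DC offset added at the input, so the state
   settles around 1e-20 instead of decaying into the denormal range. */
static float lowpass_with_dc(float x, float *state)
{
    const float kAntiDenormal = 1.0e-20f;   /* roughly -400 dBFS, inaudible */
    *state += 0.01f * ((x + kAntiDenormal) - *state);
    return *state;
}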
