Paul D. Anderson wrote:
Georg Wrede wrote:

Well, if you're doing precise arithmetic, you need a different number of significant digits at different parts of a calculation.

Say you've got an integer of n digits (which obviously needs n digits of precision in the first place). Square it, and all of a sudden you need n+n digits to display it precisely, unless you truncate it to n digits. But then taking the square root would yield something other than the original.

To improve speed, one could envision having the calculations always use a suitable precision.

This problem of course /doesn't/ show up when doing integer arithmetic, even at "unlimited" precision, or with BCD, which is usually fixed point. But to do variable-precision floating point (mostly in software, since we're working wider than the hardware) the precision really needs to vary. That's also where "precision deserved by the input" comes from.
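
To make that concrete, here's a quick sketch using Python's decimal module (just the handiest way to show rounding at a chosen precision; none of this is the proposed D API). It squares an 8-digit integer, truncates the square back to 8 digits, and shows that the square root no longer round-trips, while keeping the full n+n digits does:

from decimal import Decimal, getcontext, ROUND_DOWN

ctx = getcontext()
ctx.prec = 8               # n = 8 significant digits
ctx.rounding = ROUND_DOWN  # truncate, as described above

x = Decimal("31622778")    # an 8-digit integer
sq = x * x                 # exactly 1000000088437284 (16 digits),
                           # truncated here to 1.0000000E+15
root = sq.sqrt()           # sqrt of the truncated square: 31622777
print(root == x)           # False: the last digit is gone

ctx.prec = 16              # give the square its full n+n digits
sq = x * x                 # now exact: 1000000088437284
print(sq.sqrt() == x)      # True: the round trip is exact

The value 31622778 is picked so its square sits just above 1e15, where the truncation error is large enough to shift the root by a whole digit; many 8-digit inputs happen to survive the round trip, which makes the failure easy to miss.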


There are different, sometimes conflicting, purposes and goals associated with arbitrary-precision arithmetic. I agree that balancing these is difficult. My intention is to provide an implementation that can be expanded or modified as needed to meet a user's particular goals. The default behavior may not be what every user needs.

As an aside, I've seen an implementation centered around carrying the error associated with every calculation. The intent was to make "scientific" calculations precise, but the result was that the precision degraded rapidly, to the point where the carried error exceeded the computed values.

I think it's generally better to relegate that task to interval arithmetic, rather than to BigFloat.
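
Roughly what I mean, sketched in Python for convenience (this Interval class is hypothetical, not any existing library): instead of one number plus a carried error term, you keep a [lo, hi] pair and round the bounds outward, so the true value is always inside and the width is an honest bound on the error.

from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

PREC = 8  # working precision for the sketch

def rounded(op, mode):
    # Evaluate op() at PREC digits with the given directed-rounding mode.
    with localcontext() as ctx:
        ctx.prec = PREC
        ctx.rounding = mode
        return op()

class Interval:
    # Toy interval [lo, hi]: bounds are rounded outward, so the true
    # value stays inside. Only + and * for non-negative bounds shown.
    def __init__(self, lo, hi=None):
        self.lo = Decimal(lo)
        self.hi = Decimal(hi if hi is not None else lo)

    def __add__(self, o):
        return Interval(rounded(lambda: self.lo + o.lo, ROUND_FLOOR),
                        rounded(lambda: self.hi + o.hi, ROUND_CEILING))

    def __mul__(self, o):  # assumes all bounds >= 0
        return Interval(rounded(lambda: self.lo * o.lo, ROUND_FLOOR),
                        rounded(lambda: self.hi * o.hi, ROUND_CEILING))

    def __repr__(self):
        return "[{0}, {1}]".format(self.lo, self.hi)

# 1/3 is not representable, so the endpoints round apart:
third = Interval(rounded(lambda: Decimal(1) / Decimal(3), ROUND_FLOOR),
                 rounded(lambda: Decimal(1) / Decimal(3), ROUND_CEILING))
print(third)                  # [0.33333333, 0.33333334]
print(third + third + third)  # [0.99999999, 1.0000001], which contains 1

The width still grows, but only as fast as the arithmetic actually warrants, and it never claims more certainty than the bounds can justify. BigFloat can then stay a plain value type.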

I'm hoping to have an alpha-level implementation available sometime next week.

Awesome!

Paul
