strtr wrote:
Walter Bright Wrote:

strtr wrote:
abcd Wrote:

On the other hand, being an engineer, I use the reals all the
time and want them to stay. I would use the max precision
supported by the CPU rather than fixed precision like double any day.

-sk
For me it's the exact opposite: reproducibility/portability is
key. My problem with real is that I am always afraid my floats
get upgraded to it internally somewhere/somehow.
With numerical work, I suggest getting the correct answer is
preferable <g>. Having lots of bits makes it more likely you'll get
the right answer. Yes, it is possible to get correct answers with
low precision, but it requires an expert and the techniques are
pretty advanced.
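
To make the "advanced techniques" concrete: compensated (Kahan) summation keeps a running correction term so a plain float accumulator loses far fewer low-order bits. A minimal sketch in D; the function name kahanSum and the 0.1f test data are only illustrative.

import std.stdio;

// Compensated (Kahan) summation: recover the low-order bits that each
// addition would otherwise discard, and feed them back in.
float kahanSum(const float[] xs)
{
    float sum = 0.0f;
    float c = 0.0f;              // running compensation
    foreach (x; xs)
    {
        float y = x - c;         // correct the next addend
        float t = sum + y;
        c = (t - sum) - y;       // (t - sum) is what actually got added; c is the error to subtract next time
        sum = t;
    }
    return sum;
}

void main()
{
    auto xs = new float[](10_000_000);
    xs[] = 0.1f;

    float naive = 0.0f;
    foreach (x; xs)
        naive += x;

    writefln("naive float sum: %.3f", naive);
    writefln("kahan float sum: %.3f", kahanSum(xs));
    writefln("exact          : %.3f", 0.1 * xs.length);
}

Ironically, on x86 a compiler may carry the float intermediates in a higher-precision register anyway, which is exactly the behaviour being debated here.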

The funny thing is that getting the exact correct answer is not that
big of a deal. I would trade a few bits of precision for portability
over x86.


In my experience doing numerical work, loss of a "few bits" of precision can have order-of-magnitude effects on the result. The problem is the accumulation of roundoff errors. Using more bits of precision is the easiest solution, and is often good enough.
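
To put rough numbers on that, here is a toy D comparison, assuming nothing beyond the three built-in floating types: summing 0.1 ten million times, where the exact answer is 1,000,000. The float total drifts far from the mark, while double and real stay close (the exact figures depend on the compiler and target, not least because a float accumulator may itself be kept in a higher-precision register between iterations, which is the very "upgrade" strtr mentions).

import std.stdio;

void main()
{
    enum n = 10_000_000;

    float  fsum = 0.0f;
    double dsum = 0.0;
    real   rsum = 0.0L;

    // Naive accumulation: every addition rounds, and the roundoff
    // compounds as the running sum grows.
    foreach (i; 0 .. n)
    {
        fsum += 0.1f;
        dsum += 0.1;
        rsum += 0.1L;
    }

    writefln("float : %.6f", fsum);
    writefln("double: %.6f", dsum);
    writefln("real  : %.6f", rsum);
    writefln("exact : %s", n / 10);
}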

In Java's early days, they went for portability of floating point over precision. Experience with this showed it to be a very wrong tradeoff, no matter how good it sounds. Having your program produce the crappiest, least accurate answer on the powerful fp machine you bought, just because there exists some hardware somewhere that does a crappy floating point job, is just not acceptable.

It'd be like buying a Ferrari and having it forcibly throttled back to VW bug performance.
