On 14 Feb 2008, [EMAIL PROTECTED] wrote:

>> [EMAIL PROTECTED] wrote:
>> Trialog:
>> Roman Leshchinskiy writes:
>>> Richard A. O'Keefe wrote:
>>>> [EMAIL PROTECTED] wrote:
>>>>> Would you say that *no* typical floating-point software is reliable?
>>>>
>>>> With lots of hedging and clutching of protective amulets around the
>>>> word "reliable", of course not. What I *am* saying is that
>>>> (a) it's exceptionally HARD to make reliable, because although the
>>>>     operations are well defined and arguably reasonable, they do NOT
>>>>     obey the laws that school and university mathematics teach us to
>>>>     expect them to obey.
>>>
>>> Ints do not obey those laws, either. It is not exceptionally hard to
>>> write reliable software using ints. You just have to check for
>>> exceptional conditions. That's also the case for floating point.
>>> That said, I suspect that 90% of programs that use float and double
>>> would be much better off using something else. The only reason to use
>>> floating point is performance.
>>
>> I have a bit different perspective...
>> First, when I see the advice "use something else", I always ask "what",
>> and I get an answer very, very rarely... Well? What do you propose?
>
> For Haskell, Rational seems like a good choice. The fact that the
> standard requires defaulting to Double is quite unfortunate and
> inconsistent, IMO; the default should be Rational. Float and Double
> shouldn't even be in scope without an explicit import. There really is
> no good reason to use them unless you are writing a binding to existing
> libraries or really need the performance.
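For concreteness, here is a small sketch (GHC Haskell, my own illustration rather than anything from the thread) of the trade-off in that suggestion: Rational arithmetic is exact where Double silently rounds.

```haskell
import Data.Ratio ((%))

main :: IO ()
main = do
  -- Adding 1/10 ten times is exact in Rational...
  let r = sum (replicate 10 (1 % 10)) :: Rational
  print (r == 1)                      -- True
  -- ...but not in Double, because 0.1 has no exact binary representation.
  let d = sum (replicate 10 0.1) :: Double
  print (d == 1.0)                    -- False
```

The price, of course, is that Rational denominators can grow without bound, and transcendental functions have no Rational results at all, which is where the reply below picks up.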
Until you need to evaluate a transcendental function. Floating point numbers are remarkably well-behaved in the following sense.

Fundamental Axiom of Floating Point Arithmetic (Trefethen & Bau, 1997):
For all x, y in F, there exists e with |e| <= e_machine such that

    x <*> y = (x * y)(1 + e)

where F is the set of real numbers representable in a particular floating point format, <*> is any hardware arithmetic operation, and * is the corresponding exact operation.

This axiom is satisfied by nearly all floating point implementations, and IEEE arithmetic satisfies somewhat stronger conditions. It extends easily to complex arithmetic, perhaps after scaling e_machine by a small factor. This single axiom is sufficient for all the stability (with regard to rounding error) results of numerical algorithms.

Double precision carries about 15 significant decimal digits. It is a very rare physical quantity that can be measured to 10 significant digits, and this is unlikely to change in the next 100 years. It is likewise a rare algorithm for which floating point arithmetic is a problem. Occasionally we must make decisions, such as choosing which way to project during Householder QR, so that rounding error is not a problem. Unfortunately, Gaussian elimination is an important algorithm (important only because it happens to be fast) which does suffer from rounding error. Since there are well conditioned matrices for which Gaussian elimination fails spectacularly despite pivoting, many people believe that rounding error is a major part of numerical analysis. This is absolutely not the case; it is less than 10% of the field.

Of course, if you are not trying to represent arbitrary real numbers, using floating point may be a mistake. Converting between integer or rational representations and floating point requires careful attention, but as long as you stay in floating point, it is almost never a problem. Remember that it is not possible to implement exact real arithmetic in a way that makes equality decidable.
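The axiom above can be checked empirically for a given pair of operands. A minimal sketch (my illustration; it assumes e_machine = 2^-53, the unit roundoff for IEEE double with round-to-nearest) uses exact Rational arithmetic to measure the relative error of one hardware addition:

```haskell
import Data.Ratio ()

main :: IO ()
main = do
  let x = 0.1 :: Double
      y = 0.2 :: Double
      -- exact sum of the two machine numbers, computed in Rational
      exact   = toRational x + toRational y
      -- the hardware result, lifted to Rational for exact comparison
      rounded = toRational (x + y)
      relErr  = abs (rounded - exact) / exact
      -- assumed e_machine for IEEE double: 2^-53
      epsMachine = toRational (2 ^^ (-53) :: Double)
  print (relErr <= epsMachine)   -- True: the axiom holds for this pair
```

One pair of operands proves nothing in general, but round-to-nearest guarantees the bound for every individual operation; that is exactly what the stability proofs build on.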
We should take this as a sign that equality for floating point arithmetic is dangerous to rely upon.

Floating point is extremely useful, and I think it would be a mistake to remove it from the Prelude. One thing I would like to see is the Enum instances removed.

Jed
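A short sketch (my illustration, not part of the original message) of why those Enum instances are surprising: enumFromThenTo for Double extends the range by half a step, so the same-looking range expression yields different lengths at Int and Double.

```haskell
main :: IO ()
main = do
  -- At Int, the range stops at the stated limit.
  print (length [1, 3 .. 10 :: Int])     -- 5 elements: 1,3,5,7,9
  -- At Double, elements are taken while <= 10 + (3-1)/2 = 11,
  -- so 11 sneaks in past the stated limit of 10.
  print (length [1, 3 .. 10 :: Double])  -- 6 elements: 1,3,5,7,9,11
```

With a fractional step the repeated additions also accumulate rounding error, compounding the surprise.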
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe