> I now have a distinct dislike of float values (it'll probably wear off over
> time), how can the sum of 100,000 numbers be anything other than the sum of
> those numbers. I know the reasoning, as highlighted by the couple of other
> e-mails we have had, but I feel the default should probably lean towards
> accuracy than speed. 2.0+2.0 = 4.0, and 2.0+2.0+... (100,000 times) =
> 200,000.0; it should not be that array.sum() != 200,000...

In that case, we should not use doubles but long doubles, or better
still, the real numbers themselves, which would make computations very,
very slow. NumPy already leans somewhat towards accuracy. If you want
more of it (even with doubles, you can hit the precision limit very
fast), use a wider type.
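
For instance, here is a minimal sketch (assuming a modern NumPy) of how
sum() lets you pick the accumulator type independently of the storage
type via its dtype argument:

```python
import numpy as np

# 100,000 copies of 0.1 stored as single-precision floats.
a = np.full(100000, 0.1, dtype=np.float32)

# Naive single-precision accumulation drifts noticeably:
# each addition to the large accumulator loses low-order bits.
naive = np.float32(0.0)
for x in a:
    naive += x

# sum() accepts a dtype argument: accumulate in double precision
# while keeping the array itself in float32.
wide = a.sum(dtype=np.float64)

print(naive)  # drifts away from 10000
print(wide)   # close to 100000 * float32(0.1), about 10000.000149
```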

You said:

> how can the sum of 100,000 numbers be anything other than the sum of
> those numbers

This will always be a problem. With doubles, try summing 1/n for
n = 1...100000; you'll be surprised. Then sum 1/n for n = 100000...1
with float values, and there the result should be better than when
using doubles. Numerical issues in scientific computing are tricky:
there is no single answer, it depends on your problem.
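
As a sketch of that experiment (using an explicit scalar loop so the
accumulation order is under our control, rather than NumPy's own sum()):

```python
import numpy as np

# Terms 1/n for n = 1..100000, rounded to single precision.
n = np.arange(1, 100001, dtype=np.float64)
terms = (1.0 / n).astype(np.float32)

# Forward order: largest terms first; later, tiny terms are added
# to a large accumulator and most of their bits are rounded away.
forward = np.float32(0.0)
for t in terms:
    forward += t

# Reverse order: smallest terms first, so partial sums grow
# gradually and less precision is lost at each step.
backward = np.float32(0.0)
for t in terms[::-1]:
    backward += t

reference = (1.0 / n).sum()  # double-precision reference

print(forward, backward, reference)
```

On a typical run, the reverse (smallest-first) single-precision sum
lands much closer to the double-precision reference than the forward
sum does: the result depends on the order of the operations, which is
exactly the kind of issue described above.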

Matthieu
-- 
French PhD student
Information System Engineer
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
