Grant Edwards wrote:
> I've always found that check to be really annoying. Every time
> anybody asks about floating point handling, the standard response
> is that "Python just does whatever the underlying platform does".
> Except it doesn't in cases like this. All my platforms do exactly
> what I want for division by zero: they generate a properly signed
> INF. Python chooses to override that (IMO correct) platform
> behavior with something surprising. Python doesn't generate
> exceptions for other floating point "events" -- why the
> inconsistency with divide by zero?
I'm aware the result is arguable and that professional users may prefer
+INF for 1./0. However, Python does the least surprising thing: it
raises an exception, because everybody has learned at school that 1/0
is not allowed.

From the PoV of a mathematician Python does the right thing, too. 1/0
is not defined; only lim(1/x) as x -> 0 from the right is +INF. From
the PoV of a numerics guy it's surprising.

Do you suggest that 1./0. should result in +INF [1]? And what should be
the result of 1/0?

Christian

[1] http://en.wikipedia.org/wiki/Division_by_zero#Division_by_zero_in_computer_arithmetic

--
http://mail.python.org/mailman/listinfo/python-list
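To make the two behaviors being discussed concrete, here is a quick
sketch (plain CPython, no third-party modules): division by zero raises
for both int and float operands, even though IEEE 754 infinities are
perfectly representable Python floats.

```python
# Python raises ZeroDivisionError for float division by zero instead of
# returning the IEEE 754 signed infinity the hardware would produce.
try:
    1.0 / 0.0
except ZeroDivisionError as exc:
    print("float division:", exc)

# Integer division by zero raises as well.
try:
    1 / 0
except ZeroDivisionError as exc:
    print("int division:", exc)

# The platform's infinities are still reachable once you construct one:
inf = float("inf")
print(inf, -inf)   # inf -inf
print(1.0 / inf)   # 0.0 -- no exception; only a zero divisor is checked
```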