Mark Dickinson added the comment:

The current `gcd` definition is almost accidental, in that it just happens to 
be what's convenient for use in normalisation in the Fraction type.  If people 
are using it as a standalone implementation of gcd, independent of the 
fractions module, then defining the result to be always nonnegative is probably 
a little less surprising than the current behaviour.
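The behaviour in question can be seen with a short sketch of the Euclidean gcd as the fractions module defines it, alongside the always-nonnegative variant being proposed (the name `gcd_nonneg` is mine, for illustration):

```python
def gcd(a, b):
    """Euclidean algorithm as in fractions.gcd.  Because Python's %
    takes the sign of the divisor, a nonzero result takes the sign
    of the second argument."""
    while b:
        a, b = b, a % b
    return a

def gcd_nonneg(a, b):
    """Proposed behaviour: always return a nonnegative result."""
    return abs(gcd(a, b))

print(gcd(12, -8))         # -> -4  (sign follows the second argument)
print(gcd(-12, 8))         # -> 4
print(gcd_nonneg(12, -8))  # -> 4
```

For positive arguments the two definitions agree; they differ only in the sign of the result when the second argument is negative, which is exactly what matters for Fraction normalisation.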

BTW, I don't think there's a universally agreed definition for the extension of 
the gcd to negative numbers (and I certainly wouldn't take Rosen's book as 
authoritative: did you notice the bit where he talks about 35-bit machines 
being common?), so I don't regard the fractions module definition as wrong, per 
se.  But I do agree that the behaviour you propose would be less surprising.

One other thought: if we're really intending for gcd to be used independently 
of the fractions module, perhaps it should be exposed as math.gcd.  (That would 
also give the opportunity for an optimised C version.)
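(As it happens, a `math.gcd` with exactly this proposed behaviour did land later, in Python 3.5; on such versions the nonnegative result can be checked directly:)

```python
import math

# math.gcd accepts negative arguments and always returns a
# nonnegative result, matching the behaviour proposed above.
print(math.gcd(12, -8))   # -> 4
print(math.gcd(-12, -8))  # -> 4
```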

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue22477>
_______________________________________