Why not define gcd a b as the largest (in the 'normal' order on the
integers) d such that the set of integer combinations of a and b,
{na+mb | n <- Z, m <- Z}, is equal to the set of multiples of d,
{nd | n <- Z}? It's easy to understand, involves no talk of division,
lattices, rings, ideals et cetera, and it covers the cases with 0: for
instance gcd 0 0 = 0, since both sets are then just {0}.
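For concreteness, here is a minimal sketch of how that definition could
be coded up (the name gcd' is mine, to avoid clashing with the Prelude
gcd). It relies on the fact that the smallest positive element of
{na+mb | n <- Z, m <- Z} generates the whole set, so the largest valid
d equals the smallest positive combination:

  -- Naive but faithful to the set-based definition above.
  gcd' :: Integer -> Integer -> Integer
  gcd' 0 0 = 0   -- both sets are {0}, so d = 0
  gcd' a b = head [ d | d <- [1 ..], isComb d ]
    where
      -- d = n*a + m*b for some integers n, m
      -- iff d - n*a is divisible by b for some n in [0 .. |b|-1]
      isComb d
        | b == 0    = d `mod` abs a == 0
        | otherwise = any (\n -> (d - n * a) `mod` b == 0) [0 .. abs b - 1]

For example, gcd' 12 18 = 6 and gcd' 0 5 = 5. It's hopelessly
inefficient compared to Euclid's algorithm, of course, but it shows the
definition really does pin down a unique answer, 0 cases included.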

Cheers, Jan de Wit


