In response to Marnix Klooster <[EMAIL PROTECTED]> and S.D.Mechveliani <[EMAIL PROTECTED]>, can I offer another mathematical perspective on why gcd(0,0) should be zero?

Looking just at the natural numbers, the relation "a divides b", written a|b, defines a partial order; we can write a<=b (in this context) for a|b. This partial order is a (complete) lattice: the greatest lower bound of two numbers a and b is just gcd(a,b), and the least upper bound is lcm(a,b) = a*b/gcd(a,b). The number 1 is the bottom element (clearly 1|a for every a), and 0 is the top element (a|0 for every a).

If we look at gcd from this perspective (as computing the meet of two elements of a lattice), then it is obvious that gcd(a,a) = a for every a, and that 0 needs no special treatment. It is true that my formula for lcm breaks down when a = b = 0, but that is not an argument for breaking gcd - after all, the Haskell Prelude gets lcm right! Certainly there's no reason for the clause

    gcd 0 0 = error "Prelude.gcd: gcd 0 0 is undefined"

in the Prelude!

P.S. On domains other than N, a|b is only a pre-order, which makes the canonical choice of gcd(a,b) less determinate. For Z the choice is obvious; for R, gcd 0 0 = 0 and gcd a b = 1 otherwise seems as good a choice as any; I'm a bit hazy about how gcd behaves on Q (the rationals).
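
To make the point concrete, here is a minimal sketch (the name myGcd is mine, not the Prelude's) of a total gcd on Integer that agrees with the lattice reading, so that gcd 0 0 = 0 falls out of the ordinary Euclidean recursion with no special case:

    -- A sketch, not the Prelude's definition: Euclid's algorithm made
    -- total, computing the meet in the divisibility lattice (0 on top).
    myGcd :: Integer -> Integer -> Integer
    myGcd a 0 = abs a                 -- in particular, myGcd 0 0 = 0
    myGcd a b = myGcd b (a `rem` b)

    -- The lattice identities then hold uniformly:
    --   myGcd a a == abs a    (idempotence, including a == 0)
    --   myGcd a 0 == abs a    (0 is the top element: a|0 for all a)
    --   myGcd 1 a == 1        (1 is the bottom element: 1|a for all a)

The base case myGcd a 0 = abs a is exactly the statement that 0 is the top of the lattice, and it already covers the (0,0) case that the Prelude singles out as an error.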