> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> Behalf Of Jan de Wit
> Sent: den 19 december 2001 01:15
> To: [EMAIL PROTECTED]
> Subject: Re: gcd 0 0 = 0
>
>
> Why not define gcd a b as the largest (in 'normal' order)
> integer d such that ...
Why not define gcd a b as the largest (in 'normal' order) integer d such
that the set of sums of multiples of a and b, {na+mb | n <- Z, m <- Z}, is
equal to the set of multiples of d, {nd | n <- Z}?  Easy to understand, no
talk of division, lattices, rings, ideals etcetera, and it covers the cases
with ...
... cases, there is a natural choice for d (e.g., in
the integers, the non-negative d; in the ring of polynomials over a
field, the monic d (having leading coeff. 1)).  In some UFDs there is no
canonical choice (e.g. in the Gaussian integers, a + ib for integers a, b).
gcd(0, 0) = 0.
Cheers,
Michael
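To make the set-based definition concrete, here is a small brute-force
Haskell sketch (my own illustration, not from the thread; the names combos
and generates are invented, and the check only ranges over a finite window
of coefficients).  It shows why d = 0 is the only possible choice when
a = b = 0: the only combination of 0 and 0 is 0 itself.

-- all combinations n*a + m*b with coefficients n, m in [-k .. k]
combos :: Int -> Integer -> Integer -> [Integer]
combos k a b = [ fromIntegral n * a + fromIntegral m * b
               | n <- [-k .. k], m <- [-k .. k] ]

-- does d generate the same set (approximated on the finite window)?
generates :: Int -> Integer -> Integer -> Integer -> Bool
generates k d a b =
     all (`multipleOf` d) (combos k a b)
  && all (\n -> n * d `elem` combos k a b) [ fromIntegral n' | n' <- [-k .. k] ]
  where
    x `multipleOf` 0 = x == 0
    x `multipleOf` y = x `rem` y == 0

-- generates 10 2 4 10 == True, generates 10 4 4 10 == False,
-- generates 10 0 0 0 == True, and no d > 0 works when a = b = 0.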
At least the books by Lang and MacLane-Birkhoff are standard references.
Note that the definitions all agree when they are defined (with gcd 0 0 = 0).
As I said, I was surprised; to me, the definition with all a and b is the
more natural one.  I still recommend using the full domain (especially since ...
... so below!), but that is different.  It is not the greatest (in the
ordinary sense) divisor of 0.  Indeed, +infinity is a much larger divisor
of 0...  I'm not in favour of using a very special-purpose order, not used
for anything else, and that isn't even an order but a preorder, just to
motivate ...
: In order to make the non-negative integers into a lattice under `gcd'
: and `lcm', we define `gcd 0 0 = 0'.
[snip]
: This is exactly what you get if you plug the relation 'divides' on the
: non-negative integers into the definition of meet in a lattice.  So
: this formulation is no more or less complex to use than the lattice
: one --- and ...
> : In order to make the non-negative integers into a lattice under `gcd'
> : and `lcm', we define `gcd 0 0 = 0'.
> It would surely make things a lot less accessible to people
> (including me) who do not have any (or limited) knowledge about
> lattices.  Why not make it more accessible and use the following ...
>>>>> "Simon" == Simon Peyton-Jones <[EMAIL PROTECTED]> writes:
Simon> Christoph does not like this
It's OK if the definition is clear; it wasn't using
the words "positive" or "greatest integer".
Stating "gcd 0 0 =
If everyone likes this I'll put it in; otherwise I'll simply state that
gcd 0 0 is defined to be 0.
Christoph does not like this, but the weight of world opinion seems
to be fairly clearly in favour of gcd 0 0 = 0.  Let's try to wrap this
one up.
Simon
: Indeed, that's a nice way of putting it.  How about if the report just
: says:
:
:    In order to make the non-negative integers into a lattice under `gcd'
:    and `lcm', we define `gcd 0 0 = 0'.
It would surely make things a lot less accessible to people (including
me) who do not have any (or limited) knowledge about lattices. ...
Ch. A. Herrmann ([EMAIL PROTECTED]) wrote:
: In contrast, 0*x=0, thus 0 "divides" 0 (somehow).
: But I have problems with "gcd being the greatest positive integer ..."
[snip]
: - 0 is not positive, it is non-negative or natural
: - 2 also divides 0 and 2 is a "greater integer" than 0
: (0 is ...
... and lcm.
Indeed, that's a nice way of putting it. How about if the report just
says:
In order to make the non-negative integers into a lattice under `gcd'
and `lcm', we define `gcd 0 0 = 0'.
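To see concretely what the lattice wording buys, here is a minimal sketch
(my own check, not proposed Report text) of the absorption laws that make
gcd and lcm the meet and join of a lattice on the non-negative integers.
With gcd 0 0 = 0 both laws hold for every pair, including x = y = 0; with
the current error they cannot.

-- absorption laws for the (gcd, lcm) lattice on the non-negative integers
absorb1, absorb2 :: Integer -> Integer -> Bool
absorb1 x y = gcd x (lcm x y) == x   -- meet with the join gives x back
absorb2 x y = lcm x (gcd x y) == x   -- join with the meet gives x back

-- bounded check; True with gcd 0 0 = 0, whereas the x = y = 0 case raises
-- the H98 Prelude error otherwise
latticeOK :: Bool
latticeOK = and [ absorb1 x y && absorb2 x y | x <- [0 .. 20], y <- [0 .. 20] ]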
>>>>> "George" == George Russell <[EMAIL PROTECTED]> writes:
George> I've reconsidered my earlier position and think now that the
George> Prelude is wrong to make gcd 0 0 an error, and should return
George> 0.  It probably doesn't make much difference to anyone ...
I've reconsidered my earlier position and think now that the Prelude is
wrong to make gcd 0 0 an error, and should return 0.  It probably doesn't
make much difference to anyone, but it's like 1 not being a prime; it may
be slightly harder to explain, but it makes the maths come out right.
(... gcd/lcm is equivalent to noting that
divides is a partial preorder on the non-zero integers, and that the
quotient identifies x and -x).
The only thing that is lacking to make it a lattice on the
non-negative integers is that gcd 0 0 = 0.  All other cases
involving zero (i.e., gcd 0 x = x for ...)
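A quick sketch (mine, using the Prelude as it stands) of those other zero
cases: they already treat 0 as the top element of the divisibility order,
so gcd 0 0 = 0 really is the only missing equation.

-- the zero cases other than gcd 0 0; everything divides 0, so 0 sits at
-- the top of the divisibility order on the non-negative integers
zeroCases :: Bool
zeroCases = and [ gcd 0 x == x && gcd x 0 == x &&
                  lcm 0 x == 0 && lcm x 0 == 0
                | x <- [1 .. 50] ]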
Marc van Dongen ([EMAIL PROTECTED]) wrote:
: An integer $a$ divides an integer $b$ if there exists an integer
: $c$ such that $ac = b$.
[snip]
: gcd 0 0 = 0; and
: gcd 0 0 /= error "Blah"
To make clear why $0$ (and not any non-zero integer) is the
gcd of $0$ and $0$, I suggest ...
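Whatever the rest of that message proposed, the divisibility relation
defined above is easy to write down; here is a sketch (mine, with my own
names).  It makes explicit that every integer divides 0, so in the
'divides' ordering 0 is the greatest non-negative integer, and the
greatest common divisor of 0 and 0 can only be 0.

-- a `divides` b  <=>  there is an integer c with a*c == b
divides :: Integer -> Integer -> Bool
divides 0 b = b == 0           -- 0*c is always 0, so 0 divides only 0
divides a b = b `rem` a == 0

-- every integer divides 0, including 0 itself
everythingDividesZero :: Bool
everythingDividesZero = all (`divides` 0) [-100 .. 100]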
From: "Simon Peyton-Jones" <[EMAIL PROTECTED]>
Date: Fri, 14 Dec 2001 01:18:56 -0800
...
If someone could write a sentence or two to explain why gcd 0 0 = 0,
(ideally, brief ones I can put in the report by way of explanation),
I think that might help those of us who have not followed the details
of the discussion.
Simon Peyton Jones ([EMAIL PROTECTED]) wrote:
: If someone could write a sentence or two to explain why gcd 0 0 = 0,
: (ideally, brief ones I can put in the report by way of explanation),
: I think that might help those of us who have not followed the details
: of the discussion.
Division in ...
... to say
greatest (positive) integer that divides both n and m
but debate seems to have swirled round whether (gcd 0 0) should
be 0 or an error. Currently in H98 it's an error; but it is the kind
of thing I'm willing to change IF there is a consensus, because it
will only make ...
From: "S.D.Mechveliani" <[EMAIL PROTECTED]>
Date: Thu, 13 Dec 2001 12:53:32 +0300
Further, the definition
> gcd(x, y) to be the smallest
> z >= 0 such that {m*x + n*y | m, n in Z} = {n*z | n in Z}
is not natural.  In particular, how does it generalize to gcd X Y
for polynomials ...
People write on gcd 0 0 :
Alan Bawden <[EMAIL PROTECTED]>
> If you take the point-of-view that gcd is actually an operation on
> ideals, then gcd(0, 0) is 0. I.e. define gcd(x, y) to be the smallest
> z >= 0 such that {m*x + n*y | m, n in Z} = {n*z | n in Z}.  This is ...
From: George Russell <[EMAIL PROTECTED]>
Date: Tue, 11 Dec 2001 18:18:31 +0100
...
Yes. The Report definition says
gcd :: (Integral a) => a -> a -> a
gcd 0 0 = error "Prelude.gcd: gcd 0 0 is undefined"
gcd x y = gcd' (abs x) (abs y)
          where gcd' x 0 = x
                gcd' x y = gcd' y (x `rem` y)
S.D.Mechveliani wrote
> Does the Report specify that gcd 0 0 is not defined?
Yes. The Report definition says
gcd :: (Integral a) => a -> a -> a
gcd 0 0 = error "Prelude.gcd: gcd 0 0 is undefined"
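For comparison, here is a sketch (my own, not Report wording) of the same
Euclidean algorithm with the error equation simply dropped; the base case
gcd' x 0 = x then delivers gcd 0 0 = 0 with no special pleading.  The name
gcd0 is only to avoid clashing with the Prelude's gcd.

-- the Report's algorithm minus the error equation
gcd0 :: Integral a => a -> a -> a
gcd0 x y = gcd' (abs x) (abs y)
  where gcd' a 0 = a                 -- in particular, gcd' 0 0 = 0
        gcd' a b = gcd' b (a `rem` b)

-- gcd0 0 0 == 0,  gcd0 0 n == abs n,  gcd0 12 18 == 6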
People write about the Report definition of
gcd x y
as the greatest integer that divides x and y,
and mention
> gcd 0 0 = 0
But 2 also divides 0, because 2*0 = 0.
Does the Report specify that gcd 0 0 is not defined?
For an occasion ...
Hello,
I just noticed that the Haskell Report leaves "gcd 0 0" undefined. Why is
that? I think the gcd of 0 and N should be |N|, the absolute value of N.
Haskell currently does just that, except when N=0: the result of "gcd 0 0"
is undefined instead of 0.  I propose to remove this exception.
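The proposal as a one-line property one could test (a sketch; the name
prop_gcdZero is mine, and it assumes gcd 0 0 = 0 rather than an error):

-- gcd 0 n should be |n| for every n, with no exception at n = 0
prop_gcdZero :: Integer -> Bool
prop_gcdZero n = gcd 0 n == abs n

-- bounded check; the n = 0 case is exactly the disputed one
checkAll :: Bool
checkAll = all prop_gcdZero [-100 .. 100]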