Florian has already hinted at it, but I will say it again explicitly:

Mathematically, gcd and lcm are not operations on the elements of a ring, but on the equivalence classes w.r.t. associatedness (i.e. mutual divisibility). In fact, these equivalence classes form a complete lattice w.r.t. divisibility, where inf = gcd, sup = lcm, Inf = Gcd, Sup = Lcm, bot = 1, top = 0.
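
Concretely, for the integers (identifying each $a$ with $-a$) this reads:

  $a \sqsubseteq b \iff a \mid b$,  with  $a \sqcap b = \gcd(a, b)$  and  $a \sqcup b = \mathrm{lcm}(a, b)$;
  $\bot = 1$  (since $1 \mid a$ for all $a$),  $\top = 0$  (since $a \mid 0$ for all $a$);
  e.g.  $4 \sqcap 6 = \gcd(4, 6) = 2$  and  $4 \sqcup 6 = \mathrm{lcm}(4, 6) = 12$.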

However, juggling equivalence classes is inconvenient; it is much nicer to use representatives, and ideally canonical representatives. This is why, in Isabelle, we require that the gcd/lcm be normalised. For natural numbers, everything is normalised; for integers, this means that the integer is non-negative; for polynomials, it means that the leading coefficient is normalised (if the coefficient ring is a field, this means the polynomial is zero or has leading coefficient 1).
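
For instance, normalising an integer just removes the sign, and
normalising a non-zero polynomial over a field divides by the leading
coefficient:

  $\mathrm{normalize}(-6) = 6$  over $\mathbb{Z}$,  and
  $\mathrm{normalize}(2x^2 - 4x) = x^2 - 2x$  over $\mathbb{Q}[x]$.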

I do think that we should enforce the same thing in the ML implementation of gcd/lcm. Any definition of gcd/lcm for integers where either result may be negative does not make much sense to me. My guess would be that lcm can be negative in the implementation you mentioned because the author defined "lcm a b = a * b / gcd a b" with the unstated assumption that it is only called for non-negative numbers. Or perhaps they thought the sign does not matter.
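
Indeed, even with a sign-normalised gcd, that formula leaks the sign of
the product into the result:

  $\frac{-4 \cdot 6}{\gcd(-4, 6)} = \frac{-24}{2} = -12$,

whereas the normalised lcm of $-4$ and $6$ is $12$.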


By the way, speaking of GCDs here and of rational numbers in the other thread: I am currently working on a theory of normalised fractions for arbitrary GCD rings. This builds on top of Amine Chaieb's fraction fields, but additionally requires that the numerator and denominator be coprime and the denominator be normalised, which makes the representation unique and therefore more convenient.
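
To give an idea of the invariant, here is a purely illustrative sketch
over the integers in ML (not the HOL development itself): the
constructor keeps numerator and denominator coprime and the denominator
positive, which is exactly what makes each value have a unique
representation.

  (* Illustrative sketch only: uniqueness invariant of normalised
     fractions over the integers. *)
  datatype fract = Fract of int * int;

  (* sign-normalised gcd (Euclid), result always >= 0 *)
  fun gcd x 0 = abs x
    | gcd x y = gcd y (x mod y);

  fun mk_fract (_, 0) = raise Div
    | mk_fract (a, b) =
        let
          val g = gcd a b;                    (* g > 0 since b <> 0 *)
          val (a', b') = (a div g, b div g);  (* now coprime *)
        in if b' < 0 then Fract (~ a', ~ b') else Fract (a', b') end;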

This theory can then easily be instantiated for polynomials and formal power series, yielding rational functions and Laurent series, respectively. One could even base Isabelle's rational number theory on this more general development in the future.


Cheers,

Manuel



On 31/05/16 21:37, Makarius wrote:
Dear integer experts,

presently on Isabelle/c3896c385c3e and AFP/d9305abd02e9, I am trying to
understand the situation of gcd / lcm for negative arguments.

We have the following slightly divergent operations:

  * HOL gcd / lcm: always >= 0 (via "normalize" which is "abs")

  * PolyML.IntInf.gcd: always >= 0
    PolyML.IntInf.lcm: mixed signs, according to product of arguments

  * Integer.gcd / lcm: mixed signs -- IIRC the implementation was only
meant to operate on "nat", which happens to be spelled "int" in ML.

My impression is that the HOL definitions are canonical: several number
theory experts have gone over them over the years. Is this correct?


An investigation of the (very few) uses of the above Poly/ML and
Isabelle/ML operations in Isabelle + AFP supports the following working
hypothesis:

Integer.gcd / lcm should follow the HOL versions, by "normalizing" the
sign, i.e. removing it. (The updated implementation will use
PolyML.IntInf.gcd for improved performance.)
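
Roughly, the updated operations could look like this (only a sketch; it
assumes PolyML.IntInf.gcd takes a pair of integers, and it normalizes
the sign of lcm via abs, following the HOL convention):

  (* sketch: sign-normalized gcd/lcm on ML int *)
  fun gcd x y = PolyML.IntInf.gcd (x, y);   (* result always >= 0 *)

  fun lcm x y =
    if x = 0 orelse y = 0 then 0
    else abs (x * y) div gcd x y;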


Does this sound like a good idea?


        Makarius
_______________________________________________
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
