Marcin Dalecki wrote:
> On 2006-12-20, at 22:48, Richard B. Kreckel wrote:
>> 2) Signed types are not an algebra, they are not even a ring, at
>> least when their elements are interpreted in the canonical way as
>> integer numbers. (Heck, what are they?)
> You are apparently using a different definition of an algebra or ring
> than the common one.
What I was talking about was this:
<http://en.wikipedia.org/wiki/Algebra_over_a_field>.
In the absence of a modulus (i.e. "wrapping"), all the operations (the
vector space's addition and the algebra's multiplication) run into
problems as long as one maintains the canonical homomorphism (i.e.
identification with the integer numbers 0, 1, 5...).
>> Integral types are an incomplete representation of the calculation
>> domain, which is the natural numbers.
> This is an arbitrary assumption. In fact, most people are simply well
> aware of the fact that computers don't do infinite-precision
> arithmetic.
But the same applies to floating point numbers. There, the situation is
even better, because nowadays I can rely on a float or double being the
representation defined in IEEE 754, because there is such overwhelming
hardware support. The variety of int sizes encountered nowadays is
greater. Case in point: during the last couple of years, I've not seen
any nonstandard floating point storage representation. On the other
hand, last year 16-bit ints were inflicted upon me (an embedded target),
and on UNICOS-MAX I found the 64-bit ints slightly irritating, too.
> You are apparently confusing natural numbers, which don't include
> negatives, with integers.
Right, I used the wrong term.
> However, it's quite a common mistake to forget how "bad" floats
> "model" real numbers.
And it's quite a common mistake to forget how "bad" finite ints "model"
integer numbers.
This corroborates the validity of the analogy with IEEE real arithmetic.
> And wrong assumptions lead to wrong conclusions.
Which assumption was wrong?
-richy.
--
Richard B. Kreckel
<http://www.ginac.de/~kreckel/>