On Friday, 21 November 2014 at 16:12:19 UTC, Don wrote:
> It is not uint.max. It is natint.max. And yes, that's an overflow condition.

> Exactly the same as when you do int.max + int.max.

That depends on how you look at it. From a formal perspective, take zero as the base and define a predecessor function P and a successor function S.

Then you have 0u - 1u + 2u ==> SSP0

Then you normalize by cancelling out successor/predecessor pairs and you get the result S0 ==> 1u. If, on the other hand, you end up with P0, the result should be bottom (an error).
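
To make that concrete, here is a rough sketch in D (the normalize helper is purely hypothetical illustration code, nothing from the standard library): write the term as a string of S and P steps, cancel the pairs, and only reject the expression if a P is left over at the end.

import std.stdio;

// Hypothetical sketch: a term is written as a string of S (successor)
// and P (predecessor) steps applied to 0. Normalization cancels S/P
// pairs; any leftover P means the value dropped below zero: bottom.
long normalize(string term)
{
    long net = 0;                  // net count of S minus P
    foreach (c; term)
        net += (c == 'S') ? 1 : -1;
    assert(net >= 0, "P0 left over: bottom (error)");
    return net;
}

void main()
{
    writeln(normalize("SSP"));     // SSP0 ==> S0 ==> 1u
    // normalize("P");             // P0 ==> bottom (error)
}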

In a binary representation you need to collect the carries over N terms, so you need an extra accumulator, which you can get by extending the precision by ~log2(N) bits. Then mask off the most significant bits to check for over/underflow.

Advanced for a compiler, but possible.
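
A rough sketch of what that would look like as ordinary D code (a hypothetical foldChecked helper, not anything a compiler does today): accumulate the terms in 64 bits so the 32 extra bits absorb the carries and borrows, then test the high bits at the end.

import std.stdio;

// Sketch: accumulate signed steps on a uint value in a 64-bit
// accumulator; the 32 extra bits cover the carries/borrows of up to
// ~2^32 terms. A final look at the high bits tells you whether the
// normalized result is still a valid uint.
uint foldChecked(const long[] terms)
{
    long acc = 0;
    foreach (t; terms)
        acc += t;                        // no wrapping at 32 bits here
    // High 32 bits must be zero for the result to fit 0 .. uint.max.
    assert((cast(ulong) acc >> 32) == 0, "over/underflow of the uint range");
    return cast(uint) acc;
}

void main()
{
    writeln(foldChecked([0, -1, 2]));    // 0u - 1u + 2u ==> 1
    // foldChecked([0, -1]);             // would trip the check
}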

> The type that I think would be useful, would be a number in the range 0..int.max.
> It has no risk of underflow.

Yep, from a correctness perspective a length should be an integer with a >= 0 constraint. Ada acknowledges this too by making its non-negative integer type effectively 31 bits, like you suggest. And now that most CPUs are 64-bit, a 63-bit integer would be the right choice for array length.
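
Something along these lines, say a hypothetical Length wrapper over a signed 64-bit word (purely an illustration, not an existing Phobos type):

// Hypothetical sketch of a length type restricted to 0 .. long.max:
// stored as a 64-bit signed word so there is only one interpretation
// of the bits, with the sign acting as the underflow detector.
struct Length
{
    private long value;

    this(long v)
    {
        assert(v >= 0, "length out of range");
        value = v;
    }

    Length opBinary(string op)(long rhs) const
        if (op == "+" || op == "-")
    {
        return Length(mixin("value " ~ op ~ " rhs"));
    }

    long get() const { return value; }
}

unittest
{
    auto n = Length(10);
    assert((n - 3).get == 7);
    // Length(10) - 11 would fail the constructor assert in a debug build.
}

A debug build then catches a length dropping below zero at the point where it happens, instead of silently producing a huge unsigned value.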

> unsigned types are not a subset of mathematical integers.
>
> They do not just have a restricted range. They have different semantics.
>
> The question of what happens when a range is exceeded is a different question.

In principle there is really no difference between signed and unsigned, since you only have an offset. In practical programming, though, 64-bit signed and 63-bit unsigned are enough for most situations, with the advantage that you get the same bit representation with only one interpretation.
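
For instance, every value in the 63-bit range reads the same whether you view the word as long or ulong; the two interpretations only diverge once the top bit is used:

import std.stdio;

void main()
{
    // A 63-bit value: same bits, same meaning under both views.
    ulong u = long.max;               // 0x7FFF_FFFF_FFFF_FFFF
    writeln(u, " ", cast(long) u);    // identical

    // Only when the top bit is set do the interpretations differ.
    ulong v = u + 1;
    writeln(v, " ", cast(long) v);    // 9223372036854775808  -9223372036854775808
}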

What the semantics are depends on how you define the operators, right? So you can have both modular and non-modular arithmetic in the same type by providing more operators. That is, after all, how the hardware does it.
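
If I remember correctly, druntime already does something like this with core.checkedint, which puts a non-wrapping flavour of the operators next to the built-in modular ones on the same uint:

import core.checkedint : subu;   // druntime's checked helpers, if I recall correctly
import std.stdio;

void main()
{
    uint a = 0u, b = 1u;

    // Modular operator: the built-in uint "-" wraps by definition.
    writeln(a - b);                       // 4294967295, i.e. uint.max

    // Non-modular operator on the same type: the borrow is surfaced
    // instead of being silently discarded.
    bool overflow = false;
    uint r = subu(a, b, overflow);
    writeln(r, " overflow=", overflow);   // wrapped value plus overflow=true
}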

Contrary to what others in this thread have claimed, a general hardware ALU does not default to modular arithmetic; it preserves resolution:

32bit + 32bit ==> 33bit result
32bit * 32bit ==> 64bit result

Modular arithmetic is an artifact of the language, not the hardware.
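
You can see the same thing from D by widening before the operation: the full-width result is still available, and the wrapped value only appears once it is narrowed back to 32 bits.

import std.stdio;

void main()
{
    uint x = uint.max, y = uint.max;

    // 32bit + 32bit kept as a 33-bit (here 64-bit) result: the carry
    // is preserved instead of discarded.
    writeln(cast(ulong) x + y);   // 8589934590

    // 32bit * 32bit kept as a full 64-bit result.
    writeln(cast(ulong) x * y);   // 18446744065119617025

    // The modular answers are what you get once the results are
    // narrowed back down to 32 bits.
    writeln(x + y);               // 4294967294
    writeln(x * y);               // 1
}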
