On Wednesday, 19 November 2014 at 11:43:38 UTC, John Colvin wrote:
On Wednesday, 19 November 2014 at 11:04:05 UTC, Matthias Bentrup wrote:
On Wednesday, 19 November 2014 at 10:03:35 UTC, Don wrote:
On Tuesday, 18 November 2014 at 18:23:52 UTC, Marco Leise wrote:
On Tue, 18 Nov 2014 15:01:25 +0000, "Frank Like"
<1150015...@qq.com> wrote:

> But now, 'int' is enough for use: not huge and not small, just
> enough. 'int' is easy to write, and most people are used to it.
> Most importantly, it is easier to migrate code if 'length''s
> return value type is 'int'.

What do you think of this idea?

I get the feeling of a broken record right now...
Clearly size_t (which I tend to alias as ℕ in my code for
brevity and coolness) can express more than 2^31-1 items, which
is appropriate to reflect the increase in usable memory per
application on 64-bit platforms. Yes, the 64-bit version of a
program or library can handle larger data sets, just like when
people transitioned from 16-bit to 32-bit. I won't use `int`
when the technically correct thing is `size_t`, even if it is a
little harder to type.

This is difficult. Having arr.length return an unsigned type is a dreadful language mistake.

Aside from the size factor, I personally prefer unsigned types
for countable stuff like array lengths. Mixed arithmetic
decays to unsigned anyway, and you don't need checks like
`assert(idx >= 0)`. It is a matter of taste though, and others
prefer languages with no unsigned types at all.
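
For illustration, a minimal D example of that decay (int and uint
are both fixed at 32 bits in D):

void main()
{
    int i = -1;
    uint u = 0;

    // Usual arithmetic conversions: the int operand is converted
    // to uint, so the result type is uint and -1 becomes uint.max.
    static assert(is(typeof(i + u) == uint));
    assert(i + u == uint.max);
}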


No! No! No! This is completely wrong. Unsigned does not mean "positive". It means "no sign", and therefore "wrapping semantics".
E.g. `length - 4 > 0` is true if length is 2.

Weird consequence: using subtraction with an unsigned type is nearly always a bug.
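
To make the trap concrete (the commented-out loop is the classic
form of the bug):

void main()
{
    int[] arr = [1, 2];

    // arr.length is size_t (unsigned): 2 - 4 wraps around to
    // size_t.max - 1, so this comparison is true.
    assert(arr.length - 4 > 0);

    // And the classic bug: i >= 0 is always true for unsigned i,
    // so this loop never terminates normally; i wraps past zero.
    // for (size_t i = arr.length - 1; i >= 0; --i) { ... }
}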

I wish D hadn't called unsigned integers 'uint'. They should have been called '__uint' or something. They should look ugly. You need a very, very good reason to use an unsigned type.

We have a builtin type that is deadly but seductive.

int has the same wrapping semantics too, only it wraps to negative numbers instead of to zero.

No. Signed types do not *wrap*. They *overflow* if their range is exceeded.
This is not the same thing. Overflow is always an error.
And the compiler could insert checks to detect this.

That's not possible for unsigned types. With an unsigned type, wrapping is part of the semantics.

Moreover, hitting an overflow with a signed type is an exceptional situation. Wrapping with an unsigned type is entirely normal, and happens with things like 1u - 2u.
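
In code (current D implementations do wrap signed ints as well, but
the point is that a checked build could trap the signed case without
breaking any valid program, while it could never trap the unsigned
one):

void main()
{
    // Unsigned wrapping is defined, everyday behaviour:
    uint u = 1;
    assert(u - 2 == uint.max); // wraps by design

    // Signed overflow is an error condition; a compiler could
    // legitimately insert a check here. Current implementations
    // silently wrap to int.min.
    int i = int.max;
    ++i;
    assert(i == int.min);
}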

If you insist on a non-wrapping length, it should return double
or long double.

Which would be totally wrong for different reasons.

Short of BigInts or overflow-checking, there is no perfect option.

An overflow-checked type that could be reasonably well optimised would be nice, as mentioned by bearophile many times.
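
A minimal sketch of what such a type could look like, built on
druntime's core.checkedint (the CheckedInt name and the
assert-on-overflow policy are just illustrative choices, not a
concrete proposal):

import core.checkedint : adds, subs;

struct CheckedInt
{
    int value;

    CheckedInt opBinary(string op : "+")(CheckedInt rhs) const
    {
        bool overflow;
        immutable r = adds(value, rhs.value, overflow);
        assert(!overflow, "signed overflow");
        return CheckedInt(r);
    }

    CheckedInt opBinary(string op : "-")(CheckedInt rhs) const
    {
        bool overflow;
        immutable r = subs(value, rhs.value, overflow);
        assert(!overflow, "signed overflow");
        return CheckedInt(r);
    }
}

void main()
{
    auto a = CheckedInt(int.max);
    auto b = a - CheckedInt(1);    // fine
    // auto c = a + CheckedInt(1); // would trip the assert
}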

I don't think we need to worry about the pathological cases. The problem with unsigned size_t is that it introduces inappropriate semantics everywhere for the sake of the pathological cases.

IMHO the correct solution is to say that the length of a slice cannot exceed half of the address space; otherwise a runtime error occurs. And then make size_t a positive integer.

Then let typeof(size_t - size_t) be ptrdiff_t (signed) instead of size_t (unsigned). All other operations stay as size_t.
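
A hypothetical sketch of those semantics (Length is an illustrative
name, not an actual proposal):

struct Length
{
    size_t value;

    // The proposed guarantee: a length never exceeds half the
    // address space, so any difference fits in ptrdiff_t.
    invariant() { assert(value <= size_t.max / 2); }

    // Subtracting two lengths yields a *signed* result.
    ptrdiff_t opBinary(string op : "-")(Length rhs) const
    {
        return cast(ptrdiff_t) value - cast(ptrdiff_t) rhs.value;
    }
}

void main()
{
    auto a = Length(2), b = Length(4);
    assert(a - b == -2); // no surprise wrap-around
}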

Perhaps we can get most of the way, by improving range propagation.

