On Tuesday, 2 April 2013 at 09:43:37 UTC, Jonathan M Davis wrote:
On Tuesday, April 02, 2013 09:49:03 Don wrote:
On Thursday, 28 March 2013 at 20:03:08 UTC, Adam D. Ruppe wrote:
> I was working on a project earlier today that stores IP addresses in
> a database as a uint. For some reason though, some addresses were
> coming out as 0.0.0.0, despite the fact that there's an
> if(ip == 0) return; in the only place it actually saves them (which
> was my first attempted quick fix for the bug).
>
> Turns out the problem was this:
>
> if (arg == typeid(uint)) {
>     int e = va_arg!uint(_argptr);
>     a = to!string(e);
> }
>
>
> See, I copy/pasted it from the int check, but didn't update the type
> on the left hand side. So it correctly pulled a uint out of the
> varargs, but then assigned it to an int, which the compiler accepted
> silently, so to!string() printed -blah instead of bigblah... which
> then got truncated by the database, resulting in zero being stored.
>
> I've since changed it to be "auto e = ..." and it all works
> correctly now.
>
>
>
> Anyway I thought I'd share this just because one of the many
> times bearophile has talked about this as a potentially buggy
> situation, I was like "bah humbug"... and now I've actually
> been there!
>
> I still don't think I'm for changing the language though, just
> because of potential annoyances in other places where unsigned works
> (such as array.length), but at least I've actually felt the other
> side of the argument in real world code now.
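
For anyone who wants to see the failure mode in isolation, here's a
minimal, self-contained sketch of the conversion Adam describes (the
address value is made up):

import std.conv : to;
import std.stdio : writeln;

void main()
{
    // e.g. 192.0.2.1 packed into a uint: a large positive number
    uint ip = 0xC0000201;

    int  e1 = ip;   // what the copy/pasted code did: accepted silently,
                    // the bit pattern is now read as a negative int
    auto e2 = ip;   // the fix: the variable keeps the type uint

    writeln(to!string(e1)); // -1073741311
    writeln(to!string(e2)); // 3221225985
}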
IMHO, array.length is *the* place where unsigned does *not* work.
size_t should be a signed integer. We're not supporting 16-bit
systems, and the few cases where a size_t value can potentially exceed
int.max could be disallowed.
The problem with unsigned is that it gets used as "positive integer",
which it is not. I think it was a big mistake that D turned C's
"unsigned long" into "ulong", thereby making it look more attractive.
Nobody should be using unsigned types unless they have a really good
reason. Unfortunately, size_t forces you to use them.
Naturally, the biggest reason to have size_t be unsigned is so that
you can access the whole address space, though on 64-bit machines,
that's not particularly relevant, since you're obviously not going to
have a machine with that much RAM (you're extremely unlikely to even
have a machine with that much hard drive space, though I think I've
heard of some 64-bit machines that have actually run into that
problem, as crazy as that would be). For some people though, it _is_ a
big deal on 32-bit machines. For instance, IIRC, David Simcha needed
64-bit support for some of the stuff he was doing (biology stuff, I
think), because he couldn't address enough memory on a 32-bit machine
to do what he was doing. And I know that one of the products where I
work is going to have to move to a 64-bit OS, because they're failing
at keeping its main process' memory footprint low enough to work on a
32-bit box. Having a signed size_t would make it even worse. Granted,
they're using C++, not D, but the issue is the same.
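
For concreteness, the arithmetic behind the 32-bit concern: with a
32-bit size_t, unsigned covers the full 4 GiB address space, while a
signed 32-bit size would stop at int.max bytes, about 2 GiB. A trivial
sketch:

import std.stdio : writefln;

void main()
{
    // full 32-bit address space vs. what a signed 32-bit size reaches
    writefln("uint.max = %s bytes (~4 GiB)", uint.max);
    writefln("int.max  = %s bytes (~2 GiB)", int.max);
}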
My feeling is that, since the 16-bit days, using more than half of the
address space is such an unusual activity that it deserves special
treatment in the code.
I don't think it's unreasonable to require a cast for every use of
those super-sized sizes.
Even if you have an array which doesn't fit into an int, you can
only have one such array in your program!
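
Something like the following is what I have in mind: a hypothetical
helper (name and details made up) that hands back a signed length and
makes the oversized case an explicit, visible step rather than a silent
source of unsigned arithmetic:

import std.conv : to;

// Hypothetical helper: expose a length as a signed int; std.conv.to
// throws if the length doesn't fit, so the rare > int.max case has to
// be handled explicitly.
int ilength(T)(const(T)[] arr)
{
    return arr.length.to!int;
}

void main()
{
    auto arr = new int[](10);

    // With a signed length, "all but the last element" behaves the way
    // you'd expect even for an empty array: the subtraction goes to -1
    // and the loop simply doesn't run.
    foreach (i; 0 .. arr.ilength - 1)
    {
        // ...
    }
}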
This really, really obscure corner case doesn't deserve to be
polluting the language.
All those signed/unsigned issues basically come from it. It's a
helluva price to pay.
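
To make the cost concrete, here's the kind of thing that bites people
today purely because length is unsigned (a minimal sketch):

import std.stdio : writeln;

void main()
{
    int[] arr;   // empty

    // Intended to mean "more than one element", but arr.length - 1
    // wraps around to size_t.max instead of -1, so this is true even
    // for an empty array.
    if (arr.length - 1 > 0)
        writeln("oops: ", arr.length - 1);

    // The same wraparound in plain arithmetic:
    size_t a = 2, b = 3;
    writeln(a - b);   // 18446744073709551615 on 64-bit, not -1
}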
It's looking like an even worse deal now, because anybody with
large memory requirements will be on 64 bits. We've made this
sacrifice for the sake of a situation that is no longer relevant.