On Tuesday, April 02, 2013 09:49:03 Don wrote:
> On Thursday, 28 March 2013 at 20:03:08 UTC, Adam D. Ruppe wrote:
> > I was working on a project earlier today that stores IP
> > addresses in a database as a uint. For some reason though, some
> > addresses were coming out as 0.0.0.0, despite the fact that
> > if(ip == 0) return; in the only place it actually saves them
> > (which was my first attempted quick fix for the bug).
> >
> > Turns out the problem was this:
> >
> > if (arg == typeid(uint)) {
> >     int e = va_arg!uint(_argptr);
> >     a = to!string(e);
> > }
> >
> > See, I copy/pasted it from the int check, but didn't update the
> > type on the left hand side. So it correctly pulled a uint out
> > of the varargs, but then assigned it to an int, which the
> > compiler accepted silently, so to!string() printed -blah
> > instead of bigblah... which then got truncated by the database,
> > resulting in zero being stored.
> >
> > I've since changed it to be "auto e = ..." and it all works
> > correctly now.
> >
> > Anyway I thought I'd share this just because one of the many
> > times bearophile has talked about this as a potentially buggy
> > situation, I was like "bah humbug"... and now I've actually
> > been there!
> >
> > I still don't think I'm for changing the language though just
> > because of potential annoyances in other places unsigned works
> > (such as array.length) but at least I've actually felt the
> > other side of the argument in real world code now.
>
> IMHO, array.length is *the* place where unsigned does *not* work.
> size_t should be an integer. We're not supporting 16 bit systems,
> and the few cases where a size_t value can potentially exceed
> int.max could be disallowed.
>
> The problem with unsigned is that it gets used as "positive
> integer", which it is not. I think it was a big mistake that D
> turned C's "unsigned long" into "ulong", thereby making it look
> more attractive. Nobody should be using unsigned types unless
> they have a really good reason. Unfortunately, size_t forces you
> to use them.
Naturally, the biggest reason to have size_t be unsigned is so that you can access the whole address space. On 64-bit machines, that's not particularly relevant, since you're obviously not going to have a machine with that much RAM (you're extremely unlikely to even have a machine with that much hard drive space, though I think I've heard of some systems running into that limit, as crazy as that would be). For some people, though, it _is_ a big deal on 32-bit machines. For instance, IIRC, David Simcha needed 64-bit support for some of the stuff he was doing (biology stuff, I think), because he couldn't address enough memory on a 32-bit machine to do what he was doing. And I know that one of the products where I work is going to have to move to a 64-bit OS, because they're failing at keeping its main process' memory footprint low enough to work on a 32-bit box. Having a signed size_t would make it even worse. Granted, they're using C++, not D, but the issue is the same.

So, it's arguably important on 32-bit machines that size_t be unsigned, but 64-bit doesn't really have that excuse. However, making size_t unsigned on 32-bit machines and signed on 64-bit machines would create its own set of problems, and I suspect that would be an even worse idea than making size_t signed on 64-bit machines.

I do agree, though, that in general, unsigned types should be used with discretion, and they tend to be overused IMHO. I'm not convinced that that's the case with size_t, though, since 32-bit machines do make it a necessity sometimes.

- Jonathan M Davis