On 02/15/2011 10:40 PM, Daniel Gibson wrote:
On 15.02.2011 20:15, Rainer Schuetze wrote:

I think David has raised a good point here that seems to have been lost in the
discussion about naming.

Please note that in C the machine-word integer was usually "int". The C
standard only specifies a minimum bit-size for the different types
(see for example http://www.ericgiguere.com/articles/ansi-c-summary.html). Most
current C/C++ implementations have identical "int" sizes, but "long" now
differs between them. This approach has failed and has caused many headaches
when porting software from one platform to another. D has recognized this and
has explicitly defined the bit-size of the various integer types. That's good!

Now, with size_t, the distinction between platforms creeps back into the
language. It is everywhere across Phobos, be it as the length of ranges or the
size of containers. This can become viral, as everything that touches these
values may have to stick to size_t. Is this really desired?
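For illustration, a minimal D sketch of the difference (my own example, not
from the original mail): the named integer types have fixed widths, while
size_t follows the target.

import std.stdio;

void main()
{
    // These widths are fixed by the D language, on every platform:
    writeln(int.sizeof);    // 4
    writeln(long.sizeof);   // 8
    // This one follows the target: 4 on 32-bit, 8 on 64-bit.
    writeln(size_t.sizeof);

    // Lengths are size_t, so code handling them tends to become
    // size_t as well -- the "viral" effect mentioned above.
    int[] arr = [1, 2, 3];
    size_t n = arr.length;
    writeln(n);
}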

Consider saving an array to disk, trying to read it on another platform. How
many bits should be written for the size of that array?


This can indeed be a problem, and it actually exists in Phobos: std.stream's
OutputStream has a write(char[]) method - and similar methods for wchar and
dchar - that does exactly this: it writes a size_t first and then the data. In
many places they used uint instead of size_t, but in the one method where this
is a bad idea they used size_t ;-) (see also
http://d.puremagic.com/issues/show_bug.cgi?id=5001 )

In general I think that you just have to define how you serialize data to
disk/net/whatever (what endianness, what exact types) and you won't have
problems. Just dumping the data to disk isn't portable anyway.
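For example, a minimal sketch of that approach in D (the helper name and the
choice of a little-endian uint length are assumptions for illustration, not
anything Phobos defines):

import std.stdio;

// Write the array with an explicitly chosen on-disk format:
// a 32-bit little-endian length, followed by the raw bytes.
void writeArray(File f, const(ubyte)[] data)
{
    assert(data.length <= uint.max);
    uint len = cast(uint) data.length;   // fixed 32-bit length, not size_t
    ubyte[4] lenBytes;
    foreach (i; 0 .. 4)                  // little-endian, byte by byte
        lenBytes[i] = cast(ubyte)(len >> (8 * i));
    f.rawWrite(lenBytes[]);
    f.rawWrite(data);
}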

How do you, in general, cope with the issue that, when using machine-size types, programs or (program+data) combinations will work on some machines and not on others? This disturbs me a lot. I prefer a constant range of applicability (the same behaviour everywhere), even if it is artificially reduced on some set of machines.
A similar reflection applies to "infinite"-size numbers.

Note that this is different from using machine-sized (unsigned) integers on the implementation side, for implementation reasons. That could be done, I guess, without causing language-side issues: int, for instance, could on the implementation side be the same thing as long on a 64-bit machine, yet still be semantically limited to 32 bits, so that the code works the same way on all machines.
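Roughly the following idea, sketched as library code purely for illustration
(a real compiler would do this transparently; the struct and its name are made
up):

struct Int32OnWord
{
    size_t storage;   // may occupy a full machine word

    Int32OnWord opBinary(string op : "+")(Int32OnWord rhs) const
    {
        size_t r = storage + rhs.storage;
        // Truncate to 32-bit semantics so the result is identical
        // on 32-bit and 64-bit machines.
        return Int32OnWord(r & 0xFFFF_FFFF);
    }
}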

Denis
--
_________________
vita es estrany
spir.wikidot.com
