Steven Schveighoffer wrote:

In addition, size_t isn't actually defined by the compiler. So the library controls the size of size_t, not the compiler. This should make it extremely portable.


I do not consider the language and the runtime completely separate when it comes to writing code. BTW, although size_t is defined in object.di, it is tied to compiler internals:

        alias typeof(int.sizeof) size_t;

and the compiler will make assumptions about this when creating array literals.
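
A quick sketch of what that implies in practice (assuming a reasonably current dmd/druntime):

        import std.stdio;

        void main()
        {
            // size_t is just typeof(int.sizeof), so it follows the target:
            // 32 bits on a 32-bit build, 64 bits on a 64-bit build.
            static assert(is(size_t == typeof(int.sizeof)));

            // Array lengths (and the index type used for literals and
            // bounds checks) are size_t, not a fixed-width integer.
            int[] literal = [1, 2, 3];
            static assert(is(typeof(literal.length) == size_t));

            writeln("size_t is ", size_t.sizeof * 8, " bits here");
        }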

Consider saving an array to disk and then reading it back on another platform. How many bits should be written for the size of that array?

It depends on the protocol or file format definition. It should be irrelevant what platform/architecture you are on. Any format or protocol worth its salt will define what size of integer you should store.

Agreed, the example probably was not the best one.
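
For what it's worth, a format that pins the length field to a fixed width (say, a little-endian 64-bit count) makes the question go away regardless of the platform's size_t. A rough sketch, nothing more (writeIntArray is made up, and element endianness is glossed over):

        import std.bitmanip : nativeToLittleEndian;
        import std.stdio : File;

        // The format, not the platform, dictates the width of the count.
        void writeIntArray(File f, const(int)[] arr)
        {
            ubyte[ulong.sizeof] len = nativeToLittleEndian(cast(ulong) arr.length);
            f.rawWrite(len[]);   // always 8 bytes, on 32- and 64-bit alike
            f.rawWrite(arr);     // payload written in native endianness here
        }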

I don't have a perfect solution, but maybe built-in arrays could be limited to 2^^32-1 elements (or 2^^31-1, to get rid of the endless signed/unsigned conversions), so the normal index type would still be "int". Ranges should adopt the length/index types of their underlying objects.
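
The signed/unsigned friction shows up as soon as an int meets a size_t, for example:

        import std.stdio;

        void main()
        {
            int[] arr;                    // empty, length == 0
            int i = 0;

            // arr.length is size_t, so the subtraction wraps around to
            // size_t.max, and i is converted to unsigned for the comparison.
            writeln(i < arr.length - 1);  // prints "true", not "false"
        }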

No, this is too limiting. If I have 64GB of memory (not out of the question), and I want to have a 5GB array, I think I should be allowed to. This is one of the main reasons to go to 64-bit in the first place.

Yes, that's the imperfect part of the proposal. An array of ints could still use up to 16 GB, though.
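
Back-of-the-envelope: uint.max elements of 4 bytes each is just shy of 16 GiB:

        enum ulong maxElems = uint.max;              // 2^^32 - 1 elements
        enum ulong maxBytes = maxElems * int.sizeof; // 4 bytes per int
        static assert(maxBytes == 17_179_869_180);   // ~16 GiB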

What bothers me is that you have to deal with these "portability issues" from the moment you store an array's length anywhere outside the program. Not a big deal, and I don't think it will change, but it still feels a bit awkward.
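
E.g. the moment you dump arr.length as-is, the field is 4 bytes on a 32-bit build and 8 bytes on a 64-bit build of the same source (a sketch; the function name is made up):

        import std.stdio : File;

        // Non-portable: the width of the length field tracks the platform.
        void writeLengthNaively(File f, const(int)[] arr)
        {
            auto len = arr.length;       // typed size_t
            f.rawWrite((&len)[0 .. 1]);  // 4 or 8 bytes, depending on the target
        }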
