On Tue, 15 Feb 2011 18:18:22 -0500, Rainer Schuetze <r.sagita...@gmx.de> wrote:


Steven Schveighoffer wrote:
In addition, size_t isn't actually defined by the compiler. So the library controls the size of size_t, not the compiler. This should make it extremely portable.


I do not consider the language and the runtime as completely separate when it comes to writing code.

You are right, in some cases the runtime just extends the compiler's features. However, I believe the runtime is meant to be used by multiple compilers, so I would expect object.di to remain the same, and probably core too. This should be easily checkable with the newer gdc, which I believe uses a port of druntime.

BTW, though defined in object.di, size_t is tied to some compiler internals:

        alias typeof(int.sizeof) size_t;

and the compiler will make assumptions about this when creating array literals.

This is true, and it does make size_t depend on the compiler. However, I believe the spec is concrete about what the type of .sizeof should be (if not, it should be made concrete).
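
For what it's worth, that assumption is easy to check; a minimal sketch (should compile with any dmd or gdc that ships druntime):

    // size_t is expected to track the architecture's pointer size,
    // which is what the alias in object.di implies:
    static assert(size_t.sizeof == (void*).sizeof);

    import std.stdio;

    void main()
    {
        writeln(size_t.sizeof); // prints 4 on 32-bit targets, 8 on 64-bit
    }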

I don't have a perfect solution, but maybe builtin arrays could be limited to 2^^32-1 elements (or maybe 2^^31-1 to get rid of endless signed/unsigned conversions), so the normal type to be used is still "int". Ranges should adopt the type sizes of the underlying objects.
No, this is too limiting. If I have 64GB of memory (not out of the question), and I want to have a 5GB array, I think I should be allowed to. This is one of the main reasons to go to 64-bit in the first place.

Yes, that's the imperfect part of the proposal. An array of ints could still use up to 16 GB, though.

Unless you cast it to void[]. What exactly would happen there, a runtime error? Which would mean a runtime check for an implicit cast? I don't think it's really an option to make array length always be uint (or int).
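
To make the overflow concrete, here's a sketch (a hypothetical 32-bit length field is assumed, and arr is just a placeholder):

    // Suppose lengths were fixed at 32 bits and arr held the maximum
    // 2^^32-1 ints -- nearly 16 GB of data:
    int[] arr;          // imagine arr.length == 2^^32 - 1
    void[] bytes = arr; // implicit conversion: the new length would be
                        // arr.length * int.sizeof, about 2^^34, which
                        // doesn't fit in 32 bits. So: truncate silently,
                        // or add a runtime check to every such cast?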

I wouldn't have a problem with using signed words for length. Using more than 2GB for one array in 32-bit land would be so rare that having to jump through special hoops would be fine by me. Obviously for now, 2^^63-1 sized arrays give plenty of room for today's machines in 64-bit land.
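
The kind of hoops I mean (a sketch of the classic pitfall):

    // With an unsigned length, the naive backwards loop never
    // terminates: i is size_t, so after 0 it wraps to size_t.max
    // instead of going negative. A signed length would just stop.
    int[] arr = [1, 2, 3];
    for (auto i = arr.length - 1; i >= 0; --i)
    {
        // loops "forever"
    }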

What bothers me is that you have to deal with these "portability issues" from the very moment you store the length of an array elsewhere. Not a really big deal, and I don't think it will change, but it still feels a bit awkward.

Java defines everything to be the same regardless of architecture, and the result is that you just can't do certain things (like have a 5GB array; Java indexes arrays with int, so they're capped at 2^^31-1 elements). A system-level language should support the full range of architecture capabilities, so you necessarily have to deal with portability issues.

If you want a super-portable language that runs the same everywhere, use an interpreted/bytecode language like Java, .NET, or Python. D is for getting close to the metal.

I see size_t as a way to *mostly* make things portable. It is not perfect, and really cannot be. It's necessary to expose the architecture so you can adapt to it; there's no getting around that.

Really, it's rare that you have to use it anyway; most code should just use auto.
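
E.g. (a sketch; arr is any array):

    import std.stdio;

    void main()
    {
        auto arr = new int[](10);
        auto len = arr.length; // inferred as size_t on any target
        foreach (i, e; arr)    // i is likewise inferred as size_t
            writeln(i, ": ", e);
    }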

-Steve
