
I found this post from 2007 http://forum.dlang.org/post/fdspch$d3v$1...@digitalmars.com that refers to this post from 2006 http://www.digitalmars.com/d/archives/digitalmars/D/37038.html#N37071 -- and I still don't understand: why do static arrays have this size limit?

Any linker issues should be contained to the linker (i.e., use a different linker, or fix your linker). As for cross-device -- if C lets me use huge static arrays, why should D impose a limit? As for the executable failing to load -- we have a ~1GB .bss section and a ~100MB .rodata section. No issues there.

As for the "use dynamic arrays instead", this poses two problems:
1) Why? I know everything in compile-time, why force me to (A) allocate it separately in `shared static this()` and (B) introduce all sorts of runtime bound checks?

2) Many times I need memory contiguity, e.g., several big arrays inside a struct that gets dumped to disk or sent over the network. I can't use pointers there.
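
For reference, here's roughly what the suggested alternative looks like, and why it breaks the dump-to-disk case. A minimal sketch with illustrative names (BUF_SIZE, buf and Header are mine):

enum BUF_SIZE = 16 * 1024 * 1024;

__gshared ubyte[] buf;          // dynamic array: a (length, pointer) pair

shared static this()
{
    buf = new ubyte[BUF_SIZE];  // the separate runtime allocation from (A)
}

struct Header
{
    ulong magic;
    ubyte[] payload;   // 16 bytes of (length, pointer) on 64-bit, NOT the
                       // data itself: dumping this struct to disk writes
                       // a pointer that is meaningless outside the process
}

Indexing `buf[i]` also goes through a runtime bounds check in non-release builds, which is exactly the overhead point (B) complains about.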

And on top of it all, this limit *literally* makes no sense:

__gshared ubyte[16*1024*1024] x;  // fails to compile: exceeds the static array size limit

struct S {
    ubyte[10*1024*1024] a;
    ubyte[10*1024*1024] b;
}
__gshared S y;    // compiles, even though S is 20MB

I'm already working on a super-ugly, burn-with-fire mixin that generates *structs* of a given size to overcome this limit.
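
Something like this -- a minimal sketch of the idea, using a recursive template rather than a string mixin (names are mine; CHUNK just has to stay under the limit). It exploits exactly the loophole above: the limit is checked per static-array type, not against the size of the enclosing struct:

enum CHUNK = 10 * 1024 * 1024;  // any chunk size under the limit works

// Splits N bytes across nested structs so that no single static
// array type exceeds the limit; the fields are laid out
// back-to-back, so the storage stays contiguous.
struct BigArray(size_t N)
{
    static if (N <= CHUNK)
        ubyte[N] data;
    else
    {
        ubyte[CHUNK] data;
        BigArray!(N - CHUNK) rest;
    }

    // One contiguous view over the whole thing.
    ubyte[] slice() { return (cast(ubyte*) &this)[0 .. N]; }
}

__gshared BigArray!(64 * 1024 * 1024) big;  // 64MB of static storage; compiles

Then `big.slice()[i]` (or just `cast(ubyte*) &big`) treats it as one flat 64MB buffer.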


-tomer
