On Saturday, 1 August 2015 at 17:29:54 UTC, Adam D. Ruppe wrote:
On Saturday, 1 August 2015 at 17:22:40 UTC, NX wrote:
I wonder if the following are compiler bugs:
No, it is by design: the idea is to keep static arrays smallish
so null references will be caught by the processor. (An overly
large static array could allow indexing it through a null
pointer to potentially reach another object.)
The easiest workaround is to just dynamically allocate such
huge arrays:
byte[] arr = new byte[](1024*1024*16);
ReadProcessMemory(Proc, cast(void*)0xdeadbeef, arr.ptr, arr.length, null);
The arr.ptr and arr.length are the key arguments there.
Sorry, I can't see _the_ point in that. I understand it could
be a problem if it were a "global" array, but this scenario seems
completely wrong to me. I'm already going to dynamically
allocate it, and my actual problem is a lot more complex than what I
showed there; I'm not even allowed to do this:
struct stuff
{
    byte[1024*1024*16] arr; // Error: index 16777216 overflow for static array
}
//...
stuff* data = new stuff;
ReadProcessMemory(Proc, cast(void*)0xA970F4, data, stuff.sizeof, null);
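For reference, a minimal sketch of one possible workaround, assuming the struct does not have to mirror the target process's raw memory layout: the 16 MiB limit applies to the static-array *type* itself, so a heap-allocated slice of the same length sidesteps it (`Stuff` and `makeStuff` are hypothetical names for illustration).

```d
// Hedged sketch: replace the oversized fixed array with a dynamic slice.
struct Stuff
{
    byte[] arr; // dynamic slice instead of byte[1024*1024*16]
}

Stuff* makeStuff()
{
    auto data = new Stuff;
    data.arr = new byte[](1024 * 1024 * 16); // allocated on the GC heap
    return data;
}
```

Since `Stuff` now holds a slice (pointer + length) rather than the bytes themselves, you can no longer read into `data` in place; the buffer has to be passed explicitly, e.g. `ReadProcessMemory(Proc, cast(void*)0xA970F4, data.arr.ptr, data.arr.length, null);`.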
Here
(https://gist.github.com/NightmareX1337/6408287d7823c8a4ba20) is
the real issue, if anyone wants to see the real-world problem with
long lines of code.