Re: Commit size and Page fault's very large for simple program

2016-07-04 Thread thedeemon via Digitalmars-d-learn

On Monday, 4 July 2016 at 11:56:14 UTC, Rene Zwanenburg wrote:

On Monday, 4 July 2016 at 11:42:40 UTC, Rene Zwanenburg wrote:

...


I forgot to mention:

If you're on Windows, compilation defaults to 32 bit, and false 
pointers can be a problem with D's current GC in 32-bit 
applications. This isn't an issue for the sample application 
though, since you're not putting random numbers in the 
allocated arrays.


It is an issue here. It doesn't matter what numbers are in the 
arrays; what matters is whether some random values on the stack 
look like pointers into those arrays, which often happens with 
large arrays.
Essentially it means that in 32 bits we shouldn't GC-allocate 
anything larger than a few KB: the larger the allocation, the 
higher the chance it won't be collected, due to false pointers on 
the stack and in other data.
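A minimal sketch of the effect described above (the variable names 
are illustrative; whether the block is actually pinned depends on 
what the conservative scan happens to find):

```d
void main()
{
    // Allocate a large array on the GC heap.
    auto big = new int[1024 * 1024];

    // On 32 bit, any word the GC scans (stack, globals, other heap
    // data) whose bit pattern happens to fall inside big's address
    // range acts as a "false pointer": the conservative GC must treat
    // it as a real reference and cannot free the block.
    size_t looksLikeAPointer = cast(size_t) big.ptr;

    big = null; // no real references remain...
    // ...but a collection may still keep the megabytes alive, because
    // looksLikeAPointer's value points into the (former) array.
}
```

The larger the allocation, the larger the address range a random 
word can accidentally fall into, which is why big blocks are the 
ones that tend to leak.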





Re: Commit size and Page fault's very large for simple program

2016-07-04 Thread Rene Zwanenburg via Digitalmars-d-learn

On Monday, 4 July 2016 at 11:42:40 UTC, Rene Zwanenburg wrote:

...


I forgot to mention:

If you're on Windows, compilation defaults to 32 bit, and false 
pointers can be a problem with D's current GC in 32-bit 
applications. This isn't an issue for the sample application 
though, since you're not putting random numbers in the allocated 
arrays.


64 bit doesn't suffer from this. There's also a GSoC project 
underway aimed at improving the GC. I'm not sure what the exact 
goals are, but IIRC work is being done on making the GC precise, 
which would eliminate the false-pointer issue.
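One workaround, not from the thread but a common pattern while the 
GC scan is conservative: keep very large buffers off the GC heap 
entirely, e.g. via the C allocator, so false pointers can't pin 
them. A sketch (error handling omitted):

```d
import core.stdc.stdlib : malloc, free;

void main()
{
    // A buffer allocated with malloc is invisible to the GC's scan,
    // so false pointers cannot keep it alive; in exchange, we must
    // free it manually.
    enum n = 1024 * 1024;
    int* buf = cast(int*) malloc(n * int.sizeof);
    scope (exit) free(buf);

    int[] slice = buf[0 .. n]; // usable as a normal slice
    slice[0] = 42;
}
```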


Re: Commit size and Page fault's very large for simple program

2016-07-04 Thread Rene Zwanenburg via Digitalmars-d-learn

On Monday, 4 July 2016 at 01:57:19 UTC, Hiemlick Hiemlicker wrote:

version(Windows)
void main()
{
import std.stdio : writeln;
import core.stdc.stdio : getchar, EOF;
import std.random;

while (getchar() != EOF)
{
auto x = new int[std.random.uniform(100, 1000)];

writeln("");
bThread.Now();
}
}

more or less, ends up with a huge amount of page faults and a 
several-hundred-MB commit size (hold enter down a bit). I'm 
trying to understand this. Is that normal behavior for normal 
programs? (I haven't tried it with a similar C++ example though.)


The PFs are most likely due to default initialization, so you 
may not see those in a C++ equivalent (though the exact 
equivalent would initialize the array as well, in which case 
you'd get PFs too). If you've determined default initialization 
is causing performance problems in a hot piece of code, D 
provides facilities to disable it.
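For completeness, a sketch of one such facility: 
std.array.uninitializedArray skips the zero-fill (the function is 
standard library; the sizes are illustrative):

```d
import std.array : uninitializedArray;

void main()
{
    // Default behavior: new int[n] zero-initializes every element,
    // touching (and thus faulting in) each page of the allocation.
    auto a = new int[1000];

    // If profiling shows that cost matters, the allocation can be
    // left uninitialized; its contents are garbage until written.
    auto b = uninitializedArray!(int[])(1000);
    b[] = 0; // write before reading
}
```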


I realize the GC has to do some work and all that but the 
program only has a working set of a few MB yet a commit of 10 
times that.


Is commit size essentially "touched" memory that doesn't really 
mean much for overall free RAM? (Can other programs use it at 
some point?)


Strictly speaking there's no relation between commit size and 
RAM; from your application's POV there's only the virtual address 
space. Committed memory can be paged to disk if the OS determines 
your application isn't actively using certain pages.





We know the program is not using more than 10MB


It's an array of ints, so you'll have to multiply that by four ;)

of extra memory (since x is local)... so I'd only expect the 
footprint to be a max of around 15-20 MB, not 150 MB+ (depends 
on how fast and long you hit enter).


Keeping memory usage to an absolute minimum would mean the GC has 
to do a collection on every allocation. As a very coarse rule of 
thumb, expect a GC heap (not just the D GC) to be about 1.5 to 2 
times as large as strictly necessary; this reduces the number of 
collections. Since your example is doing nothing but hammering 
the GC, I'm not surprised the heap is a bit larger than that.
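If you want to see how small the heap can get, core.memory.GC 
lets you force a collection and hand free pools back to the OS. A 
sketch mimicking the allocation-heavy loop from the question:

```d
import core.memory : GC;

void main()
{
    // Hammer the GC the way the original example does.
    foreach (i; 0 .. 100_000)
    {
        auto x = new int[1000];
    }

    GC.collect();   // force a full collection
    GC.minimize();  // return free pools to the OS where possible
}
```

Doing this after every allocation would defeat the point of the 
larger heap, of course; it's mainly useful after a known 
allocation spike.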


Commit size and Page fault's very large for simple program

2016-07-03 Thread Hiemlick Hiemlicker via Digitalmars-d-learn

version(Windows)
void main()
{
import std.stdio : writeln;
import core.stdc.stdio : getchar, EOF;
import std.random;

while (getchar() != EOF)
{
auto x = new int[std.random.uniform(100, 1000)];

writeln("");
bThread.Now();
}
}

more or less, ends up with a huge amount of page faults and a 
several-hundred-MB commit size (hold enter down a bit). I'm trying 
to understand this. Is that normal behavior for normal 
programs? (I haven't tried it with a similar C++ example though.)


I realize the GC has to do some work and all that but the program 
only has a working set of a few MB yet a commit of 10 times that.


Is commit size essentially "touched" memory that doesn't really 
mean much for overall free RAM? (Can other programs use it at some 
point?)



We know the program is not using more than 10 MB of extra 
memory (since x is local)... so I'd only expect the footprint to 
be a max of around 15-20 MB, not 150 MB+ (depends on how fast and 
long you hit enter).