Re: GC pathological case behaviour

2016-06-29 Thread Martin Nowak via Digitalmars-d
On Tuesday, 28 June 2016 at 21:20:01 UTC, Ola Fosheim Grøstad 
wrote:
Not necessarily, if the 10K allocations result in system 
calls, but try to remove the 1ms delay, set setMaxMailboxSize 
to a million and set it to ignore (i.e. if the box is full you 
bypass sending).


Yes, you're overflowing your MessageBox; nothing the GC can do 
about that.

BTW, try --DRT-gcopt=profile:2 or so.
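For context, --DRT-gcopt is a druntime option parsed by the program itself at startup, not a compiler flag. A sketch of the invocation (the binary name ./app is just a placeholder):

```shell
# Print GC profiling statistics (collections, pause times) when the program exits.
./app --DRT-gcopt=profile:2

# Several GC options can be combined inside one quoted argument:
./app "--DRT-gcopt=profile:1 minPoolSize:16"
```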


Re: GC pathological case behaviour

2016-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d
On Tuesday, 28 June 2016 at 21:19:18 UTC, Steven Schveighoffer 
wrote:

It appears so:
https://github.com/dlang/phobos/blob/master/std/concurrency.d#L2164

m_maxMsgs is default 0.


I guess it makes sense, as you could get deadlocks for specific 
restrictions, although I personally would side with forcing the 
programmer to specify the buffering policies rather than 
providing defaults.




Re: GC pathological case behaviour

2016-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 28 June 2016 at 21:01:20 UTC, John Colvin wrote:

On Tuesday, 28 June 2016 at 20:12:29 UTC, luminousone wrote:
Is puts high enough latency that the main thread can fill 
the message queue faster than start can exhaust it? If you put 
a call to sleep for 1ms in the main loop does it have the same 
result?


It appears that adding a 1ms sleep causes the leak to 
disappear. Still seems like bad behaviour from the GC though, 
perhaps indicative of a bug.


Not necessarily, if the 10K allocations result in system calls, 
but try to remove the 1ms delay, set setMaxMailboxSize to a 
million and set it to ignore (i.e. if the box is full you bypass 
sending).
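A minimal sketch of that suggestion, assuming the std.concurrency API: setMaxMailboxSize with OnCrowding.ignore makes send silently drop messages once the mailbox holds the given number of pending messages, so the producer can never pile up an unbounded queue:

```d
import std.concurrency;

void worker()
{
    bool done;
    while (!done)
        receive(
            (int msg) { /* consume the message */ },
            (OwnerTerminated e) { done = true; }); // exit when main is done
}

void main()
{
    auto tid = spawn(&worker);

    // Cap the mailbox at one million messages; when it is full,
    // further sends are ignored instead of queued without bound.
    setMaxMailboxSize(tid, 1_000_000, OnCrowding.ignore);

    foreach (i; 0 .. 10)
        send(tid, i);
}
```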


Re: GC pathological case behaviour

2016-06-28 Thread Steven Schveighoffer via Digitalmars-d

On 6/28/16 4:35 PM, Ola Fosheim Grøstad wrote:

On Tuesday, 28 June 2016 at 20:24:33 UTC, Steven Schveighoffer wrote:

On 6/28/16 4:12 PM, luminousone wrote:


Is puts high enough latency that the main thread can fill the message
queue faster than start can exhaust it? If you put a call to sleep for
1ms in the main loop does it have the same result?


I think this is it. Your main loop is doing very little, basically
just allocating memory. The child thread is putting data to the
console, which is much more expensive.


Is the message queue unlimited by default?


It appears so:
https://github.com/dlang/phobos/blob/master/std/concurrency.d#L2164

m_maxMsgs is default 0.

-Steve


Re: GC pathological case behaviour

2016-06-28 Thread John Colvin via Digitalmars-d

On Tuesday, 28 June 2016 at 20:12:29 UTC, luminousone wrote:
Is puts high enough latency that the main thread can fill the 
message queue faster than start can exhaust it? If you put a 
call to sleep for 1ms in the main loop does it have the same 
result?


It appears that adding a 1ms sleep causes the leak to disappear. 
Still seems like bad behaviour from the GC though, perhaps 
indicative of a bug.


Re: GC pathological case behaviour

2016-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d
On Tuesday, 28 June 2016 at 20:24:33 UTC, Steven Schveighoffer 
wrote:

On 6/28/16 4:12 PM, luminousone wrote:


Is puts high enough latency that the main thread can fill the 
message queue faster than start can exhaust it? If you put a 
call to sleep for 1ms in the main loop does it have the same 
result?


I think this is it. Your main loop is doing very little, 
basically just allocating memory. The child thread is putting 
data to the console, which is much more expensive.


Is the message queue unlimited by default? Use this then:

https://dlang.org/phobos/std_concurrency.html#.setMaxMailboxSize

?




Re: GC pathological case behaviour

2016-06-28 Thread Steven Schveighoffer via Digitalmars-d

On 6/28/16 4:12 PM, luminousone wrote:


Is puts high enough latency that the main thread can fill the message
queue faster than start can exhaust it? If you put a call to sleep for
1ms in the main loop does it have the same result?


I think this is it. Your main loop is doing very little, basically just 
allocating memory. The child thread is putting data to the console, 
which is much more expensive.


-Steve


Re: GC pathological case behaviour

2016-06-28 Thread Steven Schveighoffer via Digitalmars-d

On 6/28/16 3:53 PM, Ola Fosheim Grøstad wrote:

On Tuesday, 28 June 2016 at 19:03:14 UTC, John Colvin wrote:

char[2] s = '\0';
s[0] = cast(char)msg;
puts(s.ptr);// remove this => no memory leak


But wait, is the string zero terminated?



I have to laugh at this quoted post :)

-Steve


Re: GC pathological case behaviour

2016-06-28 Thread luminousone via Digitalmars-d

On Tuesday, 28 June 2016 at 19:03:14 UTC, John Colvin wrote:
On my machine (OS X), this program eats up memory with no end 
in sight


import std.concurrency;
import core.stdc.stdio;

void start()
{
    while(true)
    {
        receive(
            (int msg)
            {
                char[2] s = '\0';
                s[0] = cast(char)msg;
                puts(s.ptr); // remove this => no memory leak
            }
        );
    }
}

void main()
{
    auto hw_tid = spawn(&start);

    while(true)
    {
        send(hw_tid, 64);
        auto b = new ubyte[](1_000); // 10_000 => no memory leak
    }
}

This is very odd, no? I'm not sure if it's a bug, but it sure 
is surprising. Why should "puts" cause a GC leak (writeln is 
the same)? What's so special about small allocations that 
allows all my memory to get filled up?


Is puts high enough latency that the main thread can fill the 
message queue faster than start can exhaust it? If you put a call 
to sleep for 1ms in the main loop does it have the same result?
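The 1ms pacing suggested here would look something like this (Thread.sleep from core.thread with a Duration from core.time); the loop is shortened and the send/allocate body elided for brevity:

```d
import core.thread : Thread;
import core.time : msecs;

void main()
{
    foreach (i; 0 .. 100)
    {
        // ... send(hw_tid, 64) and allocate, as in the original program ...
        Thread.sleep(1.msecs); // let the receiving thread drain its mailbox
    }
}
```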


Re: GC pathological case behaviour

2016-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d
On Tuesday, 28 June 2016 at 19:53:27 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 28 June 2016 at 19:03:14 UTC, John Colvin wrote:

char[2] s = '\0';
s[0] = cast(char)msg;
puts(s.ptr);// remove this => no memory leak


But wait, is the string zero terminated?


Forget it, it should be.


Re: GC pathological case behaviour

2016-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 28 June 2016 at 19:03:14 UTC, John Colvin wrote:

char[2] s = '\0';
s[0] = cast(char)msg;
puts(s.ptr);// remove this => no memory leak


But wait, is the string zero terminated?



Re: GC pathological case behaviour

2016-06-28 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 28 June 2016 at 19:03:14 UTC, John Colvin wrote:
This is very odd, no? I'm not sure if it's a bug, but it sure 
is surprising. Why should "puts" cause a GC leak (writeln is 
the same)? What's so special about small allocations that 
allows all my memory to get filled up?


Try to put locks around the allocations to see if it is a 
race-condition.




Re: GC pathological case behaviour

2016-06-28 Thread deadalnix via Digitalmars-d

On Tuesday, 28 June 2016 at 19:03:14 UTC, John Colvin wrote:
On my machine (OS X), this program eats up memory with no end 
in sight


import std.concurrency;
import core.stdc.stdio;

void start()
{
    while(true)
    {
        receive(
            (int msg)
            {
                char[2] s = '\0';
                s[0] = cast(char)msg;
                puts(s.ptr); // remove this => no memory leak
            }
        );
    }
}

void main()
{
    auto hw_tid = spawn(&start);

    while(true)
    {
        send(hw_tid, 64);
        auto b = new ubyte[](1_000); // 10_000 => no memory leak
    }
}

This is very odd, no? I'm not sure if it's a bug, but it sure 
is surprising. Why should "puts" cause a GC leak (writeln is 
the same)? What's so special about small allocations that 
allows all my memory to get filled up?


My bet is that you are creating fragmentation in the small-size 
allocator. 10 000 is hitting the large-size one, which doesn't 
seem to have the same problem.


Now that would explain extra memory consumption, but I'm not sure 
why this wouldn't allow things to be freed at the end anyway.
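One way to probe the fragmentation hypothesis is to compare the heap the GC is actively using against what its pools merely hold on to. The sketch below assumes core.memory.GC.stats (usedSize/freeSize), which only appeared in druntime releases after this thread, so treat it as an illustration rather than something the posters could have run:

```d
import core.memory : GC;
import std.stdio : writefln;

void main()
{
    foreach (i; 0 .. 100_000)
    {
        auto b = new ubyte[](1_000); // small allocations, as in the test case
    }

    GC.collect();
    const s = GC.stats;
    // A large freeSize alongside a small usedSize would point at pools the
    // GC retains (possibly fragmented) rather than memory that is truly live.
    writefln("used: %s bytes, free in pools: %s bytes", s.usedSize, s.freeSize);
}
```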




GC pathological case behaviour

2016-06-28 Thread John Colvin via Digitalmars-d
On my machine (OS X), this program eats up memory with no end in 
sight


import std.concurrency;
import core.stdc.stdio;

void start()
{
    while(true)
    {
        receive(
            (int msg)
            {
                char[2] s = '\0';
                s[0] = cast(char)msg;
                puts(s.ptr); // remove this => no memory leak
            }
        );
    }
}

void main()
{
    auto hw_tid = spawn(&start);

    while(true)
    {
        send(hw_tid, 64);
        auto b = new ubyte[](1_000); // 10_000 => no memory leak
    }
}

This is very odd, no? I'm not sure if it's a bug, but it sure is 
surprising. Why should "puts" cause a GC leak (writeln is the 
same)? What's so special about small allocations that allows all 
my memory to get filled up?