Maxime's micro allocation benchmark much faster ?

2015-03-31 Thread Laeeth Isharc via Digitalmars-d-learn
I was curious to see if new DMD had changed speed on Maxime 
Chevalier-Boisvert's allocation benchmark here:


http://pointersgonewild.com/2014/10/26/circumventing-the-d-garbage-collector/

I haven't had time to look at the Phobos test suite to know if 
this was one of those that were included, but the difference 
seems to be striking.  I am using two machines in my office - 
both old x64 boxes running Arch Linux (8 GB RAM only).  Same 
manufacturer and similar models, so they should be the same spec 
CPU-wise.  I haven't had time to install and compare different 
versions of dmd on the same machine, so FWIW:



1mm objects
---
dmd 2.07 release: 0.56 seconds
dmd 2.067-devel-639bcaa: 0.88 seconds


10mm objects
---
dmd 2.07 release: between 4.44 and 6.57 seconds
dmd 2.067-devel-639bcaa: 90 seconds


In case I made a typo in code:

import std.conv;

class Node
{
Node next;
size_t a,b,c,d;
}

void main(string[] args)
{
auto numNodes=to!size_t(args[1]);

Node head=null;

for(size_t i=0;i&lt;numNodes;++i)
{
auto n=new Node;
n.next=head;
head=n;
}
}
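
As an aside, since the thread links Maxime's post on 
circumventing the GC: below is a minimal sketch of taking 
automatic collections out of the picture for this loop, using 
core.memory.GC.disable.  This is my reconstruction of the 
benchmark plus the disable call, not code from the original 
post:

```d
import std.conv : to;
import core.memory : GC;

class Node
{
    Node next;
    size_t a, b, c, d;
}

void main(string[] args)
{
    auto numNodes = to!size_t(args[1]);

    // Suppress automatic collections for the duration of the loop.
    // (The runtime may still collect in an out-of-memory situation.)
    GC.disable();
    scope(exit) GC.enable();

    Node head = null;
    for (size_t i = 0; i < numNodes; ++i)
    {
        auto n = new Node;
        n.next = head;  // chain nodes so all stay reachable
        head = n;
    }
}
```

Compiled with e.g. `dmd -O -release bench.d` and run as 
`./bench 1000000`, this isolates raw allocation cost from 
collection cost.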

Re: Maxime's micro allocation benchmark much faster ?

2015-03-31 Thread Laeeth Isharc via Digitalmars-d-learn
oops - scratch that.  may have made a mistake with versions and 
be comparing 2.067 with some unstable dev version.


On Tuesday, 31 March 2015 at 11:46:41 UTC, Laeeth Isharc wrote:
I was curious to see if new DMD had changed speed on Maxime 
Chevalier-Boisvert's allocation benchmark here:


http://pointersgonewild.com/2014/10/26/circumventing-the-d-garbage-collector/

I haven't had time to look at the Phobos test suite to know if 
this was one of those that were included, but the difference 
seems to be striking.  I am using two machines in my office - 
both old x64 boxes running Arch Linux (8 GB RAM only).  Same 
manufacturer and similar models, so they should be the same spec 
CPU-wise.  I haven't had time to install and compare different 
versions of dmd on the same machine, so FWIW:



1mm objects
---
dmd 2.07 release: 0.56 seconds
dmd 2.067-devel-639bcaa: 0.88 seconds


10mm objects
---
dmd 2.07 release: between 4.44 and 6.57 seconds
dmd 2.067-devel-639bcaa: 90 seconds


In case I made a typo in code:

import std.conv;

class Node
{
Node next;
size_t a,b,c,d;
}

void main(string[] args)
{
auto numNodes=to!size_t(args[1]);

Node head=null;

for(size_t i=0;i&lt;numNodes;++i)
{
auto n=new Node;
n.next=head;
head=n;
}
}



Re: Maxime's micro allocation benchmark much faster ?

2015-03-31 Thread Laeeth Isharc via Digitalmars-d-learn
Trying on a different beefier machine with 2.066 and 2.067 
release versions installed:


1mm allocations:
2.066: 0.844s
2.067: 0.19s

10mm allocations

2.066: 1m 17.2 s
2.067: 0m  1.15s

So the earlier numbers were in the right ballpark, and 
allocation on this micro-benchmark is much faster.


Re: Maxime's micro allocation benchmark much faster ?

2015-03-31 Thread John Colvin via Digitalmars-d-learn

On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
Trying on a different beefier machine with 2.066 and 2.067 
release versions installed:


1mm allocations:
2.066: 0.844s
2.067: 0.19s

10mm allocations

2.066: 1m 17.2 s
2.067: 0m  1.15s

So the earlier numbers were in the right ballpark, and 
allocation on this micro-benchmark is much faster.


That's nice news. The recent GC improvements are clearly working.


Re: Maxime's micro allocation benchmark much faster ?

2015-03-31 Thread weaselcat via Digitalmars-d-learn

On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
Trying on a different beefier machine with 2.066 and 2.067 
release versions installed:


1mm allocations:
2.066: 0.844s
2.067: 0.19s

10mm allocations

2.066: 1m 17.2 s
2.067: 0m  1.15s

So the earlier numbers were in the right ballpark, and 
allocation on this micro-benchmark is much faster.


Wow! props to the people that worked on the GC.


Re: Maxime's micro allocation benchmark much faster ?

2015-03-31 Thread Laeeth Isharc via Digitalmars-d-learn

On Tuesday, 31 March 2015 at 22:00:39 UTC, weaselcat wrote:

On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
Trying on a different beefier machine with 2.066 and 2.067 
release versions installed:


1mm allocations:
2.066: 0.844s
2.067: 0.19s

10mm allocations

2.066: 1m 17.2 s
2.067: 0m  1.15s

So the earlier numbers were in the right ballpark, and 
allocation on this micro-benchmark is much faster.


Wow! props to the people that worked on the GC.


Yes - should have said that.  And I do appreciate very much all 
the hard work that has been done on this (and also by the GDC and 
LDC maintainers who have to keep up with each release).


Don't trust these numbers till someone else has verified them, as 
I am not certain I haven't messed up transliterating the code, or 
doing something else stoopid.  And of course it's a very specific 
micro benchmark, but it's one that matters beyond the direct 
implications given the discussion over it when her post came out. 
 I would be really curious to see if Maxime finds the overall 
performance of her JIT improved.


Re: Maxime's micro allocation benchmark much faster ?

2015-04-01 Thread FG via Digitalmars-d-learn

On 2015-03-31 at 22:56, Laeeth Isharc wrote:

1mm allocations
2.066: 0.844s
2.067: 0.19s


That is great news, thanks!

OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M.  :P


Re: Maxime's micro allocation benchmark much faster ?

2015-04-01 Thread John Colvin via Digitalmars-d-learn

On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:

On 2015-03-31 at 22:56, Laeeth Isharc wrote:

1mm allocations
2.066: 0.844s
2.067: 0.19s


That is great news, thanks!

OT: it's a nasty financier's habit to write 1M and 1MM instead 
of 1k and 1M.  :P


Yeah, what's with that? I've never seen it before.


Re: Maxime's micro allocation benchmark much faster ?

2015-04-01 Thread Laeeth Isharc via Digitalmars-d-learn

On Wednesday, 1 April 2015 at 10:35:05 UTC, John Colvin wrote:

On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:

On 2015-03-31 at 22:56, Laeeth Isharc wrote:

1mm allocations
2.066: 0.844s
2.067: 0.19s


That is great news, thanks!

OT: it's a nasty financier's habit to write 1M and 1MM instead 
of 1k and 1M.  :P


Yeah, what's with that? I've never seen it before.


One cannot entirely escape déformation professionnelle ;)  
(People mostly write 1,000 but 1mm, although 1m is pedantically 
correct for 1,000.)  Better internalize the conventions if one 
doesn't want to avoid expensive mistakes under pressure.


Re: Maxime's micro allocation benchmark much faster ?

2015-04-01 Thread John Colvin via Digitalmars-d-learn

On Wednesday, 1 April 2015 at 14:22:57 UTC, Laeeth Isharc wrote:

On Wednesday, 1 April 2015 at 10:35:05 UTC, John Colvin wrote:

On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:

On 2015-03-31 at 22:56, Laeeth Isharc wrote:

1mm allocations
2.066: 0.844s
2.067: 0.19s


That is great news, thanks!

OT: it's a nasty financier's habit to write 1M and 1MM 
instead of 1k and 1M.  :P


Yeah, what's with that? I've never seen it before.


One cannot entirely escape déformation professionnelle ;)  
(People mostly write 1,000 but 1mm, although 1m is pedantically 
correct for 1,000.)  Better internalize the conventions if one 
doesn't want to avoid expensive mistakes under pressure.


well yes, who doesn't always not want to never avoid mistakes? ;)

Anyway, as I'm sure you know, the rest of the world assumes 
SI/metric, or binary in special cases (damn those JEDEC guys!): 
http://en.wikipedia.org/wiki/Template:Bit_and_byte_prefixes


Re: Maxime's micro allocation benchmark much faster ?

2015-04-01 Thread FG via Digitalmars-d-learn

On 2015-04-01 at 16:52, John Colvin wrote:

On Wednesday, 1 April 2015 at 14:22:57 UTC, Laeeth Isharc wrote:

On Wednesday, 1 April 2015 at 10:35:05 UTC, John Colvin wrote:

On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:

On 2015-03-31 at 22:56, Laeeth Isharc wrote:

1mm allocations
2.066: 0.844s
2.067: 0.19s


That is great news, thanks!

OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M.  :P


Yeah, what's with that? I've never seen it before.


One cannot entirely escape déformation professionnelle ;) (People mostly write 
1,000 but 1mm, although 1m is pedantically correct for 1,000.)  Better 
internalize the conventions if one doesn't want to avoid expensive mistakes 
under pressure.


well yes, who doesn't always not want to never avoid mistakes? ;)

Anyway, as I'm sure you know, the rest of the world assumes SI/metric, or 
binary in special cases (damn those JEDEC guys!): 
http://en.wikipedia.org/wiki/Template:Bit_and_byte_prefixes


Yeah, there's that, but at least 1024 and 1000 are still in the same ballpark. 
Bankers are used to the convention and won't mistake M for a million (or you'd 
read it in every newspaper if they did), but it does create havoc when you see 
that convention being used outside of the financial context, or worse, being 
mixed with SI.