Re: Descent with compile-time debug for testing

2009-06-06 Thread Trass3r

Ary Borenszweig wrote:

That's ddbg working wrong, not Descent. :-P



Ah, damn, so no way this gets fixed.
Debugging D is a pain :(


D2's feature set?

2009-06-06 Thread Kristian Kilpi


I think I shouldn't post this because I could very well start one of those  
mega-threads... :D


Of course, only Walter & Co know what D2 will include when it's finally  
'released'. Concurrency stuff has been developed lately. But something  
more? Or is that it? What do you think?



I haven't thought about it too much, but let's say something... ;)

1) Scoped members. For example:

class Foo
{
 scope Bar bar;  // gets destructed with Foo object
}


2) There have been threads about the import/package system. So maybe it  
could be improved as well. (I'm not gonna discuss it more here.) But yeah,  
I guess this would require too much work to be done within D2's schedule.



Oh, macros would also be nice, but maybe they will/should wait for D3... :)


Pop quiz [memory usage]

2009-06-06 Thread Vladimir Panteleev

// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else   import std.stdio;

struct S
{
ubyte[40_000] data;
}

void main()
{
S[] a;
a ~= S();

	// QUESTION: How much memory will this program consume upon reaching this point?

version(Tango) Cin.copyln();
else   readln();
}


Re: Pop quiz [memory usage]

2009-06-06 Thread bearophile
Vladimir Panteleev:
   // QUESTION: How much memory will this program consume upon reaching this point?

There's some serious bug here. It allocates 40+ MB.

The following code even crashes LDC during compilation; I'll ask in the LDC 
channel:

struct S { ubyte[40_000] d; }
void main() {
S[] a;
a ~= S();
}

Bye and thank you for the little test,
bearophile


Re: D2's feature set?

2009-06-06 Thread Christopher Wright

Kristian Kilpi wrote:


I think I shouldn't post this because I could very well start one of 
those mega-threads... :D


Of course, only Walter & Co know what D2 will include when it's finally 
'released'. Concurrency stuff has been developed lately. But something 
more? Or is that it? What do you think?



I haven't thought about it too much, but let's say something... ;)

1) Scoped members. For example:

class Foo
{
 scope Bar bar;  // gets destructed with Foo object
}


You need to do escape analysis and whole program analysis to determine 
whether there are aliases to a scope member. Failing that, it's pretty 
easy to introduce bugs that are difficult to find.


It would be fine as long as you always assign with a NewExpression.


Re: Pop quiz [memory usage]

2009-06-06 Thread Vladimir Panteleev
On Sat, 06 Jun 2009 16:17:10 +0300, bearophile bearophileh...@lycos.com  
wrote:

There's some serious bug here. It allocates 40+ MB.


Um, it should be much more than that. What are you running?

--
Best regards,
 Vladimir  mailto:thecybersha...@gmail.com


Re: Pop quiz [memory usage]

2009-06-06 Thread bearophile
Vladimir Panteleev:
Um, it should be much more than that. What are you running?

I am running DMD v1.042 with Phobos on WinXP, the compilation needs only tenths 
of a second, and at runtime it allocated 48.3 MB.

DMD v2.030 with Phobos needs 49.028 MB and the compilation is a bit slower, 
0.35 seconds.

Bye,
bearophile


Re: Pop quiz [memory usage]

2009-06-06 Thread Vladimir Panteleev
On Sat, 06 Jun 2009 17:11:45 +0300, bearophile bearophileh...@lycos.com  
wrote:



Vladimir Panteleev:

Um, it should be much more than that. What are you running?


I am running DMD v1.042 with Phobos on WinXP, the compilation needs only  
tenths of a second, and at runtime it allocated 48.3 MB.


DMD v2.030 with Phobos needs 49.028 MB and the compilation is a bit  
slower, 0.35 seconds.


Ah, that's just the working set. Have a look at the virtual memory usage.

--
Best regards,
 Vladimir  mailto:thecybersha...@gmail.com


Re: Pop quiz [memory usage]

2009-06-06 Thread Jarrett Billingsley
On Sat, Jun 6, 2009 at 10:19 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:
 On Sat, 06 Jun 2009 17:11:45 +0300, bearophile bearophileh...@lycos.com
 wrote:

 Vladimir Panteleev:

 Um, it should be much more than that. What are you running?

 I am running DMD v1.042 with Phobos on WinXP, the compilation needs only
 tenths of a second, and at runtime it allocated 48.3 MB.

 DMD v2.030 with Phobos needs 49.028 MB and the compilation is a bit
 slower, 0.35 seconds.

 Ah, that's just the working set. Have a look at the virtual memory usage.

 --
 Best regards,
  Vladimir                          mailto:thecybersha...@gmail.com


1.5GB.  Wow.


Re: Pop quiz [memory usage]

2009-06-06 Thread bearophile
Vladimir Panteleev:
 Ah, that's just the working set. Have a look at the virtual memory usage.

How can I tell them apart? How can I measure that 'virtual memory usage' on 
Windows?

Bye,
bearophile


Re: Pop quiz [memory usage]

2009-06-06 Thread BCS

Hello bearophile,


Vladimir Panteleev:


Ah, that's just the working set. Have a look at the virtual memory
usage.


How can I tell them apart? How can I measure that 'virtual memory
usage' on Windows?

Bye,
bearophile


bring up task manager




Re: Pop quiz [memory usage]

2009-06-06 Thread bearophile
BCS:
 bring up task manager

That's what I have done to take those numbers. Then I have used another small 
program that has given me similar numbers...

Bye,
bearophile


Re: D2's feature set?

2009-06-06 Thread Kristian Kilpi
On Sat, 06 Jun 2009 16:35:15 +0300, Christopher Wright  
dhase...@gmail.com wrote:



Kristian Kilpi wrote:
 I think I shouldn't post this because I could very well start one of  
those mega-threads... :D
 Of course, only Walter & Co know what D2 will include when it's  
finally 'released'. Concurrency stuff has been developed lately. But  
something more? Or is that it? What do you think?

  I haven't thought about it too much, but let's say something... ;)
 1) Scoped members. For example:
 class Foo
{
 scope Bar bar;  // gets destructed with Foo object
}


You need to do escape analysis and whole program analysis to determine  
whether there are aliases to a scope member. Failing that, it's pretty  
easy to introduce bugs that are difficult to find.


It would be fine as long as you always assign with a NewExpression.


Yep. Well, I would be happy with it even if you could only assign newly  
created objects to a scoped member. (And you could always remove this  
restriction when escape analysis becomes available (if ever).)
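
A minimal sketch of the restriction being discussed; the scope-member syntax here is hypothetical (it is the feature being proposed, not something D accepts today), and Bar/Foo are placeholder names:

class Bar {}

class Foo
{
    scope Bar bar;          // hypothetical: storage owned by the Foo instance

    this()
    {
        bar = new Bar();    // would be allowed: a fresh, uniquely owned object
        // bar = someBar;   // would be rejected: an outside reference could
        //                  // alias the embedded storage or outlive it
    }
}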


silently accepting parentclassName.func can be bug-prone

2009-06-06 Thread davidl

import std.stdio;

class t
{
    void func() { writefln("t"); }
    void delegate() getdelegate() { return &func; }
}

class v : t
{
    void func() { writefln("v"); }
}

class r : v
{
    void func() { writefln("r"); }
    void delegate() getdelegate() { return &t.func; }
    // someone writing this may expect the compiler to return the delegate of
    // func in class t
}

void main()
{
    r p = new r;
    p.getdelegate()();
}

Currently it prints "r".

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/


Re: Pop quiz [memory usage]

2009-06-06 Thread Jarrett Billingsley
On Sat, Jun 6, 2009 at 10:53 AM, bearophilebearophileh...@lycos.com wrote:
 BCS:
 bring up task manager

 That's what I have done to take those numbers. Then I have used another small 
 program that has given me similar numbers...

You can add extra columns to the task manager to see all sorts of information.

That, or use procexp.


Re: Pop quiz [memory usage]

2009-06-06 Thread davidl
On Sat, 06 Jun 2009 22:53:27 +0800, bearophile bearophileh...@lycos.com  
wrote:



BCS:

bring up task manager


That's what I have done to take those numbers. Then I have used another  
small program that has given me similar numbers...


Bye,
bearophile


You need to bring up the column of virtual memory usage

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/


Re: Pop quiz [memory usage]

2009-06-06 Thread Jarrett Billingsley
On Sat, Jun 6, 2009 at 8:03 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:
 // Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
 version(Tango) import tango.io.Console;
 else           import std.stdio;

 struct S
 {
        ubyte[40_000] data;
 }

 void main()
 {
        S[] a;
        a ~= S();

        // QUESTION: How much memory will this program consume upon reaching
 this point?
        version(Tango) Cin.copyln();
        else           readln();
 }


There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 20_000 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.

It seems this line:

long mult = 100 + (1000L * size) / log2plus1(newcap);

is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.


Re: D2's feature set?

2009-06-06 Thread Steven Schveighoffer
On Sat, 06 Jun 2009 09:35:15 -0400, Christopher Wright  
dhase...@gmail.com wrote:



Kristian Kilpi wrote:
 I think I shouldn't post this because I could very well start one of  
those mega-threads... :D
 Of course, only Walter & Co know what D2 will include when it's  
finally 'released'. Concurrency stuff has been developed lately. But  
something more? Or is that it? What do you think?

  I haven't thought about it too much, but let's say something... ;)
 1) Scoped members. For example:
 class Foo
{
 scope Bar bar;  // gets destructed with Foo object
}


You need to do escape analysis and whole program analysis to determine  
whether there are aliases to a scope member. Failing that, it's pretty  
easy to introduce bugs that are difficult to find.


Not really.  A scope member would be placed in the same memory block as  
the owner class, so an alias to the member would be the same as an alias  
to the owner class, because the same memory block would be referenced.  
Neither would be collected while either is still referenced.


It's the equivalent of this in C++:

class Bar
{
}
class Foo
{
  Bar bar;
}

The only issue left to decide is how the member is initialized.  There  
must be some rules, like if bar has no default constructor, it must be  
new'd in the constructor.  And it can't be rebound.


-Steve
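
For comparison, a small sketch of what D offers today without any new feature (BarValue, Bar, and Foo are placeholder names): a struct member is already embedded in the owner's memory block, while a class member is only a reference.

struct BarValue
{
    int x;
}

class Bar
{
    int x;
}

class Foo
{
    BarValue bv;   // value member: storage lives inside Foo's memory block
    Bar      b;    // class member: just a reference, collected independently
}

void main()
{
    auto f = new Foo;   // bv is constructed in place; b starts as null
}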


Re: D2's feature set?

2009-06-06 Thread Jarrett Billingsley
On Sat, Jun 6, 2009 at 11:17 AM, Steven
Schveighofferschvei...@yahoo.com wrote:

 You need to do escape analysis and whole program analysis to determine
 whether there are aliases to a scope member. Failing that, it's pretty easy
 to introduce bugs that are difficult to find.

 Not really.  A scope member would be placed in the same memory block as the
 owner class.  So an alias to the member would be the same as an alias to the
 owner class because the same memory block would be referenced.  Both
 wouldn't be collected until neither is referenced.

Regardless of how/where it's allocated, Chris is still right, unless
'scope' becomes a type constructor instead of a storage attribute.
Consider:

class A
{
void fork() { writeln("fork!"); }
}

class B
{
scope A a;
this() { a = new A(); }
}

A a;

void foo()
{
scope b = new B();
a = b.a; // waaait
}

void main()
{
foo();
a.fork(); // AGHL
}

If it were impossible to assign a "scope A" into an "A", this wouldn't
be a problem.  Or, full escape analysis, either way.


Re: Pop quiz [memory usage]

2009-06-06 Thread bearophile
davidl:
 You need to bring up the column of virtual memory usage

Ah, thank you for the suggestion/tip.
Now it shows it's using 1033 MB of virtual memory!

Bye,
bearophile


Re: D2's feature set?

2009-06-06 Thread Steve Schveighoffer
On Sat, 06 Jun 2009 11:26:37 -0400, Jarrett Billingsley wrote:

 On Sat, Jun 6, 2009 at 11:17 AM, Steven
 Schveighofferschvei...@yahoo.com wrote:

 You need to do escape analysis and whole program analysis to determine
 whether there are aliases to a scope member. Failing that, it's pretty
 easy to introduce bugs that are difficult to find.

 Not really.  A scope member would be placed in the same memory block as
 the owner class.  So an alias to the member would be the same as an
 alias to the owner class because the same memory block would be
 referenced.  Both wouldn't be collected until neither is referenced.
 
 Regardless of how/where it's allocated, Chris is still right, unless
 'scope' becomes a type constructor instead of a storage attribute.
 Consider:
 
 class A
 {
 void fork() { writeln("fork!"); }
 }
 
 class B
 {
 scope A a;
 this() { a = new A(); }
 }
 
 A a;
 
 void foo()
 {
 scope b = new B();
 a = b.a; // waaait
 }
 
 void main()
 {
 foo();
 a.fork(); // AGHL
 }
 
 If it were impossible to assign a "scope A" into an "A", this wouldn't be
 a problem.  Or, full escape analysis, either way.

You are looking at a different problem.  This problem already exists:

A a;
void foo()
{
  scope al = new A();
  a = al;
}

I'm talking about scope members, that is, members whose storage is 
contained within the owner.  This allows destructors to access members 
before the members' memory is destroyed, because the member's and the 
owner's memory are destroyed at the same time.

There would have to be some special treatment for a scope member by the 
compiler -- the member should not be rebindable, and I think you would 
not need to store a separate reference to the member, because its 
reference address is statically decided.  In other words, yes, scope 
members would be treated differently, but I don't think scope needs to be 
a type constructor for that.

-Steve


Re: Pop quiz [memory usage]

2009-06-06 Thread Vladimir Panteleev
On Sat, 06 Jun 2009 18:12:58 +0300, Jarrett Billingsley  
jarrett.billings...@gmail.com wrote:



On Sat, Jun 6, 2009 at 8:03 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:

// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else   import std.stdio;

struct S
{
   ubyte[40_000] data;
}

void main()
{
   S[] a;
   a ~= S();

   // QUESTION: How much memory will this program consume upon reaching this point?
   version(Tango) Cin.copyln();
   else   readln();
}



There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 2 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.

It seems this line:

long mult = 100 + (1000L * size) / log2plus1(newcap);

is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.


Bull's-eye. I think Mr. Dave Fladebo deserves a math lesson or something,  
and Mr. Walter Bright should review patches that go into the language's  
runtime more carefully. :P


Can we discuss a suitable replacement? I suggested in #d that we look at  
how other platforms do it. For example, Java's Vector just doubles its  
capacity by default (  
http://java.sun.com/javase/6/docs/api/java/util/Vector.html ).


--
Best regards,
 Vladimir  mailto:thecybersha...@gmail.com


Re: Pop quiz [memory usage]

2009-06-06 Thread Fawzi Mohamed
On 2009-06-06 17:12:58 +0200, Jarrett Billingsley 
jarrett.billings...@gmail.com said:



On Sat, Jun 6, 2009 at 8:03 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:

// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else           import std.stdio;

struct S
{
       ubyte[40_000] data;
}

void main()
{
       S[] a;
       a ~= S();

       // QUESTION: How much memory will this program consume upon reaching this point?
       version(Tango) Cin.copyln();
       else           readln();
}



There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 2 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.

It seems this line:

long mult = 100 + (1000L * size) / log2plus1(newcap);

is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.


Indeed, we were discussing this on IRC.
Actually it is interesting to note that the continuous function written 
as a comment in newCapacity,

double mult2 = 1.0 + (size / log10(pow(newcap * 2.0,2.0)));
does *not* have that behaviour.
It seems to me that it is generally much better to work on the total 
memory rather than on the number of elements.

I would use something like
   long mult = 100 + 200L / log2plus2(newcap)
and round up
   newext = cast(size_t)((newcap * mult) / 100);
   newext += size-(newext % size);

This is what I am adding in tango.

One could add something that further favors large sizes, but I don't see the 
rationale behind that; I would rather expect that one typically 
concatenates strings (size = 1..4), so there is more to gain by making 
that faster.
I can also understand if someone wants to use only the number of 
elements (rather than the total size), but what was implemented wasn't 
that either.


If someone has some insight, or good benchmarks to choose a better 
function it would be welcome.


Fawzi
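
A rough illustration of the rule sketched above (not the actual Tango patch); log2plus2 is assumed to mean floor(log2(c)) + 2, by analogy with druntime's log2plus1.

import std.stdio;

size_t log2plus2(size_t c)
{
    size_t i = 2;
    while (c > 1) { c >>= 1; i++; }   // i == floor(log2(c)) + 2 for c > 0
    return i;
}

size_t grow(size_t newlength, size_t size)
{
    size_t newcap = newlength * size;
    long mult = 100 + 200L / log2plus2(newcap);
    size_t newext = cast(size_t)((newcap * mult) / 100);
    newext += size - (newext % size);              // round up to a whole element
    return newext;
}

void main()
{
    // One 20_000-byte element: asks for 40_000 bytes (two elements), not ~266 MB.
    writefln("%s", grow(1, 20_000));
}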



Re: Pop quiz [memory usage]

2009-06-06 Thread Fawzi Mohamed

Indeed we were discussing this in the IRC,
Actually it is interesting to note that the continuos function written 
as comment in newCapacity

double mult2 = 1.0 + (size / log10(pow(newcap * 2.0,2.0)));
does *not* have that behaviour.
It seems to me that it is generally much better to work on the total 
memory rather than on the number of elements.

I would use something like
long mult = 100 + 200L / log2plus2(newcap)
and round up
newext = cast(size_t)((newcap * mult) / 100);
newext += size-(newext % size);

This is what I am adding in tango.


Thinking more about this: given that one starts at pagesize ~4096 
(log2 = 12), this might be too conservative; I will test a little bit more...


One could add something that further favors large sizes, but I miss the 
rationale behind that, I would rather expect that one typically 
concatenates strings (size=1..4) and so there is more to gain by making 
that faster.
I can also understand if someone wants to use only the number of 
elements (rather than the total size), but what was implemented wasn't 
that either.


maybe the number of elements is really the correct thing to do...



If someone has some insight, or good benchmarks to choose a better 
function it would be welcome.


Fawzi





Re: Pop quiz [memory usage]

2009-06-06 Thread Sean Kelly

Jarrett Billingsley wrote:

On Sat, Jun 6, 2009 at 8:03 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:

// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else   import std.stdio;

struct S
{
   ubyte[40_000] data;
}

void main()
{
   S[] a;
   a ~= S();

   // QUESTION: How much memory will this program consume upon reaching
this point?
   version(Tango) Cin.copyln();
   else   readln();
}



There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 2 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.

It seems this line:

long mult = 100 + (1000L * size) / log2plus1(newcap);

is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.


I've been debating for a while whether newCapacity should exist at all. 
 The GC already tends to leave free space at the end of blocks, so why 
should the runtime second-guess it?  Particularly in D2, where append 
operations on arrays are probably less common as a result of string 
being invariant.  I'll fix newCapacity and run some tests; depending on 
their results I may disable it entirely.


Re: Pop quiz [memory usage]

2009-06-06 Thread Andrei Alexandrescu

Sean Kelly wrote:

Jarrett Billingsley wrote:

On Sat, Jun 6, 2009 at 8:03 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:

// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else   import std.stdio;

struct S
{
   ubyte[40_000] data;
}

void main()
{
   S[] a;
   a ~= S();

   // QUESTION: How much memory will this program consume upon reaching this point?
   version(Tango) Cin.copyln();
   else   readln();
}



There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 2 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.

It seems this line:

long mult = 100 + (1000L * size) / log2plus1(newcap);

is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.


I've been debating for a while whether newCapacity should exist at all. 
 The GC already tends to leave free space at the end of blocks, so why 
should the runtime second-guess it?  Particularly in D2 where append 
operations on arrays are probably less common as a result of string 
being invariant.  I'll fix newCapacity and run some tests, depending on 
their result I may disable it entirely.


I think that's a great point.

Andrei


Re: Pop quiz [memory usage]

2009-06-06 Thread bearophile
Sean Kelly:
 Particularly in D2 where append 
 operations on arrays are probably less common as a result of string 
 being invariant.

They aren't very common, maybe because they are currently dead slow. Appending 
to an immutable string is a common operation. But I guess Array appenders will 
get more common...

Bye,
bearophile


Re: D2's feature set?

2009-06-06 Thread Brad Roberts
Steven Schveighoffer wrote:
 On Sat, 06 Jun 2009 09:35:15 -0400, Christopher Wright
 dhase...@gmail.com wrote:
 
 Kristian Kilpi wrote:
  I think I shouldn't post this because I could very well start one of
 those mega-threads... :D
  Of course, only Walter & Co know what D2 will include when it's
 finally 'released'. Concurrency stuff has been developed lately. But
 something more? Or is that it? What do you think?
   I haven't thought about it too much, but let's say something... ;)
  1) Scoped members. For example:
  class Foo
 {
  scope Bar bar;  // gets destructed with Foo object
 }

 You need to do escape analysis and whole program analysis to determine
 whether there are aliases to a scope member. Failing that, it's pretty
 easy to introduce bugs that are difficult to find.
 
 Not really.  A scope member would be placed in the same memory block as
 the owner class.  So an alias to the member would be the same as an
 alias to the owner class because the same memory block would be
 referenced.  Both wouldn't be collected until neither is referenced.
 
 It's the equivalent of this in C++:
 
 class Bar
 {
 }
 class Foo
 {
   Bar bar;
 }
 
 The only issue left to decide is how the member is initialized.  There
 must be some rules, like if bar has no default constructor, it must be
 new'd in the constructor.  And it can't be rebound.
 
 -Steve

Can't work like that since it's subject to the object slicing problem.  Class
size can't be known in advance since subclasses can add an arbitrary amount of
additional storage.  If you restrict it to structs or other value
types, then it could work but would be awfully restrictive.

Later,
Brad


Re: Pop quiz [memory usage]

2009-06-06 Thread Vladimir Panteleev

On Sat, 06 Jun 2009 20:19:45 +0300, Sean Kelly s...@invisibleduck.org
wrote:


Jarrett Billingsley wrote:

On Sat, Jun 6, 2009 at 8:03 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:

// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else   import std.stdio;

struct S
{
   ubyte[40_000] data;
}

void main()
{
   S[] a;
   a ~= S();

   // QUESTION: How much memory will this program consume upon reaching this point?
   version(Tango) Cin.copyln();
   else   readln();
}


 There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 2 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.
 It seems this line:
 long mult = 100 + (1000L * size) / log2plus1(newcap);
 is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.


I've been debating for a while whether newCapacity should exist at all.  
  The GC already tends to leave free space at the end of blocks, so why  
should the runtime second-guess it?


Do you mean at the end of pages, or pools?

Pages are only 4K in size; causing a reallocation every 4K doesn't make
sense performance-wise.

If you mean that DMD will allocate memory pools larger than the immediate
memory requirement, that's true, however D's GC is greedy in that it
always allocates memory at the lowest address possible. This means that
when all previous pools fill up, the next object will cap the new array,
and the array will no longer be able to extend.

Allow me to demonstrate:

---
import std.gc;
import std.stdio;

struct S1
{
ubyte[4095] data;
}

struct S2
{
ubyte[4095*4] data;
}

void main()
{
S1[] a;
S2*[] b;
for (int i = 0; i < 1024; i++)
{
a ~= S1();
b ~= new S2;
writefln("%d", i);
}
}
---

This program allocates 4-page blocks and appends 1-page blocks to an array
in a loop.

Here's a Diamond[1] analysis screenshot from before and after the first
two garbage collects:
http://dump.thecybershadow.net/b4af5badf32c954b7a18b848b7d9da64/1.png

The P+++ segments are S2 instances. The segments with the longer tails of
+es are copies of S1[].

As you can see, as soon as the previous pools fill up, the pool containing
the S1[] gets rapidly filled, because it's just a loop of reallocating
S1[] every time an S2[] is allocated at its end.

So, I don't think that your idea is feasible with the current GC
implementation. I wonder how much things would improve if objects
were divided into growing and non-growing, with the GC
preferring to allocate new objects in free space not directly following
growing objects.

[1] http://www.dsource.org/projects/diamond

--
Best regards,
   Vladimir  mailto:thecybersha...@gmail.com


Re: Pop quiz [memory usage]

2009-06-06 Thread Sean Kelly

bearophile wrote:

Sean Kelly:
Particularly in D2 where append 
operations on arrays are probably less common as a result of string 
being invariant.


They aren't much common maybe because they are currently dead-slow. Appending 
to an immutable string is a common operation. But I guess Array appenders will 
get more common...


Yes, but appending to an immutable string is never performed in place, 
which is the only time the extra space reserved by newCapacity matters. 
 I suspect the memory wasted by newCapacity is more of an issue than 
any time savings it provides.


Re: Pop quiz [memory usage]

2009-06-06 Thread Sean Kelly

Vladimir Panteleev wrote:

On Sat, 06 Jun 2009 20:19:45 +0300, Sean Kelly s...@invisibleduck.org
wrote:


Jarrett Billingsley wrote:

On Sat, Jun 6, 2009 at 8:03 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:

// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else   import std.stdio;

struct S
{
   ubyte[40_000] data;
}

void main()
{
   S[] a;
   a ~= S();

   // QUESTION: How much memory will this program consume upon reaching this point?
   version(Tango) Cin.copyln();
   else   readln();
}


 There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 2 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.
 It seems this line:
 long mult = 100 + (1000L * size) / log2plus1(newcap);
 is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.


I've been debating for a while whether newCapacity should exist at 
all.   The GC already tends to leave free space at the end of blocks, 
so why should the runtime second-guess it?


Do you mean at the end of pages, or pools?


Blocks.  Blocks less than 4k in the current allocator are sized in 
powers of two, so array appends already get the "double in size" 
behavior of Java even without newCapacity.  newCapacity just has the 
potential to throw an array into the next larger block early, thus 
potentially wasting more space if the array will never be appended to. 
I think newCapacity isn't supposed to reserve extra space for blocks 
larger than 4k, but it's been a while since I've looked at it.
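
A rough sketch of the bucket rounding described above; the 16-byte minimum bin and the page handling are assumptions for illustration, not the actual GC code.

import std.stdio;

size_t bucketSize(size_t n)
{
    const size_t pageSize = 4096;
    size_t b = 16;                                  // assumed smallest bin
    while (b < n && b < pageSize)
        b *= 2;
    if (b >= n)
        return b;                                   // power-of-two bin below a page
    return (n + pageSize - 1) & ~(pageSize - 1);    // whole pages at 4k and above
}

void main()
{
    writefln("%s %s %s", bucketSize(100), bucketSize(3000), bucketSize(20_000));
    // prints 128 4096 20480
}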



If you mean that DMD will allocate memory pools larger than the immediate
memory requirement, that's true, however D's GC is greedy in that it
always allocates memory at the lowest address possible. This means that
when all previous pools fill up, the next object will cap the new array,
and the array will no longer be able to extend.


This is a quirk of the current GC.  A new GC may not behave the same way 
(in fact I can guarantee that the one I'm working on is implemented 
differently).



So, I don't think that your idea is feasable with the current GC
implementation.


I'm not sure I understand.  The only idea I proposed was to get rid of 
newCapacity.



I wonder how much would things get improved if objects
would be divided between growing and non-growing, with the GC
prioritizing allocating new objects in free space not directly following
growing objects.


The user already knows best which arrays are going to grow though.  Is 
this really something the runtime should try to figure out?


Re: Pop quiz [memory usage]

2009-06-06 Thread Jarrett Billingsley
On Sat, Jun 6, 2009 at 1:42 PM, bearophilebearophileh...@lycos.com wrote:
 Sean Kelly:
 Particularly in D2 where append
 operations on arrays are probably less common as a result of string
 being invariant.

 They aren't much common maybe because they are currently dead-slow. Appending 
 to an immutable string is a common operation. But I guess Array appenders 
 will get more common...

Especially since D2 has one in the stdlib.  I always find myself
writing my own anyway, since I don't like depending on unspecified
heap behavior.
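
A usage sketch of the stdlib appender Jarrett mentions; the names shown (std.array.appender, put, .data) follow later Phobos 2 and may differ slightly in the 2.030-era version.

import std.array : appender;
import std.stdio;

void main()
{
    auto app = appender!(int[])();
    foreach (i; 0 .. 1_000)
        app.put(i);               // growth is handled inside the appender
    int[] result = app.data;      // take the final slice once, at the end
    writefln("%s", result.length);
}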


Re: Pop quiz [memory usage]

2009-06-06 Thread Vladimir Panteleev
On Sat, 06 Jun 2009 22:07:40 +0300, Sean Kelly s...@invisibleduck.org  
wrote:


Blocks.  Blocks less than 4k in the current allocator are sized in  
powers of two, so array appends already get the double in size  
behavior in Java even without newCapacity.  newCapacity just has the  
potential to throw an array into the next larger block early, thus  
potentially wasting more space if the array will never be appended to. I  
think newCapacity isn't supposed to reserve extra space for blocks  
larger than 4k, but it's been a while since I've looked at it.


Yes, but you do agree that the penalty of reallocating every time the  
array size goes over a 4kB boundary (in the worst case of heap  
fragmentation, like the one I demonstrated) is not acceptable?


This is a quirk of the current GC.  A new GC may not behave the same way  
(in fact I can guarantee that the one I'm working on is implemented  
differently).


Could you tell us more about that? I was toying with a new GC idea myself  
(since last year) but haven't gotten around to finishing it yet.


I'm not sure I understand.  The only idea I proposed was to get rid of  
newCapacity.


I understood it as you wanting to change the code so it would behave as if  
newCapacity always returned newlength * size.


The user already knows best which arrays are going to grow though.  Is  
this really something the runtime should try to figure out?


No, but then you'll need to change the language specification to allow the  
user to specify growable arrays. I guess the proper solution here is to  
force the user to use specialized classes with their own newCapacity  
implementations.


--
Best regards,
 Vladimir  mailto:thecybersha...@gmail.com


Re: Pop quiz [memory usage]

2009-06-06 Thread Fawzi Mohamed

On 2009-06-06 21:07:40 +0200, Sean Kelly s...@invisibleduck.org said:


Vladimir Panteleev wrote:

On Sat, 06 Jun 2009 20:19:45 +0300, Sean Kelly s...@invisibleduck.org
wrote:

[...]


I've been debating for a while whether newCapacity should exist at 
all.   The GC already tends to leave free space at the end of blocks, 
so why should the runtime second-guess it?


Do you mean at the end of pages, or pools?


Blocks.  Blocks less than 4k in the current allocator are sized in 
powers of two, so array appends already get the double in size 
behavior in Java even without newCapacity.  newCapacity just has the 
potential to throw an array into the next larger block early, thus 
potentially wasting more space if the array will never be appended to. 
I think newCapacity isn't supposed to reserve extra space for blocks 
larger than 4k, but it's been a while since I've looked at it.


At the moment the behavior is exactly the opposite (leaving the small 
arrays to the GC handling you just sketched): only arrays larger than a 
page get the special treatment. I think the idea is that appending 
to large arrays can be potentially very expensive, so those get the 
special treatment.
But as I said previously, I don't fully understand the rationale, 
especially behind the idea of having more reserved space if the 
elements are larger. I could understand having the reserved space just 
refer to the elements, like this:


   long mult = 100 + 200L / log2plus2(newlength)
instead of
   long mult = 100 + 1000L / log2plus2(newcap)

or something in between

these give at most 1.5 or 1.71, so a waste of at most 100% or 71% and 
decreases as 1/log toward 1.02.


To really test this one should benchmark some typical applications.

The current choice is neither of these; I don't know exactly why the 
large elements get such a high weight. I think it is an error, but 
maybe there was an idea (at least for smallish sizes) that I don't see.


Fawzi




If you mean that DMD will allocate memory pools larger than the immediate
memory requirement, that's true, however D's GC is greedy in that it
always allocates memory at the lowest address possible. This means that
when all previous pools fill up, the next object will cap the new array,
and the array will no longer be able to extend.


This is a quirk of the current GC.  A new GC may not behave the same 
way (in fact I can guarantee that the one I'm working on is implemented 
differently).



So, I don't think that your idea is feasable with the current GC
implementation.


I'm not sure I understand.  The only idea I proposed was to get rid 
of newCapacity.



I wonder how much would things get improved if objects
would be divided between growing and non-growing, with the GC
prioritizing allocating new objects in free space not directly following
growing objects.


The user already knows best which arrays are going to grow though.  Is 
this really something the runtime should try to figure out?





Re: D2's feature set?

2009-06-06 Thread Steve Schveighoffer
On Sat, 06 Jun 2009 10:57:08 -0700, Brad Roberts wrote:

 Steven Schveighoffer wrote:
 On Sat, 06 Jun 2009 09:35:15 -0400, Christopher Wright
 dhase...@gmail.com wrote:
 
 Kristian Kilpi wrote:
  I think I shouldn't post this because I could very well start one of
 those mega-threads... :D
  Of course, only Walter & Co know what D2 will include when it's
 finally 'released'. Concurrency stuff has been developed lately. But
 something more? Or is that it? What do you think?
   I haven't thought about it too much, but let's say something... ;)
  1) Scoped members. For example:
  class Foo
 {
  scope Bar bar;  // gets destructed with Foo object
 }

 You need to do escape analysis and whole program analysis to determine
 whether there are aliases to a scope member. Failing that, it's pretty
 easy to introduce bugs that are difficult to find.
 
 Not really.  A scope member would be placed in the same memory block as
 the owner class.  So an alias to the member would be the same as an
 alias to the owner class because the same memory block would be
 referenced.  Both wouldn't be collected until neither is referenced.
 
 It's the equivalent of this in C++:
 
 class Bar
 {
 }
 class Foo
 {
   Bar bar;
 }
 
 The only issue left to decide is how the member is initialized.  There
 must be some rules, like if bar has no default constructor, it must be
 new'd in the constructor.  And it can't be rebound.
 
 -Steve
 
 Can't work like that since it's subject to the object slicing problem. 
 Class size can't be known in advance since subclasses can add an
 arbitrary amount of additional storage requirements.  If you restrict it
 to structs or other value types, then it could work but would be awfully
 restrictive.

Structs are already part of the class data.  I don't believe something 
like this will work

class A : B {}
class C
{
   scope A a;
   this() {a = new B;}
}

It would not work any differently than how scope classes are allocated on 
the stack.  Just instead of the stack, you are allocating the scope class 
inside the owner's class data (well, if the owner class was 
allocated on the stack, it would be on the stack).  You would not have 
any slicing issues.  Either:

1. The class contains not only the member class data but also a reference 
to it (and in this case, the reference could be re-bound).
2. The class can not reassign the member once it is assigned (i.e. head-
const).  When it passes the class member to a function, the compiler 
would pass a reference to the contained class data.  This is what I was 
expecting.

The benefit would be that you can control the destruction order of the 
members in the owner's destructor.

-Steve


Re: Pop quiz [memory usage]

2009-06-06 Thread Lionello Lunesu

Vladimir Panteleev wrote:
On Sat, 06 Jun 2009 18:12:58 +0300, Jarrett Billingsley 
jarrett.billings...@gmail.com wrote:



On Sat, Jun 6, 2009 at 8:03 AM, Vladimir
Panteleevthecybersha...@gmail.com wrote:

// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else   import std.stdio;

struct S
{
   ubyte[40_000] data;
}

void main()
{
   S[] a;
   a ~= S();

   // QUESTION: How much memory will this program consume upon reaching this point?
   version(Tango) Cin.copyln();
   else   readln();
}



There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 2 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.

It seems this line:

long mult = 100 + (1000L * size) / log2plus1(newcap);

is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.


Bulls-eye. I think mr. Dave Fladebo deserves a math lesson or something, 
and mr. Walter Bright should review patches that go into the language's 
runtime more carefully. :P


Can we discuss a suitable replacement? I suggested in #d that we look at 
how other platforms do it. For example, Java's Vector just doubles its 
capacity by default ( 
http://java.sun.com/javase/6/docs/api/java/util/Vector.html ).




In my own array classes I'm using Python's: size += max(size >> 3, 8);
Using this, you won't end up wasting a lot of memory. Preallocating is 
always preferred anyway.


L.
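
For comparison, a tiny sketch of the two growth policies mentioned in the thread -- doubling (as in Java's Vector) and the Python-style size += max(size >> 3, 8). Purely illustrative, not a proposed patch.

import std.stdio;

size_t growDoubling(size_t cap)
{
    return cap == 0 ? 8 : cap * 2;        // double the capacity each time
}

size_t growPythonStyle(size_t cap)
{
    size_t inc = cap >> 3;                // grow by ~12.5%, but at least 8
    return cap + (inc > 8 ? inc : 8);
}

void main()
{
    size_t a, b;
    foreach (i; 0 .. 10)
    {
        a = growDoubling(a);
        b = growPythonStyle(b);
    }
    // After ten appends the doubling policy has reserved far more than the
    // Python-style policy; the latter wastes at most ~12.5% of the array.
    writefln("%s %s", a, b);
}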


Re: Pop quiz [memory usage]

2009-06-06 Thread Sean Kelly

Fawzi Mohamed wrote:

On 2009-06-06 21:07:40 +0200, Sean Kelly s...@invisibleduck.org said:


Vladimir Panteleev wrote:

On Sat, 06 Jun 2009 20:19:45 +0300, Sean Kelly s...@invisibleduck.org
wrote:

[...]


I've been debating for a while whether newCapacity should exist at 
all.   The GC already tends to leave free space at the end of 
blocks, so why should the runtime second-guess it?


Do you mean at the end of pages, or pools?


Blocks.  Blocks less than 4k in the current allocator are sized in 
powers of two, so array appends already get the double in size 
behavior in Java even without newCapacity.  newCapacity just has the 
potential to throw an array into the next larger block early, thus 
potentially wasting more space if the array will never be appended to. 
I think newCapacity isn't supposed to reserve extra space for blocks 
larger than 4k, but it's been a while since I've looked at it.


at the moment the behavior is exactly the opposite (leaving the small 
array to the GC handling you just sketched)), only array larger than a 
page have the special treatment, I think that the idea is that appending 
to large arrays can be potentially very expensive, so those get the 
special treatment.


Hm.  I still think we'd be better off letting the user handle this.  If 
they're going to be appending and performance is important then they'll 
use an ArrayAppender anyway.  I'd rather not have extra space tacked 
onto the end of every array I create just in case I decide to append 
to it.


Re: Pop quiz [memory usage]

2009-06-06 Thread Steve Schveighoffer
On Sat, 06 Jun 2009 12:03:03 -0700, Sean Kelly wrote:

 bearophile wrote:
 Sean Kelly:
 Particularly in D2 where append
 operations on arrays are probably less common as a result of string
 being invariant.
 
 They aren't much common maybe because they are currently dead-slow.
 Appending to an immutable string is a common operation. But I guess
 Array appenders will get more common...
 
 Yes but appending to an immutable string is never performed in place,
 which is the only time the extra space reserved by newCapacity matters.
   I suspect the memory wasted by newCapacity is more of an issue than
 any time savings it provides.

What gave you that idea?

void main()
{
  auto str1 = "hello".idup;
  auto str2 = str1;
  str1 ~= " world";
  assert(str1.ptr == str2.ptr);
}

-Steve


Re: Pop quiz [memory usage]

2009-06-06 Thread Sean Kelly

Steve Schveighoffer wrote:

On Sat, 06 Jun 2009 12:03:03 -0700, Sean Kelly wrote:


bearophile wrote:

Sean Kelly:

Particularly in D2 where append
operations on arrays are probably less common as a result of string
being invariant.

They aren't much common maybe because they are currently dead-slow.
Appending to an immutable string is a common operation. But I guess
Array appenders will get more common...

Yes but appending to an immutable string is never performed in place,
which is the only time the extra space reserved by newCapacity matters.
  I suspect the memory wasted by newCapacity is more of an issue than
any time savings it provides.


What gave you that idea?

void main()
{
  auto str1 = "hello".idup;
  auto str2 = str1;
  str1 ~= " world";
  assert(str1.ptr == str2.ptr);
}


auto str1 = "hello".idup;
auto str2 = str1, str3 = str1;
str2 ~= " world";
str3 ~= " garbage";

Doesn't seem terribly safe to me.


Re: Pop quiz [memory usage]

2009-06-06 Thread davidl
On Sun, 07 Jun 2009 12:59:39 +0800, Sean Kelly s...@invisibleduck.org  
wrote:



Steve Schveighoffer wrote:

On Sat, 06 Jun 2009 12:03:03 -0700, Sean Kelly wrote:


bearophile wrote:

Sean Kelly:

Particularly in D2 where append
operations on arrays are probably less common as a result of string
being invariant.

They aren't much common maybe because they are currently dead-slow.
Appending to an immutable string is a common operation. But I guess
Array appenders will get more common...

Yes but appending to an immutable string is never performed in place,
which is the only time the extra space reserved by newCapacity matters.
  I suspect the memory wasted by newCapacity is more of an issue than
any time savings it provides.

 What gave you that idea?
 void main()
{
  auto str1 = "hello".idup;
  auto str2 = str1;
  str1 ~= " world";
  assert(str1.ptr == str2.ptr);
}


auto str1 = "hello".idup;
auto str2 = str1, str3 = str1;
str2 ~= " world";
str3 ~= " garbage";

Doesn't seem terribly safe to me.


Oh, file a bug report! You found the bug!

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/


[Issue 3054] New: multithreading GC problem. And Stdio not multithreading safe

2009-06-06 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=3054

   Summary: multithreading GC problem. And Stdio not
multithreading safe
   Product: D
   Version: 2.028
  Platform: Other
OS/Version: Windows
Status: NEW
  Severity: critical
  Priority: P2
 Component: Phobos
AssignedTo: bugzi...@digitalmars.com
ReportedBy: dav...@126.com


import core.thread;
import std.stdio;

class threadid
{
    static int threadid;
    static int threadnum;
    int getId() { threadid++; return threadid; }
}

threadid TID;

void main()
{
    TID = new threadid;
    while (true)
    {
        try
        {
            synchronized(TID) if (TID.threadnum < 500)
            {
                auto stress = (new Thread(
                (){
                    int tid;
                    synchronized(TID) { tid = TID.getId(); }
                    scope(exit) synchronized(TID) { TID.threadnum--; }
                    synchronized(TID) { TID.threadnum++; }
                    //writefln("new thread:", tid, TID.threadnum);
                    void[] buffer;
                    try
                    {
                        buffer.length = 65536;
                    }
                    catch (Exception e)
                    {
                        writefln("thread:", tid);
                        writefln(e.msg);
                    }
                }
                ));
                stress.start();
            }
            //writefln("outside:", TID.threadnum);
        }
        catch (Exception e)
        {
            //writefln("error: ", e.msg);
        }
    }
}



[Issue 3029] Bug in array value mangling rule

2009-06-06 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=3029





--- Comment #2 from Shin Fujishiro rsi...@gmail.com  2009-06-06 09:07:02 PDT 
---
Another (possibly better) option is to fix the numeric literal mangling rule like
this:

Value:
    i Number    // positive numeric literal
    i N Number  // negative numeric literal

The prefix 'i' avoids the mangled-name collision. And this rule is consistent
with other literal mangling rules, which are prefixed by some character (e.g.
'e' for floating point literals).

Patch (expression.c):

 void IntegerExp::toMangleBuffer(OutBuffer *buf)
 {
 if ((sinteger_t)value < 0)
-buf->printf("N%jd", -value);
+buf->printf("iN%jd", -value);
 else
-buf->printf("%jd", value);
+buf->printf("i%jd", value);
 }




[Issue 3055] New: & operator doesn't get correct func to construct the delegate

2009-06-06 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=3055

   Summary: & operator doesn't get correct func to construct the
delegate
   Product: D
   Version: 2.028
  Platform: Other
OS/Version: Windows
Status: NEW
  Severity: normal
  Priority: P2
 Component: DMD
AssignedTo: bugzi...@digitalmars.com
ReportedBy: dav...@126.com


import std.stdio;

class t
{
    void func() { writefln("t"); }
    // this doesn't return the delegate of t.func
    void delegate() getdelegate() { return &t.func; }
}

class v : t
{
    void func() { writefln("v"); }
}

class r : v
{
    void func() { writefln("r"); }
    void delegate() getdelegate() { return t.getdelegate(); }
}

void main()
{
    r p = new r;
    p.getdelegate()();
}

h3 gave me the solution which can work correctly:

import std.stdio;

class t
{
    void func() { writefln("t"); }
    void delegate() getdelegate()
    {
        static void function() getfuncptr() { return &func; }
        auto res = &func;
        res.funcptr = getfuncptr;
        return res;
    }
}

class v : t
{
    void func() { writefln("v"); }
}

class r : v
{
    void func() { writefln("r"); }
    void delegate() getdelegate() { return t.getdelegate(); }
}

void main()
{
    r p = new r;
    p.getdelegate()();
}
