Re: LLVM 2.6 Release!

2009-10-28 Thread Nick B

bearophile wrote:

Andrei Alexandrescu:

[snip]
 You can see an example of this in the missing videos/PDFs of the 
last conference. They were not allowed to publish them, because Apple is 
sometimes even more corporate than Microsoft:

http://llvm.org/devmtg/2009-10/

Bye,
bearophile


Bearophile

Thanks for the link. By the way, there is quite an interesting talk from
David Greene of Cray on using LLVM, titled "LLVM on 180k Cores".

cheers
Nick B.


Re: The Thermopylae excerpt of TDPL available online

2009-10-28 Thread Andrei Alexandrescu

dsimcha wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

It's a rough rough draft, but one for the full chapter on arrays,
associative arrays, and strings.
http://erdani.com/d/thermopylae.pdf
Any feedback is welcome. Thanks!
Andrei


Given that new is all over the place, does this mean we're not getting rid of
new before D2 goes gold?


I guess we need to compromise.

Andrei


Re: The Thermopylae excerpt of TDPL available online

2009-10-28 Thread dsimcha
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
> It's a rough rough draft, but one for the full chapter on arrays,
> associative arrays, and strings.
> http://erdani.com/d/thermopylae.pdf
> Any feedback is welcome. Thanks!
> Andrei

Given that new is all over the place, does this mean we're not getting rid of
new before D2 goes gold?


The Thermopylae excerpt of TDPL available online

2009-10-28 Thread Andrei Alexandrescu
It's a rough rough draft, but one for the full chapter on arrays, 
associative arrays, and strings.


http://erdani.com/d/thermopylae.pdf

Any feedback is welcome. Thanks!


Andrei


Re: associative arrays: iteration is finally here

2009-10-28 Thread Andrei Alexandrescu

Denis Koroskin wrote:
On Thu, 29 Oct 2009 03:08:34 +0300, Andrei Alexandrescu 
 wrote:



Denis Koroskin wrote:
On Wed, 28 Oct 2009 23:18:08 +0300, Andrei Alexandrescu 
 wrote:



I'd also like you to add a few things to the AA interface.
 First, opIn should not return a pointer to Value, but a pointer to 
a pair of Key and Value, if possible (i.e. if this change won't 
sacrifice performance).


I'm coy about adding that because it forces the implementation to 
hold keys and values next to each other. I think that was a minor 
mistake of STL - there's too much exposure of layout details.


 It doesn't have to be the case: key and value are both properties 
(i.e. methods), and they don't have to be located next to each other.


I see. So you want a pointer to an elaborate type featuring a key and 
a value.


Second, the AA.remove method should accept the result of an opIn 
operation to avoid an additional lookup for removal:

 if (auto value = key in aa) {
aa.remove(key); // an unnecessary lookup
}


I'll make aa.remove(key) always work and return a bool that tells 
you whether there was a mapping or not.



 Err... How does it solve the double lookup problem?


Your test looks something up and then removes it.


Andrei


Well, my extended test case looks something up, manipulates the found 
value, and then possibly removes it.


Ok, I understand your points, thanks for explaining.

Andrei


Re: associative arrays: iteration is finally here

2009-10-28 Thread Denis Koroskin
On Thu, 29 Oct 2009 03:08:34 +0300, Andrei Alexandrescu  
 wrote:



Denis Koroskin wrote:
On Wed, 28 Oct 2009 23:18:08 +0300, Andrei Alexandrescu  
 wrote:



I'd also like you to add a few things to the AA interface.
 First, opIn should not return a pointer to Value, but a pointer to a  
pair of Key and Value, if possible (i.e. if this change won't  
sacrifice performance).


I'm coy about adding that because it forces the implementation to hold  
keys and values next to each other. I think that was a minor mistake  
of STL - there's too much exposure of layout details.


 It doesn't have to be the case: key and value are both properties  
(i.e. methods), and they don't have to be located next to each other.


I see. So you want a pointer to an elaborate type featuring a key and a  
value.


Second, the AA.remove method should accept the result of an opIn  
operation to avoid an additional lookup for removal:

 if (auto value = key in aa) {
aa.remove(key); // an unnecessary lookup
}


I'll make aa.remove(key) always work and return a bool that tells you  
whether there was a mapping or not.



 Err... How does it solve the double lookup problem?


Your test looks something up and then removes it.


Andrei


Well, my extended test case looks something up, manipulates the found  
value, and then possibly removes it.


Re: associative arrays: iteration is finally here

2009-10-28 Thread Andrei Alexandrescu

Denis Koroskin wrote:
On Wed, 28 Oct 2009 23:18:08 +0300, Andrei Alexandrescu 
 wrote:



I'd also like you to add a few things to the AA interface.
 First, opIn should not return a pointer to Value, but a pointer to a 
pair of Key and Value, if possible (i.e. if this change won't 
sacrifice performance).


I'm coy about adding that because it forces the implementation to hold 
keys and values next to each other. I think that was a minor mistake 
of STL - there's too much exposure of layout details.




It doesn't have to be the case: key and value are both properties (i.e. 
methods), and they don't have to be located next to each other.


I see. So you want a pointer to an elaborate type featuring a key and a 
value.


Second, the AA.remove method should accept the result of an opIn 
operation to avoid an additional lookup for removal:

 if (auto value = key in aa) {
aa.remove(key); // an unnecessary lookup
}


I'll make aa.remove(key) always work and return a bool that tells you 
whether there was a mapping or not.




Err... How does it solve the double lookup problem?


Your test looks something up and then removes it.


Andrei


Re: associative arrays: iteration is finally here

2009-10-28 Thread Andrei Alexandrescu

Lars T. Kyllingstad wrote:

Andrei Alexandrescu wrote:

dsimcha wrote:
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s 
article

Walter has magically converted his work on T[new] into work on making
associative arrays true templates defined in druntime and not 
considered

very special by the compiler.
This is very exciting because it opens up or simplifies a number of
possibilities. One is that of implementing true iteration. I actually
managed to implement last night something that allows you to do:
int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);
Two other iterations are possible: by key and by value (in those cases
iter.front just returns a key or a value).
One question is, what names should these bear? I am thinking of making
opSlice() a universal method of getting the "all" iterator, a default
that every container must implement.
For AAs, there would be "iterate keys" and "iterate values" properties
or functions. What should they be called?
Thanks,
Andrei


Awesome, this definitely improves the interface, but how about the 
implementation? The current implementation, while fast for reading, is 
unbelievably slow for adding elements, requires a heap allocation 
(read: a global lock) on *every* insertion, and generates an insane 
amount of false pointers. Even if I succeed in making heap scanning 
(mostly) precise, it's not clear if the current AA implementation could 
easily be made to benefit, since it isn't template based. It uses RTTI 
internally instead, and the types it's operating on aren't known to the 
implementation at compile time, so I wouldn't be able to use templates 
to generate the bitmask at compile time. The structs it uses internally 
would therefore have to be scanned conservatively.


I'm afraid that efficiency is a matter I need to defer to the 
community for now. Right now, I am trying to get TDPL done. Having or 
not having range-style iteration influences the material. Making that 
efficient is a matter that would not influence the material (as long 
as there is a strong belief that that's doable).


Unrelated: one thing that we need to change about AAs is the inability 
to get a true reference to the stored element. aa[k] returns an 
rvalue, and a[k] = v is done in a manner akin to opIndexAssign. But a 
serious AA should have a method of reaching the actual storage for a 
value, I think.


Isn't that what the in operator does?

   T[U] aa;
   U key = something;
   T* p = key in aa;

-Lars


Correct, sorry for the oversight.

Andrei


Re: Disallow catch without parameter ("LastCatch")

2009-10-28 Thread Christopher Wright

Denis Koroskin wrote:
The OutOfMemory exception is supposed to be thrown via a call to 
onOutOfMemoryError(), which throws OutOfMemoryError.classinfo.init (i.e. 
a global immutable instance of an Error).


That's clever. I like it.


Re: associative arrays: iteration is finally here

2009-10-28 Thread Lars T. Kyllingstad

Andrei Alexandrescu wrote:

dsimcha wrote:
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s 
article

Walter has magically converted his work on T[new] into work on making
associative arrays true templates defined in druntime and not considered
very special by the compiler.
This is very exciting because it opens up or simplifies a number of
possibilities. One is that of implementing true iteration. I actually
managed to implement last night something that allows you to do:
int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);
Two other iterations are possible: by key and by value (in those cases
iter.front just returns a key or a value).
One question is, what names should these bear? I am thinking of making
opSlice() a universal method of getting the "all" iterator, a default
that every container must implement.
For AAs, there would be "iterate keys" and "iterate values" properties
or functions. What should they be called?
Thanks,
Andrei


Awesome, this definitely improves the interface, but how about the 
implementation? The current implementation, while fast for reading, is 
unbelievably slow for adding elements, requires a heap allocation 
(read: a global lock) on *every* insertion, and generates an insane 
amount of false pointers. Even if I succeed in making heap scanning 
(mostly) precise, it's not clear if the current AA implementation could 
easily be made to benefit, since it isn't template based. It uses RTTI 
internally instead, and the types it's operating on aren't known to the 
implementation at compile time, so I wouldn't be able to use templates 
to generate the bitmask at compile time. The structs it uses internally 
would therefore have to be scanned conservatively.


I'm afraid that efficiency is a matter I need to defer to the community 
for now. Right now, I am trying to get TDPL done. Having or not having 
range-style iteration influences the material. Making that efficient is 
a matter that would not influence the material (as long as there is a 
strong belief that that's doable).


Unrelated: one thing that we need to change about AAs is the inability 
to get a true reference to the stored element. aa[k] returns an rvalue, 
and a[k] = v is done in a manner akin to opIndexAssign. But a serious AA 
should have a method of reaching the actual storage for a value, I think.


Isn't that what the in operator does?

   T[U] aa;
   U key = something;
   T* p = key in aa;

-Lars


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Lars T. Kyllingstad wrote:

Pelle Månsson wrote:

Lars T. Kyllingstad wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:

Pelle Månsson wrote:
Also, foreach with a single variable should default to keys, in my 
opinion.


That is debatable as it would make the same code do different 
things for e.g. vectors and sparse vectors.



Andrei


Debatable indeed, but I find myself using either just the keys or 
the keys and values together, rarely just the values. Maybe that's 
just me.



I've used iteration over values more often than iteration over keys.

Besides, I think consistency is important. Since the default for an 
ordinary array is to iterate over the values, it should be the same 
for associative arrays.


-Lars
I don't understand this; when do you want the values without the keys? 
If you do, shouldn't you be using a regular array?


Here's an example:

   class SomeObject { ... }
   void doStuffWith(SomeObject s) { ... }
   void doOtherStuffWith(SomeObject s) { ... }

   // Make a collection of objects indexed by ID strings.
   SomeObject[string] myObjects;
   ...

   // First I just want to do something with one of the
   // objects, namely the one called "foo".
   doStuffWith(myObjects["foo"]);

   // Then, I want to do something with all the objects.
   foreach (obj; myObjects)  doOtherStuffWith(obj);

Of course, if iteration was over keys instead of values, I'd just write

   foreach (id, obj; myObjects)  doOtherStuffWith(obj);

But then again, right now, when iteration is over values and I want the 
keys I can just write the same thing. It all comes down to preference, 
and I prefer things the way they are now. :)



Actually, it doesn't matter all that much, as long as we get .keys and 
.values as alternatives.


I still think the default for foreach should be consistent with normal 
arrays.


-Lars

I think foreach should be consistent with opIn, that is,
if (foo in aa) { // it is in the aa
    foreach (f; aa) { // loop over each item in the aa
        // I expect foo to show up in here, since it is "in" the aa
    }
}

I use key iteration more than I use value iteration, and it is what I am 
used to. It is, as you say, a matter of preference.


Re: associative arrays: iteration is finally here

2009-10-28 Thread Denis Koroskin
On Wed, 28 Oct 2009 23:18:08 +0300, Andrei Alexandrescu  
 wrote:



I'd also like you to add a few things to the AA interface.
 First, opIn should not return a pointer to Value, but a pointer to a  
pair of Key and Value, if possible (i.e. if this change won't sacrifice  
performance).


I'm coy about adding that because it forces the implementation to hold  
keys and values next to each other. I think that was a minor mistake of  
STL - there's too much exposure of layout details.




It doesn't have to be the case: key and value are both properties (i.e.  
methods), and they don't have to be located next to each other.


Second, the AA.remove method should accept the result of an opIn  
operation to avoid an additional lookup for removal:

 if (auto value = key in aa) {
aa.remove(key); // an unnecessary lookup
}


I'll make aa.remove(key) always work and return a bool that tells you  
whether there was a mapping or not.




Err... How does it solve the double lookup problem?


Re: associative arrays: iteration is finally here

2009-10-28 Thread Lars T. Kyllingstad

Pelle Månsson wrote:

Lars T. Kyllingstad wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:

Pelle Månsson wrote:
Also, foreach with a single variable should default to keys, in my 
opinion.


That is debatable as it would make the same code do different things 
for e.g. vectors and sparse vectors.



Andrei


Debatable indeed, but I find myself using either just the keys or the 
keys and values together, rarely just the values. Maybe that's just me.



I've used iteration over values more often than iteration over keys.

Besides, I think consistency is important. Since the default for an 
ordinary array is to iterate over the values, it should be the same 
for associative arrays.


-Lars
I don't understand this; when do you want the values without the keys? 
If you do, shouldn't you be using a regular array?


Here's an example:

   class SomeObject { ... }
   void doStuffWith(SomeObject s) { ... }
   void doOtherStuffWith(SomeObject s) { ... }

   // Make a collection of objects indexed by ID strings.
   SomeObject[string] myObjects;
   ...

   // First I just want to do something with one of the
   // objects, namely the one called "foo".
   doStuffWith(myObjects["foo"]);

   // Then, I want to do something with all the objects.
   foreach (obj; myObjects)  doOtherStuffWith(obj);

Of course, if iteration was over keys instead of values, I'd just write

   foreach (id, obj; myObjects)  doOtherStuffWith(obj);

But then again, right now, when iteration is over values and I want the 
keys I can just write the same thing. It all comes down to preference, 
and I prefer things the way they are now. :)



Actually, it doesn't matter all that much, as long as we get .keys and 
.values as alternatives.


I still think the default for foreach should be consistent with normal 
arrays.


-Lars


Re: associative arrays: iteration is finally here

2009-10-28 Thread Andrei Alexandrescu

Denis Koroskin wrote:
On Wed, 28 Oct 2009 17:22:00 +0300, Andrei Alexandrescu 
 wrote:


Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not 
considered very special by the compiler.




Wow, this is outstanding! (I hope it didn't have any negative impact on 
compile-time AA capabilities).


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases 
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making 
opSlice() a universal method of getting the "all" iterator, a default 
that every container must implement.


For AAs, there would be "iterate keys" and "iterate values" 
properties or functions. What should they be called?



Thanks,

Andrei


If AA is providing a way to iterate over both keys and values (and it's 
a default iteration scheme), why should AA provide 2 other iteration 
schemes? Can't they be implemented externally (using adaptor ranges) 
with the same efficiency?


foreach (e; keys(aa)) {
writefln("key: %s", e);
}

foreach (e; values(aa)) {
writefln("value: %s", e);
}


Of course. In fact, given the iterator with .key and .value, you can 
always apply map!"a.key" or map!"a.value" to select the desired member.
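
For what it's worth, here is a minimal sketch of how that selection would read. It assumes the proposed aa.each returns a range whose front exposes .key and .value; aa.each is the name floated in this thread, not an existing druntime symbol:

```d
import std.algorithm : map;
import std.stdio : writeln;

void main()
{
    int[int] aa = [1 : 10, 2 : 20];

    // Hypothetical API: aa.each yields a range of key/value elements.
    auto keys   = aa.each.map!"a.key";   // lazily view only the keys
    auto values = aa.each.map!"a.value"; // lazily view only the values

    foreach (k; keys)   writeln(k);
    foreach (v; values) writeln(v);
}
```

So the two extra iteration schemes need no dedicated support in the AA itself; they are one map call away from the pair iterator.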



I'd also like you to add a few things to the AA interface.

First, opIn should not return a pointer to Value, but a pointer to a 
pair of Key and Value, if possible (i.e. if this change won't sacrifice 
performance).


I'm coy about adding that because it forces the implementation to hold 
keys and values next to each other. I think that was a minor mistake of 
STL - there's too much exposure of layout details.


Second, the AA.remove method should accept the result of an opIn operation 
to avoid an additional lookup for removal:


if (auto value = key in aa) {
aa.remove(key); // an unnecessary lookup
}


I'll make aa.remove(key) always work and return a bool that tells you 
whether there was a mapping or not.
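
As a usage sketch of those proposed semantics (note: the bool return is the change under discussion here, not the behavior remove had at the time):

```d
import std.stdio : writeln;

void main()
{
    int[int] aa = [1 : 10];

    // Proposed: remove always succeeds and reports whether a mapping existed.
    bool removed = aa.remove(1); // true: the key was present
    writeln(removed);
    writeln(aa.remove(1));       // false: nothing left to remove
}
```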



Something like this would be perfect:

struct Element(K,V)
{
    const K key;
    V value;
}

struct AA(K,V)
{
    // ...
    ref Element opIn(K key) { /* throws an exception if element is not found */ }

    void remove(ref Element elem) { /* removes an element from an AA */ }
    void remove(K key) { remove(key in this); }

    AARange!(K,V) opSlice() { /* iterates over both keys and values */ }
}

Last, I believe foreach loop should automatically call opSlice() on 
iteratee.


foreach in D2 should already call opSlice() whenever it's defined. If it 
doesn't, that's a bug in the compiler.



Andrei


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Lars T. Kyllingstad wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:

Pelle Månsson wrote:
Also, foreach with a single variable should default to keys, in my 
opinion.


That is debatable as it would make the same code do different things 
for e.g. vectors and sparse vectors.



Andrei


Debatable indeed, but I find myself using either just the keys or the 
keys and values together, rarely just the values. Maybe that's just me.



I've used iteration over values more often than iteration over keys.

Besides, I think consistency is important. Since the default for an 
ordinary array is to iterate over the values, it should be the same for 
associative arrays.


-Lars
I don't understand this; when do you want the values without the keys? 
If you do, shouldn't you be using a regular array?


Actually, it doesn't matter all that much, as long as we get .keys and 
.values as alternatives.


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Robert Jacques wrote:
On Wed, 28 Oct 2009 15:06:34 -0400, Denis Koroskin <2kor...@gmail.com> 
wrote:


On Wed, 28 Oct 2009 17:22:00 +0300, Andrei Alexandrescu 
 wrote:


Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not 
considered very special by the compiler.




Wow, this is outstanding! (I hope it didn't have any negative impact 
on compile-time AA capabilities).


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those 
cases iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of 
making opSlice() a universal method of getting the "all" iterator, a 
default that every container must implement.


For AAs, there would be "iterate keys" and "iterate values" 
properties or functions. What should they be called?



Thanks,

Andrei


If AA is providing a way to iterate over both keys and values (and 
it's a default iteration scheme), why should AA provide 2 other 
iteration schemes? Can't they be implemented externally (using adaptor 
ranges) with the same efficiency?


foreach (e; keys(aa)) {
 writefln("key: %s", e);
}

foreach (e; values(aa)) {
 writefln("value: %s", e);
}

I'd also like you to add a few things to the AA interface.

First, opIn should not return a pointer to Value, but a pointer to a 
pair of Key and Value, if possible (i.e. if this change won't 
sacrifice performance).
Second, the AA.remove method should accept the result of an opIn 
operation to avoid an additional lookup for removal:


if (auto value = key in aa) {
 aa.remove(key); // an unnecessary lookup
}

Something like this would be perfect:

struct Element(K,V)
{
    const K key;
    V value;
}

struct AA(K,V)
{
    // ...
    ref Element opIn(K key) { /* throws an exception if element is not found */ }


Not finding an element is a common use case, not an exception. Using 
exceptions to pass information is bad style, slow and prevents the use 
of AAs in pure/nothrow functions. Returning a pointer to an element 
would allow both key and value to be accessed and could be null if no 
element is found.
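
The built-in `in` operator already follows that convention for values: it yields a pointer that is null on a miss, so absence is ordinary control flow rather than an exception. A small illustration of how that reads at the call site:

```d
import std.stdio : writeln;

void main()
{
    int[string] aa = ["foo" : 1];

    if (auto p = "foo" in aa) // p is int*, null when the key is absent
        writeln(*p);          // found: read the value through the pointer
    else
        writeln("absent");    // not found: no exception involved
}
```

The proposal above would just widen the pointee from the value alone to a key/value element.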


Also, if opIn throws an exception, it kind of defeats the point of opIn, 
and turns it to opIndex.


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Denis Koroskin wrote:
On Wed, 28 Oct 2009 17:22:00 +0300, Andrei Alexandrescu 
 wrote:


Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not 
considered very special by the compiler.




Wow, this is outstanding! (I hope it didn't have any negative impact on 
compile-time AA capabilities).


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases 
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making  
opSlice() a universal method of getting the "all" iterator, a default  
that every container must implement.


For AAs, there would be "iterate keys" and "iterate values"  
properties or functions. What should they be called?



Thanks,

Andrei


If AA is providing a way to iterate over both keys and values (and it's 
a default iteration scheme), why should AA provide 2 other iteration 
schemes? Can't they be implemented externally (using adaptor ranges) 
with the same efficiency?


foreach (e; keys(aa)) {
writefln("key: %s", e);
}

foreach (e; values(aa)) {
writefln("value: %s", e);
}


Why would you prefer keys(aa) over aa.keys?

Last, I believe foreach loop should automatically call opSlice() on 
iteratee. There is currently an inconsistency with built-in types - you 
don't have to call [] on them, yet you must call it on all the other types:


Try implementing the range interface (front, popFront and empty), and 
they are ranges. Magic! opApply is worth mentioning here, as well.
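
A minimal illustration of that range interface: a toy counting range that foreach consumes directly, with no opSlice() or opApply required.

```d
import std.stdio : writeln;

// An input range over the half-open interval [front, end).
struct Counter
{
    int front; // the range primitive `front` can simply be a field
    int end;

    @property bool empty() const { return front >= end; }
    void popFront() { ++front; }
}

void main()
{
    foreach (i; Counter(0, 3))
        writeln(i); // prints 0, 1 and 2
}
```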


Re: associative arrays: iteration is finally here

2009-10-28 Thread Denis Koroskin
On Wed, 28 Oct 2009 22:24:46 +0300, Robert Jacques   
wrote:


On Wed, 28 Oct 2009 15:06:34 -0400, Denis Koroskin <2kor...@gmail.com>  
wrote:


On Wed, 28 Oct 2009 17:22:00 +0300, Andrei Alexandrescu  
 wrote:


Walter has magically converted his work on T[new] into work on making  
associative arrays true templates defined in druntime and not  
considered very special by the compiler.




Wow, this is outstanding! (I hope it didn't have any negative impact on  
compile-time AA capabilities).


This is very exciting because it opens up or simplifies a number of  
possibilities. One is that of implementing true iteration. I actually  
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases  
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making  
opSlice() a universal method of getting the "all" iterator, a default  
that every container must implement.


For AAs, there would be "iterate keys" and "iterate values"  
properties or functions. What should they be called?



Thanks,

Andrei


If AA is providing a way to iterate over both keys and values (and it's  
a default iteration scheme), why should AA provide 2 other iteration  
schemes? Can't they be implemented externally (using adaptor ranges)  
with the same efficiency?


foreach (e; keys(aa)) {
 writefln("key: %s", e);
}

foreach (e; values(aa)) {
 writefln("value: %s", e);
}

I'd also like you to add a few things to the AA interface.

First, opIn should not return a pointer to Value, but a pointer to a  
pair of Key and Value, if possible (i.e. if this change won't sacrifice  
performance).
Second, the AA.remove method should accept the result of an opIn  
operation to avoid an additional lookup for removal:


if (auto value = key in aa) {
 aa.remove(key); // an unnecessary lookup
}

Something like this would be perfect:

struct Element(K,V)
{
    const K key;
    V value;
}

struct AA(K,V)
{
    // ...
    ref Element opIn(K key) { /* throws an exception if element is not found */ }


Not finding an element is a common use case, not an exception. Using  
exceptions to pass information is bad style, slow and prevents the use  
of AAs in pure/nothrow functions. Returning a pointer to an element  
would allow both key and value to be accessed and could be null if no  
element is found.




Oops, right, I first wrote it to return a pointer but changed it to a  
reference at the last moment (mixed it up with opIndex for some reason).

AA.remove should accept a pointer, too.

    void remove(ref Element elem) { /* removes an element from an AA */ }
    void remove(K key) { remove(key in this); }

    AARange!(K,V) opSlice() { /* iterates over both keys and values */ }
}

Last, I believe foreach loop should automatically call opSlice() on  
iteratee. There is currently an inconsistency with built-in types - you  
don't have to call [] on them, yet you must call it on all the other  
types:


// fine if array is T[] or K[V]
foreach (i; array) { ... }

// opSlice() is explicit and mandatory for user-defined containers  
because they are not ranges.

foreach (i; container[]) { ... }

Thanks!


Re: associative arrays: iteration is finally here

2009-10-28 Thread Robert Jacques
On Wed, 28 Oct 2009 15:06:34 -0400, Denis Koroskin <2kor...@gmail.com>  
wrote:


On Wed, 28 Oct 2009 17:22:00 +0300, Andrei Alexandrescu  
 wrote:


Walter has magically converted his work on T[new] into work on making  
associative arrays true templates defined in druntime and not  
considered very special by the compiler.




Wow, this is outstanding! (I hope it didn't have any negative impact on  
compile-time AA capabilities).


This is very exciting because it opens up or simplifies a number of  
possibilities. One is that of implementing true iteration. I actually  
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases  
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making  
opSlice() a universal method of getting the "all" iterator, a default  
that every container must implement.


For AAs, there would be "iterate keys" and "iterate values"  
properties or functions. What should they be called?



Thanks,

Andrei


If AA is providing a way to iterate over both keys and values (and it's  
a default iteration scheme), why should AA provide 2 other iteration  
schemes? Can't they be implemented externally (using adaptor ranges)  
with the same efficiency?


foreach (e; keys(aa)) {
 writefln("key: %s", e);
}

foreach (e; values(aa)) {
 writefln("value: %s", e);
}

I'd also like you to add a few things to the AA interface.

First, opIn should not return a pointer to Value, but a pointer to a  
pair of Key and Value, if possible (i.e. if this change won't sacrifice  
performance).
Second, the AA.remove method should accept the result of an opIn operation  
to avoid an additional lookup for removal:


if (auto value = key in aa) {
 aa.remove(key); // an unnecessary lookup
}

Something like this would be perfect:

struct Element(K,V)
{
    const K key;
    V value;
}

struct AA(K,V)
{
    // ...
    ref Element opIn(K key) { /* throws an exception if element is not found */ }


Not finding an element is a common use case, not an exception. Using  
exceptions to pass information is bad style, slow and prevents the use of  
AAs in pure/nothrow functions. Returning a pointer to an element would  
allow both key and value to be accessed and could be null if no element is  
found.


    void remove(ref Element elem) { /* removes an element from an AA */ }
    void remove(K key) { remove(key in this); }

    AARange!(K,V) opSlice() { /* iterates over both keys and values */ }
}

Last, I believe foreach loop should automatically call opSlice() on  
iteratee. There is currently an inconsistency with built-in types - you  
don't have to call [] on them, yet you must call it on all the other  
types:


// fine if array is T[] or K[V]
foreach (i; array) { ... }

// opSlice() is explicit and mandatory for user-defined containers  
because they are not ranges.

foreach (i; container[]) { ... }

Thanks!


Re: Shared Hell

2009-10-28 Thread Denis Koroskin
On Wed, 28 Oct 2009 20:30:45 +0300, Walter Bright  
 wrote:


 Andrei would suggest a Shared!(T) template that would wrap an unshared  
type and make all methods shared. This would work, but requires full  
AST manipulation capabilities (it's clearly not enough to just mark all  
the members shared). What should we do until then?


shared(T) should transitively make a new type where it's all shared.


Type, yes, but not the methods. It will make a type with *no* usable 
methods (because they still accept and operate on thread-local variables).


I was hinting about template that would create a separate fully  
shared-aware type so that there would be no need for code duplication.  
I.e. it would transform the following class:


class Float
{
this(float value) { this.value = value; }

Float opAdd(Float other)
{
return new Float(this.value + other.value);
}

private float value;
}

into the following:

class SharedFloat
{
this(float value) shared { this.value = value; }

shared Float opAdd(shared Float other) shared
{
return new shared SharedFloat(this.value + other.value);
}

private shared float value;
}

This obviously requires techniques not available in D currently (AST  
manipulation).


Re: associative arrays: iteration is finally here

2009-10-28 Thread Denis Koroskin
On Wed, 28 Oct 2009 17:22:00 +0300, Andrei Alexandrescu  
 wrote:


Walter has magically converted his work on T[new] into work on making  
associative arrays true templates defined in druntime and not considered  
very special by the compiler.




Wow, this is outstanding! (I hope it didn't have any negative impact on  
compile-time AA capabilities).


This is very exciting because it opens up or simplifies a number of  
possibilities. One is that of implementing true iteration. I actually  
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases  
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of makign  
opSlice() a universal method of getting the "all" iterator, a default  
that every container must implement.


For AAs, there would be a "iterate keys" and "iterate values" properties  
or functions. How should they be called?



Thanks,

Andrei


If AA is providing a way to iterate over both keys and values (and it's a  
default iteration scheme), why should AA provide 2 other iteration  
schemes? Can't they be implemented externally (using adaptor ranges) with  
the same efficiency?


foreach (e; keys(aa)) {
writefln("key: %s", e);
}

foreach (e; values(aa)) {
writefln("value: %s", e);
}

I'd also like you to add a few things in an AA interface.

First, opIn should not return a pointer to Value, but a pointer to a pair  
of Key and Value, if possible (i.e. if this change won't sacrifice  
performance).
Second, AA.remove method should accept result of opIn operation to avoid  
an additional lookup for removal:


if (auto value = key in aa) {
aa.remove(key); // an unnecessary lookup
}

Something like this would be perfect:

struct Element(K,V)
{
const K key;
V value;
}

struct AA(K,V)
{
//...
ref Element opIn(K key) { /* throws an exception if element is not  
found */ }

void remove(ref Element elem) { /* removes an element from an AA */ }
void remove(K key) { remove(key in this); }

AARange!(K,V) opSlice() { /* iterates over both keys and values */ }
}

Last, I believe the foreach loop should automatically call opSlice() on the 
iteratee. There is currently an inconsistency with built-in types - you 
don't have to call [] on them, yet you must call it on all the other types:


// fine if array is T[] or K[V]
foreach (i; array) { ... }

// opSlice() is explicit and mandatory for user-defined containers because  
they are not ranges.

foreach (i; container[]) { ... }

Thanks!


Re: Shared Hell

2009-10-28 Thread Denis Koroskin

On Wed, 28 Oct 2009 16:19:11 +0300, dsimcha  wrote:


== Quote from Walter Bright (newshou...@digitalmars.com)'s article

Denis Koroskin wrote:
> I've recently updated to DMD2.035 (from DMD2.031 because all the later
> versions had issues with imports) and for the first time faced  
problems

> with shared modifier.
>
> I don't need shared and all my globals are __gshared (they are  
globally

> unique instances that don't need per-thread copies).
I don't understand. Are you running multiple threads? Are those threads
accessing globals?
A function that accesses shared data has to put in fences. There's no
way to have the same code deal with shared and unshared code.
As an escape from the type system, you can always cast away the
shared-ness. But I wonder about code that both uses global variables
shared across threads that don't need synchronization?


I have at least one use case for __gshareds in multithreaded code.  I  
often use
__gshared variables to hold program parameters that are only set using  
getopt at
program startup and never modified after the program becomes  
multithreaded.


That said, although I use D2 regularly, I basically have ignored shared's
existence up to this point.  The semantics aren't fully implemented, so  
right now
you get all the bondage and discipline of it without any of the  
benefits.  As far
as the problem of synchronized methods automatically being shared,  
here's an easy

workaround until the rough edges of shared are worked out:

//Instead of this:
synchronized SomeType someMethod(Foo args) {
// Do stuff.
}

// Use this:
SomeType someMethod(Foo args) {
synchronized(this) {
// Do stuff.
}
}


Yes, I've thought about it. That's probably the only workaround for now, so 
I'll use it. Thanks.


Re: No header files?

2009-10-28 Thread BCS

Hello Yigal,


On 27/10/2009 22:50, BCS wrote:


And as soon as you *require* an IDE to view the stuff, working
without one goes from 'less than ideal' to functionally impossible. I
think we have been over this ground before; I have major issues with
tool chains that are more or less impossible to use without a
language aware IDE. I know there are productivity gains to be had
from IDEs and I know that even in the best of cases working without
one will cost something. What I'm saying is that I want it to be
*possible* to work without one.


I'm not requiring anything of that sort. I view the metadata as
machine readable information that is processed by tools like compilers
and third party tools. The metadata is required to link in a binary
lib file NOT as documentation of the interface.



In theory, yes. In practice:

If the metadata can be used for auto-completion, then someone will make an 
IDE tool that does that. Once that happens, the metadata can function as 
documentation, and soon some people will start expecting people to use it 
as documentation. At that point it will, functionally, be documentation. And now, 
even with the best of intentions, we are where I don't want to be. If it 
is possible to link a program without some human-readable "documentation" 
you /will/ end up with libraries where the only documentation is not human-
readable.



comparing to original source is useless - commercial companies may
want to protect their commercial secrets and provide you with only a
binary file and the bare minimum to use that file in your projects.
D needs to support that option.


I agree. What I want is that your "binary file and the bare minimum
to use that file" includes something with the public API that can be
handedly read with a text editor. (Really I'd like DMD to force
people it to include a proper well written documentation file with
good code examples and a nice tutorial but we all know that's not
going to happen).


that handily readable something is documentation, NOT header files. If
you really insist you can generate man pages or text files, but
documentation MUST be easy to navigate, such as clickable PDF, HTML,
etc.
Look at it this way: a lib is a binary file with binary metadata not
meant for human beings. In order to know what API functions that lib
provides, you must provide documentation as described above, and that's
trivial to generate even without a single comment in the source.
If you allow for header files, that gives lazy programmers a reason
not to generate the documentation, even though it's as simple as
adding a line in the makefile


Some fraction of libs will be shipped with the minimum needed to compile 
and link. If that minimum doesn't include some form of documentation, some 
fraction of libs will ship without documentation. The point about being 
trivially generatable is exactly the crux of the issue: it is trivially 
generatable *with the right tools*. With the right tools (a good IDE) it's 
not only trivial to generate, you don't even need to generate it. And then 
we are right back where I don't want to be: with a language that ends up 
being nearly impossible to use without a pile of extra tools that are not 
otherwise needed.

I'm cynical enough that I'd bet if D switches to a "smarter lib
format"
a lot of people would manage to forget the documentation.
With the current system, the library must be shipped with, at a
minimum,
a human readable list of prototypes.

without proper documentation people will have no way to know how to
use such libraries.


If the lib is worth using, the function names will tell you
something.


and those function names would be provided in an easy to navigate and
read HTML format. not in header files.


"What HTML file? The vendor didn't ship my PHB an HTML file."





Re: Need some help with this...

2009-10-28 Thread Bane
grauzone Wrote:

> Bane wrote:
> > Following code will freeze app on std.gc.fullCollect(), when 
> > sqlite3_close() in destructor is called. If destructor is called manualy, 
> > everything goes ok.
> > 
> > Is it a bug, and if is, with what? It behaves same on winxp64 and centos5.2 
> > using dmd 1.30 and sqlite 3.6.5 or 3.6.19 statically import lib. Libraries 
> > are tested so I do not suspect problem lies in them (they are compiled with 
> > dmc/gcc using full threading support).
> 
> It's not your fault, it's a well known bug. The following is what happens:
> 
> - in thread 1, a C function (e.g. malloc()) enters an internal lock
> - while thread 1 holds the lock, thread 2 triggers a D garbage 
> collection cycle
> - thread 2 pauses all threads forcibly, including thread 1
> - thread 2 collects some objects and calls finalizers on it
> - your finalizer calls a C function, which tries to enter the same lock 
> that is held by thread 1
> - but thread 1 has been paused
> - the GC won't resume the other threads until your function returns, and 
> you have a deadlock
> 
> As a solution, switch to D2 or Tango. These resume all suspended threads 
> before running the finalizers.

Thank you, grauzone. That clears it up. Switching to D2 or Tango is a bit of an 
overkill as the existing codebase is big and fairly well tested (not including 
this issue :).

I assume this happens pretty rarely, because I haven't noticed this bug so 
far. Manual delete seems to be a workaround, so it is not a critical issue.

Thanks again :) Love D.


Re: class .sizeof

2009-10-28 Thread dsimcha
== Quote from Lars T. Kyllingstad (pub...@kyllingen.nospamnet)'s article
> dsimcha wrote:
> > For making the GC precise, I need to be able to get at the size of a class
> > instance at compile time.  The .sizeof property returns the size of a
> > reference, i.e. (void*).sizeof.  I need the amount of bytes an instance 
> > uses.
> Not sure if it's what you're after, but there is something called
> __traits(classInstanceSize,T).
> -Lars

Yep, that's it.  For some reason I didn't think to look there.
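
For reference, a minimal sketch of the difference between the two (the class 
name is made up for illustration; the exact numbers depend on platform, 
compiler, and the hidden fields the compiler adds, so none are shown):

```d
import std.stdio;

// .sizeof on a class type is the size of a reference;
// __traits(classInstanceSize, T) is the size of the heap instance,
// including the hidden vtable pointer and monitor field.
class Node
{
    int value;
    Node next;
}

void main()
{
    writeln(Node.sizeof);                        // reference size
    writeln(__traits(classInstanceSize, Node));  // full instance size
}
```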


Re: Need some help with this...

2009-10-28 Thread grauzone

Bane wrote:

Following code will freeze app on std.gc.fullCollect(), when sqlite3_close() in 
destructor is called. If destructor is called manualy, everything goes ok.

Is it a bug, and if is, with what? It behaves same on winxp64 and centos5.2 
using dmd 1.30 and sqlite 3.6.5 or 3.6.19 statically import lib. Libraries are 
tested so I do not suspect problem lies in them (they are compiled with dmc/gcc 
using full threading support).


It's not your fault, it's a well known bug. The following is what happens:

- in thread 1, a C function (e.g. malloc()) enters an internal lock
- while thread 1 holds the lock, thread 2 triggers a D garbage 
collection cycle

- thread 2 pauses all threads forcibly, including thread 1
- thread 2 collects some objects and calls finalizers on it
- your finalizer calls a C function, which tries to enter the same lock 
that is held by thread 1

- but thread 1 has been paused
- the GC won't resume the other threads until your function returns, and 
you have a deadlock


As a solution, switch to D2 or Tango. These resume all suspended threads 
before running the finalizers.
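
A hedged sketch of the manual-release workaround mentioned elsewhere in this 
thread (the Database/close names are made up for illustration; the extern(C) 
declarations mirror the ones in Bane's original post): release the C resource 
deterministically, so that no C lock is ever taken from inside a GC finalizer.

```d
import std.string;

struct sqlite3 {}
extern(C) int sqlite3_open(char* filename, sqlite3** database);
extern(C) int sqlite3_close(sqlite3* database);

class Database
{
    private sqlite3* h;

    this() { sqlite3_open(toStringz(":memory:"), &h); }

    // Deterministic release, called from normal code paths only.
    void close()
    {
        if (h !is null)
        {
            sqlite3_close(h);
            h = null;
        }
    }

    // The destructor stays empty: no C calls from a finalizer.
    ~this() {}
}

void useDatabase()
{
    auto db = new Database;
    scope (exit) db.close();  // runs on every exit path
    // ... work with db ...
}
```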


Re: GC Sentinel

2009-10-28 Thread Leandro Lucarella
bearophile, el 28 de octubre a las 13:19 me escribiste:
> Leandro Lucarella:
> 
> > > If that's true then handier (compile-time?) solutions can be found.
> > 
> > What do you mean?
> 
> For example something run-time that doesn't work with a version(), like
> something that can be added to the GC API. If this is seen as too much
> slow or hard to do, then just the GC may be compiled, and used as
> separated dynamic lib. With LDC other intermediate solutions may be
> possible. Think of this problem from the point of view of someone that
> wants something handy. Debugging data structures is something I do
> often.

You can compile just the GC as a shared object (.so) in Linux and then
preload it to change just the GC implementation at "dynamic-link-time".
Just run:
$ LD_PRELOAD=mygc.so ./myprogram

If your GC with extra checks is compiled as a shared object in mygc.so.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
Que barbaridad, este país se va cada ves más pa' tras, más pa' tras...
-- Sidharta Kiwi


Re: class .sizeof

2009-10-28 Thread Lars T. Kyllingstad

dsimcha wrote:

For making the GC precise, I need to be able to get at the size of a class
instance at compile time.  The .sizeof property returns the size of a
reference, i.e. (void*).sizeof.  I need the amount of bytes an instance uses.


Not sure if it's what you're after, but there is something called 
__traits(classInstanceSize,T).


-Lars


Re: Need some help with this...

2009-10-28 Thread Jason House
Object destructors can be tricky in a GC'd language. It looks like you're 
accessing a deallocated pointer in your destructor. Order of 
collection/destruction is not guaranteed.

Bane Wrote:

> Following code will freeze app on std.gc.fullCollect(), when sqlite3_close() 
> in destructor is called. If destructor is called manualy, everything goes ok.
> 
> Is it a bug, and if is, with what? It behaves same on winxp64 and centos5.2 
> using dmd 1.30 and sqlite 3.6.5 or 3.6.19 statically import lib. Libraries 
> are tested so I do not suspect problem lies in them (they are compiled with 
> dmc/gcc using full threading support).
> 
> Is this some problem with GC or, more likely, my knowledge? I would 
> appreciate some clarification, this thing took me a lot of hours to track.
> 
> Thanks, 
> Bane
> 
> ==
> 
> 
> import std.stdio;
> import std.gc;
> import std.string;
> import std.thread;
> 
> pragma(lib, "sqlite3.lib");
> const int SQLITE_OK = 0;  // Successful result.
> struct sqlite3 {}
> extern(C) int sqlite3_open (char* filename, sqlite3** database);
> extern(C) int sqlite3_close(sqlite3* database);
> 
> class SQLite {
>   sqlite3* h;
>   this(){
> assert(sqlite3_open(toStringz(":memory:"), &h) == SQLITE_OK);
>   }
>   ~this(){
> writefln("~this start"); // to help debug
> assert(sqlite3_close(h) == SQLITE_OK);
> writefln("~this stop"); // to help debug
>   }
> }
> 
> class T : Thread {
>   int run(){
> SQLite s = new SQLite;
> // if next line is uncommented then app wont freeze
> // delete s;
> return 0;
>   }
> }
> 
> void main(){
>   while(true){
> T t = new T;
> t.start;
> writefln(Thread.nthreads);
> if(Thread.nthreads > 10)
>   fullCollect; // this will freeze app
>   }
> }



class .sizeof

2009-10-28 Thread dsimcha
For making the GC precise, I need to be able to get at the size of a class
instance at compile time.  The .sizeof property returns the size of a
reference, i.e. (void*).sizeof.  I need the amount of bytes an instance uses.


Re: associative arrays: iteration is finally here

2009-10-28 Thread bearophile
Andrei Alexandrescu:

> That is debatable as it would make the same code do different things for 
> e.g. vectors and sparse vectors.

Iterating on the keys is more useful in real-world programs.

Regarding the names:
- "keys", "values" return lazy iterators. "keys" returns a set-like object that 
supports an O(1) opIn_r (and eventually a few other basic set operations).
- "items" returns a lazy iterator of Tuple(key, value) (structs, then). This 
may also be named "pairs".
- "allkeys", "allvalues", "allitems"/"allpairs" return arrays of 
items/values/Tuples.

If you want to keep the API small you can even omit "allkeys", "allvalues", 
and "allitems" (so you need to do array(aa.keys) if you want them all).
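
As a sketch of how such lazy iterators could be layered on the proposed AA 
iteration (the .each/.front/.popFront names follow the range API discussed in 
this thread; nothing here is final):

```d
// A hedged sketch: a lazy "keys" view over any iterator whose
// front exposes a .key member, as the proposed AA iterator does.
struct KeysOf(Iter)
{
    Iter it;
    @property bool empty() { return it.empty; }
    @property auto front() { return it.front.key; }
    void popFront()        { it.popFront(); }
}

// Wrap an AA's full iterator into a keys-only range.
auto keys(AA)(AA aa)
{
    return KeysOf!(typeof(aa.each))(aa.each);
}
```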

Bye,
bearophile


Re: LLVM 2.6 Release!

2009-10-28 Thread Andrei Alexandrescu

bearophile wrote:

Andrei Alexandrescu:


Sounds good, thanks. If anyone is up to the task, we'd all be grateful.


LLVM is good for D/LDC for several things:

[snip]

Thanks. Just in case I was misunderstood - I said: If anyone would want 
to write an article (e.g. a blog entry, magazine article, self-published 
Web article etc., but not a newsgroup article) about ldc, I'd be glad to 
cite it and link to it from within TDPL.



Andrei


Re: LLVM 2.6 Release!

2009-10-28 Thread bearophile
Andrei Alexandrescu:

> Sounds good, thanks. If anyone is up to the task, we'd all be grateful.

LLVM is good for D/LDC for several things:
- The LLVM optimizer is good, usually quite a bit better than the DMD one. If 
you write C-like D code you usually reach performance similar to true C code.
- LLVM misses some optimizations, like de-virtualization and auto-vectorization, 
but LLVM is a very alive project (partially paid for by Apple), so those things 
will probably be added eventually. I have shown bugs to LLVM people and they 
fixed them in a few days. They were almost as fast as the LDC developers 
(lately the LDC devs seem sleepy to me).
- LLVM is written in good enough C++, and its API is not bad. You can use LLVM 
for your own purposes in just a few days for small projects. Try doing the same 
thing with GCC.
- LLVM is not a compiler, it's a compilation framework. More and more projects 
use it in several different ways. D is not a VM-based language, but eventually 
it may even be possible for LDC to compile and run code at runtime, for example 
to instantiate templates at runtime. LLVM can be used for several other things.
- LLVM will probably offer ways to implement a lint tool for D.
- LLVM is designed for all different kinds of purposes, so inside it you can 
find things like overflow-safe fixed-sized integers, stack canaries, other 
stack protection means, ways to design a precise GC that keeps in account the 
stack too (and eventually registers too).
- LLVM offers and will offer some modern things, like link-time optimization, a 
good (gold) linker, and an ecosystem of tools that work and can communicate 
with each other using reliable formats like bc and ll.
- You can use LLVM on 64-bit CPUs too, and eventually exceptions on Windows 
too, etc. Some of the other optimizations useful for C++ (de-virtualization) 
will be pushed into the back-end (and not into the new front-end, Clang), so 
they will be usable by LDC too, for free.
- LLVM is made of parts, so you can use them and re-combine them for many 
different purposes. There are many research papers written on and with LLVM, 
and more will come, because hacking LLVM is quite a bit simpler than doing 
similar things with GCC (despite GCC 4.5 now having a plug-in system; LLVM 
doesn't need one because it works the opposite way). So LLVM will allow us to 
do things that today we haven't invented yet.
- Some of the top LLVM developers are paid by Apple, and this has disadvantages 
too. You can see an example of this in the missing videos/PDFs of the last 
conference; they were not allowed to show them, because Apple is sometimes even 
more corporate than Microsoft:
http://llvm.org/devmtg/2009-10/

Bye,
bearophile


Re: associative arrays: iteration is finally here

2009-10-28 Thread Lars T. Kyllingstad

Pelle Månsson wrote:

Andrei Alexandrescu wrote:

Pelle Månsson wrote:
Also, foreach with a single variable should default to keys, in my 
opinion.


That is debatable as it would make the same code do different things 
for e.g. vectors and sparse vectors.



Andrei


Debatable indeed, but I find myself using either just the keys or the 
keys and values together, rarely just the values. Maybe that's just me.



I've used iteration over values more often than iteration over keys.

Besides, I think consistency is important. Since the default for an 
ordinary array is to iterate over the values, it should be the same for 
associative arrays.


-Lars


Re: Shared Hell

2009-10-28 Thread Walter Bright

Denis Koroskin wrote:
On Wed, 28 Oct 2009 13:17:43 +0300, Walter Bright 
 wrote:



Denis Koroskin wrote:
I've recently updated to DMD2.035 (from DMD2.031 because all the 
later versions had issues with imports) and for the first time faced 
problems with shared modifier.
 I don't need shared and all my globals are __gshared (they are 
globally unique instances that don't need per-thread copies).


I don't understand. Are you running multiple threads? Are those 
threads accessing globals?




Yes.

A function that accesses shared data has to put in fences. There's no 
way to have the same code deal with shared and unshared code.




That's frustrating. I'd like to use the same class for both cases.

But I wonder about code that both uses global variables shared across 
threads that don't need synchronization?


You missed the point. I do the synchronization myself and I'm fine with 
switching to shared (I do believe it is a nice concept). The reason I 
use __gshared is that shared objects were garbage-collected while still 
being in use a few versions of DMD back, and I had no choice but to 
switch to __gshared. I hope it is fixed by now.


Which OS are you using? This is definitely a bug. If it's still there, 
you can work around by adding the tls data as a "root" to the gc.
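
A hedged sketch of that workaround using the druntime GC API (assuming 
core.memory is available in the runtime in use; the variable name is made up 
for illustration):

```d
import core.memory;

// Global data shared across threads without the shared qualifier.
__gshared int[] table;

shared static this()
{
    table = new int[](1024);
    // Register the allocation with the GC so it is scanned as a root
    // and never collected while the program runs.
    GC.addRange(table.ptr, table.length * int.sizeof);
}
```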



But I still can't make my data shared, since shared is transitive 
(viral). After a few hours or work I still can't even compile my code.


As an escape from the type system, you can always cast away the 
shared-ness.


That's the only way I have now. Casts from shared to unshared *everywhere*:

class BuildManager : BuildListener
{
synchronized void build(shared Target target)
{
// ...

_buildingThread = new shared(Thread)(&_startBuild); // creating 
a new shared Thread. Yes, shared Thread, because BuildManager is global.


//_buildingThread.start(); // Error: function 
core.thread.Thread.start () is not callable using argument types () shared
(cast(Thread)_buildingThread).start(); // works, but ugly, and I 
don't have a reason to hijack the type system in this case


// ...
}
}

Andrei would suggest a Shared!(T) template that would wrap an unshared 
type and make all methods shared. This would work, but requires full AST 
manipulation capabilities (it's clearly not enough to just mark all the 
members shared). What should we do until then?


shared(T) should transitively make a new type where it's all shared.


Re: GC Sentinel

2009-10-28 Thread bearophile
Leandro Lucarella:

> > If that's true then handier (compile-time?) solutions can be found.
> 
> What do you mean?

For example something run-time that doesn't work with a version(), like 
something that can be added to the GC API. If this is seen as too slow or 
hard to do, then just the GC may be compiled and used as a separate dynamic 
lib. With LDC other intermediate solutions may be possible. Think of this 
problem from the point of view of someone who wants something handy. Debugging 
data structures is something I do often.

Bye,
bearophile


Re: ICE: template.c:806: failed assertion `i < parameters->dim'

2009-10-28 Thread grauzone

Don wrote:

Jacob Carlborg wrote:

On 10/28/09 16:32, Don wrote:

Jacob Carlborg wrote:

I have quite a big project and when I compile it I get this internal
compiler error: template.c:806: failed assertion `i < parameters->dim'.
I don't know what could cause that error so I don't know where to look
in my code to try to produce a small test case and report an issue.
I'm using quite a lot of templates, template mixins and string mixins.


Bugzilla 2229. Was fixed in DMD1.049.
There have been about 60 ICE bugs fixed since 1.045.
I want to find out what the regressions are that are stopping people
from using the latest DMD -- it's time for the ICE age to end.


I haven't been using any later version because of various known 
regressions, I think they've been solved know. I tried to compile 
Tango trunk with DMD trunk and it failed with:


/Users/doob/development/d/tango-trunk/build/user/../../user/tango/io/compress/BzipStream.d(270): 
Error: var has no effect in expression (w)


It's returning a value in a void function.

I don't know if it's a regression that hasn't been solved or if it's 
something wrong with Tango.


The compiler now catches a few bugs that used to slip past before. Just 
change the "return w;" into "return;".


I thought that was a feature?


Re: ICE: template.c:806: failed assertion `i < parameters->dim'

2009-10-28 Thread Andrei Alexandrescu

Don wrote:

Jacob Carlborg wrote:
I have quite a big project and when I compile it I get this internal 
compiler error: template.c:806: failed assertion `i < parameters->dim'.
I don't know what could cause that error so I don't know where to look 
in my code to try to produce a small test case and report an issue. 
I'm using quite a lot of templates, template mixins and string mixins.


Bugzilla 2229. Was fixed in DMD1.049.
There have been about 60 ICE bugs fixed since 1.045.
I want to find out what the regressions are that are stopping people 
from using the latest DMD -- it's time for the ICE age to end.


Very nicely put!

Andrei


Re: What Does Haskell Have to Do with C++?

2009-10-28 Thread Andrei Alexandrescu

Don wrote:

Jeremie Pelletier wrote:
http://bartoszmilewski.wordpress.com/2009/10/21/what-does-haskell-have-to-do-with-c/ 



Bartosz's second part of 'Template Metaprogramming Made Easy (Huh?)', 
its quite a read :)


Yes, it is excellent. Two comments:
(1) Bartosz's D examples make me seriously question 'static foreach' 
which is scheduled for implementation (bugzilla 3377).
If implemented, it will be a source of frustration, since it will not be 
very usable inside templates. The ability to exit from a 'static 
foreach' is something which is possible with a 'return'-style syntax, 
but is not possible with the 'eponymous template hack'.


I think breaking early out of a static foreach is not necessary (but 
indeed convenient) for writing good loops.


(2) It seems pretty clear that we need to allow the eponymous trick to 
continue to work when more than one template member is present. I think 
everyone who's ever attempted template metaprogramming in D has proposed 
 it!


Yes, that was on the list for a long time. Bartosz has even participated 
in many related discussions. I'm surprised the article made it seem an 
inescapable matter of principle, when it clearly is a trivially fixable 
bug in the language definition.


We discussed using "this" instead of the template's name, but that has a 
few ambiguity problems. Currently, we want to allow a template to define 
private symbols in addition to the eponymous trick (a term that Bartosz 
shouldn't have implied paternity of, sigh). Those private members would 
be accessible from inside the template's definition, but not from the 
outside. That would effectively remove the issue.
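
For readers who haven't met it, a minimal sketch of the eponymous trick under 
discussion (standard D; the limitation being debated is that, as the language 
stood, adding any second member to the template broke the shorthand 
resolution):

```d
// Eponymous template: the single member shares the template's name,
// so isPointer!(T) resolves directly to the enum, not to a scope.
template isPointer(T)
{
    enum bool isPointer = is(T U : U*);
}

static assert( isPointer!(int*));
static assert(!isPointer!(int));
```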



Andrei


Re: Shared Hell

2009-10-28 Thread Kagamin
Denis Koroskin Wrote:

> > As an escape from the type system, you can always cast away the  
> > shared-ness.
> 
> That's the only way I have now. Casts from shared to unshared *everywhere*:
> 
> class BuildManager : BuildListener
> {
>  synchronized void build(shared Target target)
>  {
>  // ...
> 
>  _buildingThread = new shared(Thread)(&_startBuild); // creating a  
> new shared Thread. Yes, shared Thread, because BuildManager is global.
> 
>  //_buildingThread.start(); // Error: function  
> core.thread.Thread.start () is not callable using argument types () shared
>  (cast(Thread)_buildingThread).start(); // works, but ugly, and I  
> don't have a reason to hijack the type system in this case
> 
>  // ...
>  }
> }

You can use a local variable to avoid casting in every statement:

class BuildManager : BuildListener
{
 synchronized void build(shared Target target)
 {
 // ...
 auto bt = new Thread(&_startBuild);
 _buildingThread = cast(shared Thread)bt; // store as shared
 bt.start(); // work with the non-shared variable normally
 // ...
 }
}


Re: associative arrays: iteration is finally here

2009-10-28 Thread Andrei Alexandrescu

yigal chripun wrote:

Andrei Alexandrescu Wrote:

Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not considered 
very special by the compiler.


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases 
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of makign 
opSlice() a universal method of getting the "all" iterator, a default 
that every container must implement.


For AAs, there would be a "iterate keys" and "iterate values" properties 
or functions. How should they be called?



Thanks,

Andrei


That looks neat. What's the mechanism to tie the templates to the syntax?

I don't understand why all containers must provide a default range. What's the 
default for a binary tree? It has several orderings (pre, post, in-order) but I 
can't say that one is "more default" than the others.


The cheapest to implement. As I mentioned in the Bogo containers 
discussion, I think any container must implement some way of iterating 
it. A container without an iterator is not a container.


Andrei


Re: associative arrays: iteration is finally here

2009-10-28 Thread Leandro Lucarella
Pelle Månsson, el 28 de octubre a las 15:48 me escribiste:
> Andrei Alexandrescu wrote:
> >Walter has magically converted his work on T[new] into work on
> >making associative arrays true templates defined in druntime and
> >not considered very special by the compiler.
> >
> >This is very exciting because it opens up or simplifies a number
> >of possibilities. One is that of implementing true iteration. I
> >actually managed to implement last night something that allows you
> >to do:
> >
> >int[int] aa = [ 1:1 ];
> >auto iter = aa.each;
> >writeln(iter.front.key);
> >writeln(iter.front.value);
> >
> >Two other iterations are possible: by key and by value (in those
> >cases iter.front just returns a key or a value).
> >
> >One question is, what names should these bear? I am thinking of
> >makign opSlice() a universal method of getting the "all" iterator,
> >a default that every container must implement.
> >
> >For AAs, there would be a "iterate keys" and "iterate values"
> >properties or functions. How should they be called?
> >
> >
> >Thanks,
> >
> >Andrei
> aa.each, aa.keys and aa.values seem good names?

I might be too pythonic, but aa.items sounds a little better to me ;)

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
Can you stand up?
I do believe it's working, good.
That'll keep you going through the show
Come on it's time to go.
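For reference, the Python dict API that Leandro alludes to uses exactly this triple of names, with items playing the role proposed for aa.each:

```python
d = {1: 1, 2: 4}
# keys() / values() / items() are the dict analogues of the names being
# debated for AAs; items() pairs each key with its value, like aa.each.
# (Insertion order is guaranteed in CPython 3.7+.)
assert list(d.keys()) == [1, 2]
assert list(d.values()) == [1, 4]
assert list(d.items()) == [(1, 1), (2, 4)]
```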


Re: ICE: template.c:806: failed assertion `i < parameters->dim'

2009-10-28 Thread Don

Jacob Carlborg wrote:

On 10/28/09 16:32, Don wrote:

Jacob Carlborg wrote:

I have quite a big project and when I compile it I get this internal
compiler error: template.c:806: failed assertion `i < parameters->dim'.
I don't know what could cause that error so I don't know where to look
in my code to try to produce a small test case and report an issue.
I'm using quite a lot of templates, template mixins and string mixins.


Bugzilla 2229. Was fixed in DMD1.049.
There have been about 60 ICE bugs fixed since 1.045.
I want to find out what the regressions are that are stopping people
from using the latest DMD -- it's time for the ICE age to end.


I haven't been using any later version because of various known 
regressions; I think they've been solved now. I tried to compile Tango 
trunk with DMD trunk and it failed with:


/Users/doob/development/d/tango-trunk/build/user/../../user/tango/io/compress/BzipStream.d(270): 
Error: var has no effect in expression (w)


It's returning a value in a void function.

I don't know if it's a regression that hasn't been solved or if it's 
something wrong with Tango.


The compiler now catches a few bugs that used to slip past before. Just 
change the "return w;" into "return;".


Re: ICE: template.c:806: failed assertion `i < parameters->dim'

2009-10-28 Thread Jacob Carlborg

On 10/28/09 16:32, Don wrote:

Jacob Carlborg wrote:

I have quite a big project and when I compile it I get this internal
compiler error: template.c:806: failed assertion `i < parameters->dim'.
I don't know what could cause that error so I don't know where to look
in my code to try to produce a small test case and report an issue.
I'm using quite a lot of templates, template mixins and string mixins.


Bugzilla 2229. Was fixed in DMD1.049.
There have been about 60 ICE bugs fixed since 1.045.
I want to find out what the regressions are that are stopping people
from using the latest DMD -- it's time for the ICE age to end.


I haven't been using any later version because of various known 
regressions; I think they've been solved now. I tried to compile Tango 
trunk with DMD trunk and it failed with:


/Users/doob/development/d/tango-trunk/build/user/../../user/tango/io/compress/BzipStream.d(270): 
Error: var has no effect in expression (w)


It's returning a value in a void function.

I don't know if it's a regression that hasn't been solved or if it's 
something wrong with Tango.


Ugly identifiers

2009-10-28 Thread Leandro Lucarella
Denis Koroskin, el 28 de octubre a las 08:05 me escribiste:
> I've recently updated to DMD2.035 (from DMD2.031 because all the
> later versions had issues with imports) and for the first time faced
> problems with shared modifier.
> 
> I don't need shared and all my globals are __gshared (they are
> globally unique instances that don't need per-thread copies).

BTW, will __gshared and __traits be renamed to something that doesn't hurt
my eyes before DMD2 is finalized, or will we have to live with them until
the end of the ages? I can't remember if I'm missing any other ugly
identifiers.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
- Look, don Inodoro! A pigeon with a ring on its leg! It must be a
  messenger and it landed here!
- Well... if it's not a messenger, it's a flirt... or married.
-- Mendieta and Inodoro Pereyra


Re: associative arrays: iteration is finally here

2009-10-28 Thread yigal chripun
Andrei Alexandrescu Wrote:

> Walter has magically converted his work on T[new] into work on making 
> associative arrays true templates defined in druntime and not considered 
> very special by the compiler.
> 
> This is very exciting because it opens up or simplifies a number of 
> possibilities. One is that of implementing true iteration. I actually 
> managed to implement last night something that allows you to do:
> 
> int[int] aa = [ 1:1 ];
> auto iter = aa.each;
> writeln(iter.front.key);
> writeln(iter.front.value);
> 
> Two other iterations are possible: by key and by value (in those cases 
> iter.front just returns a key or a value).
> 
> One question is, what names should these bear? I am thinking of making 
> opSlice() a universal method of getting the "all" iterator, a default 
> that every container must implement.
> 
> For AAs, there would be an "iterate keys" and an "iterate values" property 
> or function. What should they be called?
> 
> 
> Thanks,
> 
> Andrei

That looks neat. What's the mechanism to tie the templates to the syntax?

I don't understand why all containers must provide a default range. What's the 
default for a binary tree? It has several orderings (pre, post, infix), but I 
can't say that one is "more default" than the other.



Re: GC Sentinel

2009-10-28 Thread Leandro Lucarella
bearophile, el 28 de octubre a las 03:52 me escribiste:
> Leandro Lucarella:
> 
> > I think that's used to check for memory corruption, by storing a known
> > pattern before and after the actual object. Then, each time you can, you
> > check that the unused memory block is intact (meaning nobody wrote to an
> > invalid memory area).
> 
> Such things can be quite useful. Do you need to compile Phobos again to do 
> that?

Yes, it's added through a version statement.

> If that's true then handier (compile-time?) solutions can be found.

What do you mean?

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
It's been said so often that appearances deceive
Of course they'll deceive anyone vulgar enough to believe it


Re: associative arrays: iteration is finally here

2009-10-28 Thread Andrei Alexandrescu

dsimcha wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

Walter has magically converted his work on T[new] into work on making
associative arrays true templates defined in druntime and not considered
very special by the compiler.
This is very exciting because it opens up or simplifies a number of
possibilities. One is that of implementing true iteration. I actually
managed to implement last night something that allows you to do:
int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);
Two other iterations are possible: by key and by value (in those cases
iter.front just returns a key or a value).
One question is, what names should these bear? I am thinking of makign
opSlice() a universal method of getting the "all" iterator, a default
that every container must implement.
For AAs, there would be a "iterate keys" and "iterate values" properties
or functions. How should they be called?
Thanks,
Andrei


Awesome, this definitely improves the interface, but how about the 
implementation? The current implementation, while fast for reading, is 
unbelievably slow for adding elements, requires a heap allocation (read: a 
global lock) on *every* insertion, and generates an insane amount of false 
pointers. Even if I succeed in making heap scanning (mostly) precise, it's 
not clear if the current AA implementation could easily be made to benefit, 
since it isn't template based. It uses RTTI internally instead, and the 
types it's operating on aren't known to the implementation at compile time, 
so I wouldn't be able to use templates to generate the bitmask at compile 
time. The structs it uses internally would therefore have to be scanned 
conservatively.


I'm afraid that efficiency is a matter I need to defer to the community 
for now. Right now, I am trying to get TDPL done. Having or not having 
range-style iteration influences the material. Making that efficient is 
a matter that would not influence the material (as long as there is a 
strong belief that that's doable).


Unrelated: one thing that we need to change about AAs is the inability 
to get a true reference to the stored element. aa[k] returns an rvalue, 
and aa[k] = v is done in a manner akin to opIndexAssign. But a serious AA 
should have a method of reaching the actual storage for a value, I think.



Andrei
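The rvalue problem Andrei describes can be illustrated in Python with a small hypothetical wrapper (the CopyingMap class below is invented for illustration): when a lookup hands back a copy rather than the stored element, writes made through the result are silently lost.

```python
class CopyingMap:
    """A mapping whose lookup returns a copy of the stored value --
    an 'rvalue'. Writes through the result never reach the table."""
    def __init__(self):
        self._data = {}

    def __setitem__(self, k, v):
        self._data[k] = list(v)        # store a private copy

    def __getitem__(self, k):
        return list(self._data[k])     # hand back a copy, not the storage

m = CopyingMap()
m["a"] = [0]
m["a"].append(1)              # mutates the returned copy only
assert m._data["a"] == [0]    # the write was lost -- the rvalue problem
```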


Re: ICE: template.c:806: failed assertion `i < parameters->dim'

2009-10-28 Thread Jacob Carlborg

On 10/28/09 16:32, Don wrote:

Jacob Carlborg wrote:

I have quite a big project and when I compile it I get this internal
compiler error: template.c:806: failed assertion `i < parameters->dim'.
I don't know what could cause that error so I don't know where to look
in my code to try to produce a small test case and report an issue.
I'm using quite a lot of templates, template mixins and string mixins.


Bugzilla 2229. Was fixed in DMD1.049.
There have been about 60 ICE bugs fixed since 1.045.
I want to find out what the regressions are that are stopping people
from using the latest DMD -- it's time for the ICE age to end.


Thanks, I'll try the latest DMD.


Re: associative arrays: iteration is finally here

2009-10-28 Thread Max Samukha
On Wed, 28 Oct 2009 09:22:00 -0500, Andrei Alexandrescu
 wrote:

>Walter has magically converted his work on T[new] into work on making 
>associative arrays true templates defined in druntime and not considered 
>very special by the compiler.
>
>This is very exciting because it opens up or simplifies a number of 
>possibilities. One is that of implementing true iteration. I actually 
>managed to implement last night something that allows you to do:
>
>int[int] aa = [ 1:1 ];
>auto iter = aa.each;
>writeln(iter.front.key);
>writeln(iter.front.value);
>
>Two other iterations are possible: by key and by value (in those cases 
>iter.front just returns a key or a value).
>
>One question is, what names should these bear? I am thinking of making 
>opSlice() a universal method of getting the "all" iterator, a default 
>that every container must implement.

It looks pretty intuitive to me.

>
>For AAs, there would be an "iterate keys" and an "iterate values" property 
>or function. What should they be called?

eachKey, eachValue
keyRange, valueRange
keys, values

I'd prefer the last one.

>
>
>Thanks,
>
>Andrei


Re: associative arrays: iteration is finally here

2009-10-28 Thread dsimcha
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
> Walter has magically converted his work on T[new] into work on making
> associative arrays true templates defined in druntime and not considered
> very special by the compiler.
> This is very exciting because it opens up or simplifies a number of
> possibilities. One is that of implementing true iteration. I actually
> managed to implement last night something that allows you to do:
> int[int] aa = [ 1:1 ];
> auto iter = aa.each;
> writeln(iter.front.key);
> writeln(iter.front.value);
> Two other iterations are possible: by key and by value (in those cases
> iter.front just returns a key or a value).
> One question is, what names should these bear? I am thinking of making
> opSlice() a universal method of getting the "all" iterator, a default
> that every container must implement.
> For AAs, there would be an "iterate keys" and an "iterate values" property
> or function. What should they be called?
> Thanks,
> Andrei

Awesome, this definitely improves the interface, but how about the 
implementation? The current implementation, while fast for reading, is 
unbelievably slow for adding elements, requires a heap allocation (read: a 
global lock) on *every* insertion, and generates an insane amount of false 
pointers. Even if I succeed in making heap scanning (mostly) precise, it's 
not clear if the current AA implementation could easily be made to benefit, 
since it isn't template based. It uses RTTI internally instead, and the 
types it's operating on aren't known to the implementation at compile time, 
so I wouldn't be able to use templates to generate the bitmask at compile 
time. The structs it uses internally would therefore have to be scanned 
conservatively.


Re: ICE: template.c:806: failed assertion `i < parameters->dim'

2009-10-28 Thread Don

Jacob Carlborg wrote:
I have quite a big project and when I compile it I get this internal 
compiler error: template.c:806: failed assertion `i < parameters->dim'.
I don't know what could cause that error so I don't know where to look 
in my code to try to produce a small test case and report an issue. I'm 
using quite a lot of templates, template mixins and string mixins.


Bugzilla 2229. Was fixed in DMD1.049.
There have been about 60 ICE bugs fixed since 1.045.
I want to find out what the regressions are that are stopping people 
from using the latest DMD -- it's time for the ICE age to end.


ICE: template.c:806: failed assertion `i < parameters->dim'

2009-10-28 Thread Jacob Carlborg
I have quite a big project and when I compile it I get this internal 
compiler error: template.c:806: failed assertion `i < parameters->dim'.
I don't know what could cause that error so I don't know where to look 
in my code to try to produce a small test case and report an issue. I'm 
using quite a lot of templates, template mixins and string mixins.


I'm using dmd v1.045; I also get the same error with the latest ldc, with 
this backtrace:


Assertion failed: (i < parameters->dim), function 
deduceFunctionTemplateMatch, file 
/Users/doob/development/d/ldc/ldc/dmd/template.c, line 816.
0   ldc   0x00bea798 
llvm::sys::RWMutexImpl::writer_release() + 312
1   ldc   0x00beb231 
llvm::sys::RemoveFileOnSignal(llvm::sys::Path const&, std::string*) + 1393

2   libSystem.B.dylib 0x955072bb _sigtramp + 43
3   libSystem.B.dylib 0x _sigtramp + 1789889903
4   libSystem.B.dylib 0x9557b23a raise + 26
5   libSystem.B.dylib 0x95587679 abort + 73
6   libSystem.B.dylib 0x9557c3db __assert_rtn + 101
7   ldc   0x000a0911 
TemplateDeclaration::deduceFunctionTemplateMatch(Loc, Objects*, 
Expression*, Expressions*, Objects*) + 2129
8   ldc   0x000a11fa 
TemplateDeclaration::deduceFunctionTemplate(Scope*, Loc, Objects*, 
Expression*, Expressions*, int) + 250

9   ldc   0x00041981 CallExp::semantic(Scope*) + 1009
10  ldc   0x000942f2 ReturnStatement::semantic(Scope*) + 386
11  ldc   0x0009146f CompoundStatement::semantic(Scope*) + 191
12  ldc   0x00048c83 FuncDeclaration::semantic3(Scope*) + 1875
13  ldc   0x0009a8b6 TemplateInstance::semantic3(Scope*) + 182
14  ldc   0x000a26c0 TemplateInstance::semantic(Scope*) + 1440
15  ldc   0x00042844 CallExp::semantic(Scope*) + 4788
16  ldc   0x000942f2 ReturnStatement::semantic(Scope*) + 386
17  ldc   0x0009146f CompoundStatement::semantic(Scope*) + 191
18  ldc   0x00048c83 FuncDeclaration::semantic3(Scope*) + 1875
19  ldc   0x7b81 AttribDeclaration::semantic3(Scope*) + 113
20  ldc   0x000979e6 AggregateDeclaration::semantic3(Scope*) 
+ 150

21  ldc   0x00064886 Module::semantic3(Scope*) + 166
22  ldc   0x0010185f main + 4223
23  ldc   0x4036 start + 54
24  ldc   0x0011 start + 18446744073709535249


Need some help with this...

2009-10-28 Thread Bane
The following code will freeze the app on std.gc.fullCollect(), when sqlite3_close() 
is called in the destructor. If the destructor is called manually, everything goes OK.

Is it a bug, and if so, with what? It behaves the same on winxp64 and centos5.2 
using dmd 1.30 and sqlite 3.6.5 or 3.6.19 as a static import lib. The libraries are 
tested, so I don't suspect the problem lies in them (they are compiled with dmc/gcc 
with full threading support).

Is this some problem with the GC or, more likely, with my knowledge? I would appreciate 
some clarification; this thing took me many hours to track down.

Thanks, 
Bane

==


import std.stdio;
import std.gc;
import std.string;
import std.thread;

pragma(lib, "sqlite3.lib");
const int SQLITE_OK = 0;// Successful result.
struct sqlite3 {}
extern(C) int sqlite3_open (char* filename, sqlite3** database);
extern(C) int sqlite3_close(sqlite3* database);

class SQLite {
  sqlite3* h;
  this(){
assert(sqlite3_open(toStringz(":memory:"), &h) == SQLITE_OK);
  }
  ~this(){
writefln("~this start"); // to help debug
assert(sqlite3_close(h) == SQLITE_OK);
writefln("~this stop"); // to help debug
  }
}

class T : Thread {
  int run(){
SQLite s = new SQLite;
    // if the next line is uncommented then the app won't freeze
// delete s;
return 0;
  }
}

void main(){
  while(true){
T t = new T;
t.start;
writefln(Thread.nthreads);
if(Thread.nthreads > 10)
  fullCollect; // this will freeze app
  }
}


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Pelle Månsson wrote:

Andrei Alexandrescu wrote:
Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not 
considered very special by the compiler.


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those 
cases iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of 
making opSlice() a universal method of getting the "all" iterator, a 
default that every container must implement.


For AAs, there would be an "iterate keys" and an "iterate values" 
property or function. What should they be called?



Thanks,

Andrei

aa.each, aa.keys and aa.values seem good names?


The latter two would break existing definitions of keys and values.

Is this bad? If you want an array from them you could just construct it 
from the iterator.


Also, foreach with a single variable should default to keys, in my 
opinion.


That is debatable as it would make the same code do different things for 
e.g. vectors and sparse vectors.



Andrei


Debatable indeed, but I find myself using either just the keys or the 
keys and values together, rarely just the values. Maybe that's just me.
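For what it's worth, Python resolved the same debate in favor of keys: iterating a dict with a single loop variable yields its keys, and key/value pairs require asking for items() explicitly.

```python
d = {"x": 1, "y": 2}
# A single-variable loop over a dict yields keys, not values.
# (Insertion order is guaranteed in CPython 3.7+.)
assert [k for k in d] == ["x", "y"]
assert list(d) == list(d.keys())
assert [(k, v) for k, v in d.items()] == [("x", 1), ("y", 2)]
```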


Re: associative arrays: iteration is finally here

2009-10-28 Thread Justin Johansson
Andrei Alexandrescu Wrote:

> Walter has magically converted his work on T[new] into work on making 
> associative arrays true templates defined in druntime and not considered 
> very special by the compiler.
> 
> This is very exciting because it opens up or simplifies a number of 
> possibilities. One is that of implementing true iteration. I actually 
> managed to implement last night something that allows you to do:
> 
> int[int] aa = [ 1:1 ];
> auto iter = aa.each;
> writeln(iter.front.key);
> writeln(iter.front.value);
> 
> Two other iterations are possible: by key and by value (in those cases 
> iter.front just returns a key or a value).
> 
> One question is, what names should these bear? I am thinking of making 
> opSlice() a universal method of getting the "all" iterator, a default 
> that every container must implement.
> 
> For AAs, there would be an "iterate keys" and an "iterate values" property 
> or function. What should they be called?
> 
> 
> Thanks,
> 
> Andrei

Don't know if I'm off track, but ...
would "opApplyKeys" and "opApplyValues" be useful analogues, somewhat in
keeping with the current semantics of opApply?

Justin






Re: associative arrays: iteration is finally here

2009-10-28 Thread Andrei Alexandrescu

Pelle Månsson wrote:

Andrei Alexandrescu wrote:
Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not 
considered very special by the compiler.


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases 
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making 
opSlice() a universal method of getting the "all" iterator, a default 
that every container must implement.


For AAs, there would be an "iterate keys" and an "iterate values" 
property or function. What should they be called?



Thanks,

Andrei

aa.each, aa.keys and aa.values seem good names?


The latter two would break existing definitions of keys and values.


Also, foreach with a single variable should default to keys, in my opinion.


That is debatable as it would make the same code do different things for 
e.g. vectors and sparse vectors.



Andrei


Re: associative arrays: iteration is finally here

2009-10-28 Thread Pelle Månsson

Andrei Alexandrescu wrote:
Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not considered 
very special by the compiler.


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases 
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making 
opSlice() a universal method of getting the "all" iterator, a default 
that every container must implement.


For AAs, there would be an "iterate keys" and an "iterate values" property 
or function. What should they be called?



Thanks,

Andrei

aa.each, aa.keys and aa.values seem good names?

Also, foreach with a single variable should default to keys, in my opinion.


What is the air speed velocity of an unladen swallow?

2009-10-28 Thread Justin Johansson
Just stumbled across this LtU resource, July 2009, and feel that the D community 
might learn something from it (sorry if this is olde news) ...

Unladen Swallow: LLVM based Python compiler
http://lambda-the-ultimate.org/node/3491

and from there:

unladen-swallow: A faster implementation of Python 
http://code.google.com/p/unladen-swallow/

Looking further at the RelevantPapers page these folks are citing, it looks
like they might actually be quite a tuned-in bunch:

http://code.google.com/p/unladen-swallow/wiki/RelevantPapers

And their project plan makes them look quite serious too.  For example,

http://code.google.com/p/unladen-swallow/wiki/GarbageCollector

I must confess that I've largely ignored Python in the past but there certainly
seems to be some interesting reading here and I suspect some reasons for
D people to become a little more introspective.

Cheers
Justin Johansson



associative arrays: iteration is finally here

2009-10-28 Thread Andrei Alexandrescu
Walter has magically converted his work on T[new] into work on making 
associative arrays true templates defined in druntime and not considered 
very special by the compiler.


This is very exciting because it opens up or simplifies a number of 
possibilities. One is that of implementing true iteration. I actually 
managed to implement last night something that allows you to do:


int[int] aa = [ 1:1 ];
auto iter = aa.each;
writeln(iter.front.key);
writeln(iter.front.value);

Two other iterations are possible: by key and by value (in those cases 
iter.front just returns a key or a value).


One question is, what names should these bear? I am thinking of making 
opSlice() a universal method of getting the "all" iterator, a default 
that every container must implement.


For AAs, there would be an "iterate keys" and an "iterate values" property 
or function. What should they be called?



Thanks,

Andrei


Re: LLVM 2.6 Release!

2009-10-28 Thread Andrei Alexandrescu

Justin Johansson wrote:

Andrei Alexandrescu Wrote:


Justin Johansson wrote:

Denis Koroskin Wrote:

Amazon mentions March 15, 2010:
http://www.amazon.com/exec/obidos/ASIN/0321635361/modecdesi-20

Thanks; just had a look at that link.

Did Andrei give a preview of the table of contents somewhere?  I'd
certainly welcome a chapter (or at least a detailed honorable mention)
on LLVM though perhaps that would be outside of the scope of his book.
An overview of LLVM/ldc would be outside of the scope of the book, but I 
encourage you to write about it and I'll make sure to insert a pointer 
in the book.


Andrei


Sorry Andrei; I missed this reply by you until now.

My interest in LLVM is largely outside of D, but I think it is a really 
important technology for the future of D. (I have another, non-D LLVM 
project that I'm working on in slow time.) Currently I don't actually use 
ldc because of its Tango affinity. Nevertheless, it would be good if you 
could get someone from the ldc community to produce a salient one-pager on 
why LLVM matters for D, to reference in your book.

Justin



Sounds good, thanks. If anyone is up to the task, we'd all be grateful.

Andrei


Re: LLVM 2.6 Release!

2009-10-28 Thread Justin Johansson
Andrei Alexandrescu Wrote:

> Justin Johansson wrote:
> > Denis Koroskin Wrote:
> >> Amazon mentions March 15, 2010:
> >> http://www.amazon.com/exec/obidos/ASIN/0321635361/modecdesi-20
> > 
> > Thanks; just had a look at that link.
> > 
> > Did Andrei give a preview of the table of contents somewhere?  I'd
> > certainly welcome a chapter (or at least a detailed honorable mention)
> > on LLVM though perhaps that would be outside of the scope of his book.
> 
> An overview of LLVM/ldc would be outside of the scope of the book, but I 
> encourage you to write about it and I'll make sure to insert a pointer 
> in the book.
> 
> Andrei

Sorry Andrei; I missed this reply by you until now.

My interest in LLVM is largely outside of D, but I think it is a really 
important technology for the future of D. (I have another, non-D LLVM 
project that I'm working on in slow time.) Currently I don't actually use 
ldc because of its Tango affinity. Nevertheless, it would be good if you 
could get someone from the ldc community to produce a salient one-pager on 
why LLVM matters for D, to reference in your book.

Justin



Re: Shared Hell

2009-10-28 Thread dsimcha
== Quote from Walter Bright (newshou...@digitalmars.com)'s article
> Denis Koroskin wrote:
> > I've recently updated to DMD2.035 (from DMD2.031 because all the later
> > versions had issues with imports) and for the first time faced problems
> > with shared modifier.
> >
> > I don't need shared and all my globals are __gshared (they are globally
> > unique instances that don't need per-thread copies).
> I don't understand. Are you running multiple threads? Are those threads
> accessing globals?
> A function that accesses shared data has to put in fences. There's no
> way to have the same code deal with shared and unshared code.
> As an escape from the type system, you can always cast away the
> shared-ness. But I wonder about code that both uses global variables
> shared across threads that don't need synchronization?

I have at least one use case for __gshareds in multithreaded code.  I often use
__gshared variables to hold program parameters that are only set using getopt at
program startup and never modified after the program becomes multithreaded.

That said, although I use D2 regularly, I basically have ignored shared's
existence up to this point.  The semantics aren't fully implemented, so right 
now
you get all the bondage and discipline of it without any of the benefits.  As 
far
as the problem of synchronized methods automatically being shared, here's an 
easy
workaround until the rough edges of shared are worked out:

//Instead of this:
synchronized SomeType someMethod(Foo args) {
// Do stuff.
}

// Use this:
SomeType someMethod(Foo args) {
synchronized(this) {
// Do stuff.
}
}
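The shape of dsimcha's workaround, taking the lock explicitly inside the method body instead of marking the whole method, looks the same in most languages. A Python sketch with threading.Lock (the Counter class and its names are hypothetical, for illustration only):

```python
import threading

class Counter:
    def __init__(self):
        self._lock = threading.Lock()
        self._n = 0

    def increment(self):
        # Explicit synchronized(this)-style block inside the body,
        # rather than a method-level qualifier on increment itself.
        with self._lock:
            self._n += 1
            return self._n

c = Counter()
threads = [threading.Thread(target=c.increment) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert c._n == 8  # all increments applied under the lock
```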


Re: Mini proposal: rename float.min to float.min_normal

2009-10-28 Thread Jason House
Don Wrote:

> Don wrote:
> > This is another small imperfection we should get rid of.
> > The floating point types have a property called ".min", but unlike the 
> > integer ".min", it's not the minimum!
> 
> > This misnaming is bad because (a) it causes confusion; and (b) it 
> > interferes with generic code, requiring special cases.
> > We should rename this while we have the chance. I don't think we should 
> > depart too far from the C/C++ name, but anything other than ".min" will 
> > work. I propose:
> > 
> > real.min > real.min_normal
> > If there is no objection to this, I will create a patch. It's very simple.
> 
> Patch is in bugzilla 3446. It took about 2 minutes to do.

You're missing documentation updates.
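The behavior Don describes is inherited from C's FLT_MIN/DBL_MIN. Python exposes the same value as sys.float_info.min, which makes the misnomer easy to check: it is the smallest positive normalized double, not the minimum representable value.

```python
import sys

info = sys.float_info
assert info.min > 0           # "min" is positive: smallest *normalized* double
assert -info.max < info.min   # the actual minimum finite value is -info.max
assert info.min / 2 > 0       # subnormal values exist below info.min
```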


Re: Shared Hell

2009-10-28 Thread Jason House
Christopher Wright Wrote:

> Denis Koroskin wrote:
> > I've recently updated to DMD2.035 (from DMD2.031 because all the later 
> > versions had issues with imports) and for the first time faced problems 
> > with shared modifier.
> > 
> > I don't need shared and all my globals are __gshared (they are globally 
> > unique instances that don't need per-thread copies).
> > 
> > Yet some of methods of the class hierarchy (a root singleton class and 
> > everything which is accessible through it) are synchronized (well, you 
> > know why). That's where the problems begin.
> > 
> > Marking a method as synchronized automatically makes it shared (more or 
> > less obvious). And marking the method shared makes it unable to invoke 
> > with non-shared instance (and __gshared != shared), meaning that I'm 
> > unable to use my __gshared variables anymore, making this attribute 
> > useless for any serious safe programming.
> > 
> > So I started with replacing __gshared with shared and quickly understood 
> > how viral it is. Not only must you mark all the members shared (methods 
> > and fields) and instantiate classes with the shared attribute, you also 
> > have to duplicate all the methods to make them accessible with both 
> > shared and non-shared (thread-local) instances:
> 
> Why can't you use a non-shared method on a shared object? The compiler 
> could insert locking on the caller side.
> 
> Why can't you use a shared method on a non-shared object? The compiler 
> could, as an optimization, duplicate the method, minus the 
> synchronization. Or it could leave in the locking, which is expensive 
> but correct.

The caller would have to acquire locks for all the data accessed by the 
non-shared method and all non-shared methods it calls. Additionally, non-shared 
functions can access thread-local data. Neither of those issues is easily 
solved. Bartosz's scheme would solve the first one due to implied ownership.


Re: Shared Hell

2009-10-28 Thread Jason House
Denis Koroskin Wrote:

> I've recently updated to DMD2.035 (from DMD2.031 because all the later  
> versions had issues with imports) and for the first time faced problems  
> with shared modifier.

A quick trip over to bugzilla is all you need to see that shared is completely 
broken. Here's what I see as basic functionality that is broken: 3035, 3089, 
3090, 3091, 3102, 3349.

Half of those are bugs I created within an hour of trying to use shared for 
real. I also have a bug filed against druntime, but Bartosz's rewrite should 
address that issue. Interestingly, even though I reported that bug long ago, it 
still bit me. It was the source of a segfault that took me 3 months to track 
down (mostly due to an inability to use gdb until recently).

My code is littered with places that cast away shared since it was so utterly 
unusable when I tried. I'm still waiting for basic bugs to be closed before I 
try again. 


Re: Disallow catch without parameter ("LastCatch")

2009-10-28 Thread Denis Koroskin
On Wed, 28 Oct 2009 14:23:38 +0300, Christopher Wright  
 wrote:



Denis Koroskin wrote:

On Wed, 28 Oct 2009 00:21:47 +0300, grauzone  wrote:


BCS wrote:

Hello grauzone,


PS: I wonder, should the runtime really execute finally blocks if an
"Error" exception is thrown? (Errors are for runtime errors,  
Exception

for normal exceptions.) Isn't it dangerous to execute arbitrary user
code in presence of what is basically an internal error?

 If a thrown Error doesn't run finally blocks, you will have a very  
hard time arguing for catch working; and once those don't happen, it  
might as well kill -9 the process, so why even have it in the first  
place?


You still can use a "catch (Throwable t)" to catch all kinds of  
errors. I just think that finally (and scope(exit)) should be designed  
with high level code in mind. Code that allocates heap memory in a  
finally block is already broken (what if an out of memory error was  
thrown?).
I've seen code that does throw new OutOfMemoryException(__FILE__,  
__LINE__); when malloc (or some other allocation mechanism) has failed, and  
it always makes me smile.


Maybe the GC should reserve a small amount of space (~1KB) for its  
exceptions, when memory is tight.


The OutOfMemory exception is supposed to be thrown with a call to  
onOutOfMemoryError(), which throws OutOfMemoryError.classinfo.init (i.e. a  
global immutable instance of the Error).
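
The point of such a preallocated instance — never allocating at the moment of failure — can be sketched roughly like this (all names here are illustrative, not druntime's actual internals):

```d
// Sketch: a statically preallocated error object, so that throwing it
// needs no heap allocation at the very moment the heap is exhausted.
class MyOutOfMemoryError : Error
{
    this() { super("memory allocation failed"); }
}

// The single global instance, created once at program startup.
private __gshared MyOutOfMemoryError g_oomError;

shared static this()
{
    g_oomError = new MyOutOfMemoryError;
}

// Called by the allocator when it cannot satisfy a request.
void myOnOutOfMemoryError()
{
    throw g_oomError; // no allocation happens here
}
```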


Re: Shared Hell

2009-10-28 Thread Denis Koroskin
On Wed, 28 Oct 2009 13:17:43 +0300, Walter Bright  
 wrote:



Denis Koroskin wrote:
I've recently updated to DMD2.035 (from DMD2.031 because all the later  
versions had issues with imports) and for the first time faced problems  
with shared modifier.
 I don't need shared and all my globals are __gshared (they are  
globally unique instances that don't need per-thread copies).


I don't understand. Are you running multiple threads? Are those threads  
accessing globals?




Yes.

A function that accesses shared data has to put in fences. There's no  
way to have the same code deal with shared and unshared code.




That's frustrating. I'd like to use the same class for both cases.

But I wonder about code that uses global variables shared across  
threads without needing synchronization?


You missed the point. I do the synchronization myself and I'm fine with  
switching to shared (I do believe it is a nice concept). The reason I use  
__gshared is that shared objects were garbage-collected while still in use  
a few versions of DMD back, and I had no choice but to switch to __gshared.  
I hope it is fixed by now.


But I still can't make my data shared, since shared is transitive (viral).  
After a few hours or work I still can't even compile my code.


As an escape from the type system, you can always cast away the  
shared-ness.


That's the only way I have now. Casts from shared to unshared *everywhere*:

class BuildManager : BuildListener
{
    synchronized void build(shared Target target)
    {
        // ...

        // Creating a new shared Thread. Yes, a shared Thread, because
        // BuildManager is global.
        _buildingThread = new shared(Thread)(&_startBuild);

        // _buildingThread.start(); // Error: function core.thread.Thread.start()
        //                          // is not callable using argument types () shared
        (cast(Thread)_buildingThread).start(); // works, but ugly, and I don't
        // have a reason to hijack the type system in this case

        // ...
    }
}

Andrei would suggest a Shared!(T) template that would wrap an unshared  
type and make all methods shared. This would work, but requires full AST  
manipulation capabilities (it's clearly not enough to just mark all the  
members shared). What should we do until then?
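
For reference, the wrapper idea can be crudely approximated today with opDispatch, without any AST manipulation. This sketch (all names hypothetical) serializes every forwarded call through a mutex; it loses overload sets, ref returns, and any reference that escapes the lock:

```d
import core.sync.mutex;

// Crude sketch of a Shared(T)-style wrapper: every forwarded call takes
// a mutex, operates on the payload, and releases the lock on scope exit.
class SharedWrapper(T)
{
    private T payload;
    private Mutex mutex;

    this(T payload)
    {
        this.payload = payload;
        this.mutex = new Mutex;
    }

    // Forward any member call by name, serialized through the mutex.
    auto opDispatch(string member, Args...)(Args args)
    {
        mutex.lock();
        scope(exit) mutex.unlock();
        return mixin("payload." ~ member ~ "(args)");
    }
}
```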


Re: Disallow catch without parameter ("LastCatch")

2009-10-28 Thread Christopher Wright

Denis Koroskin wrote:

On Wed, 28 Oct 2009 00:21:47 +0300, grauzone  wrote:


BCS wrote:

Hello grauzone,


PS: I wonder, should the runtime really execute finally blocks if an
"Error" exception is thrown? (Errors are for runtime errors, Exception
for normal exceptions.) Isn't it dangerous to execute arbitrary user
code in presence of what is basically an internal error?

 If a thrown Error doesn't run finally blocks, you will have a very 
hard time arguing for catch working; and once those don't happen, it 
might as well kill -9 the process, so why even have it in the first 
place?


You still can use a "catch (Throwable t)" to catch all kinds of 
errors. I just think that finally (and scope(exit)) should be designed 
with high level code in mind. Code that allocates heap memory in a 
finally block is already broken (what if an out of memory error was 
thrown?).


I've seen code that does throw new OutOfMemoryException(__FILE__, 
__LINE__); when malloc (or some other allocation mechanism) has failed, and 
it always makes me smile.


Maybe the GC should reserve a small amount of space (~1KB) for its 
exceptions, when memory is tight.


Re: Shared Hell

2009-10-28 Thread Christopher Wright

Walter Bright wrote:

Denis Koroskin wrote:
I've recently updated to DMD2.035 (from DMD2.031 because all the later 
versions had issues with imports) and for the first time faced 
problems with shared modifier.


I don't need shared and all my globals are __gshared (they are 
globally unique instances that don't need per-thread copies).


I don't understand. Are you running multiple threads? Are those threads 
accessing globals?


A function that accesses shared data has to put in fences. There's no 
way to have the same code deal with shared and unshared code.


Acquiring a lock on a non-shared instance is safe, just an unnecessary 
expense. I would have looked into optimizing this expense away rather 
than punting the problem to the programmer.


As an escape from the type system, you can always cast away the 
shared-ness. But I wonder about code that uses global variables shared 
across threads without needing synchronization?


Maybe the methods are mostly inherently threadsafe. Only a small portion 
requires locking, so it's more efficient to handle it manually.


Re: Shared Hell

2009-10-28 Thread Christopher Wright

Denis Koroskin wrote:
I've recently updated to DMD2.035 (from DMD2.031 because all the later 
versions had issues with imports) and for the first time faced problems 
with shared modifier.


I don't need shared and all my globals are __gshared (they are globally 
unique instances that don't need per-thread copies).


Yet some of the methods of the class hierarchy (a root singleton class and 
everything which is accessible through it) are synchronized (well, you 
know why). That's where the problems begin.


Marking a method as synchronized automatically makes it shared (more or 
less obvious). And marking the method shared makes it impossible to invoke 
on a non-shared instance (and __gshared != shared), meaning that I'm 
unable to use my __gshared variables anymore, making this attribute 
useless for any serious safe programming.


So I started with replacing __gshared with shared and quickly understood 
how viral it is. Not only must you mark all the members shared (methods 
and fields) and instantiate classes with the shared attribute, you also 
have to duplicate all the methods to make them accessible from both 
shared and non-shared (thread-local) instances:


Why can't you use a non-shared method on a shared object? The compiler 
could insert locking on the caller side.


Why can't you use a shared method on a non-shared object? The compiler 
could, as an optimization, duplicate the method, minus the 
synchronization. Or it could leave in the locking, which is expensive 
but correct.


Re: Shared Hell

2009-10-28 Thread Walter Bright

Denis Koroskin wrote:
I've recently updated to DMD2.035 (from DMD2.031 because all the later 
versions had issues with imports) and for the first time faced problems 
with shared modifier.


I don't need shared and all my globals are __gshared (they are globally 
unique instances that don't need per-thread copies).


I don't understand. Are you running multiple threads? Are those threads 
accessing globals?


A function that accesses shared data has to put in fences. There's no 
way to have the same code deal with shared and unshared code.


As an escape from the type system, you can always cast away the 
shared-ness. But I wonder about code that uses global variables shared 
across threads without needing synchronization?


Re: Mini proposal: rename float.min to float.min_normal

2009-10-28 Thread Don

Don wrote:

This is another small imperfection we should get rid of.
The floating point types have a property called ".min", but unlike the 
integer ".min", it's not the minimum!


This misnaming is bad because (a) it causes confusion; and (b) it 
interferes with generic code, requiring special cases.
We should rename this while we have the chance. I don't think we should 
depart too far from the C/C++ name, but anything other than ".min" will 
work. I propose:


real.min -> real.min_normal
If there is no objection to this, I will create a patch. It's very simple.


Patch is in bugzilla 3446. It took about 2 minutes to do.
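
A tiny example of the generic-code trap: `.min` means "lowest representable value" for integers but "smallest positive normalized value" for floating point, so naive generic code silently gets the wrong answer:

```d
import std.stdio;

// A naive "lowest representable value" template, using .min.
T lowest(T)()
{
    return T.min; // correct for integers, wrong for floating point!
}

void main()
{
    writeln(lowest!int());   // -2147483648: really the minimum
    writeln(lowest!float()); // a tiny *positive* number, not -float.max
}
```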


Re: Disallow catch without parameter ("LastCatch")

2009-10-28 Thread Max Samukha
On Wed, 28 Oct 2009 02:12:12 +0300, "Denis Koroskin"
<2kor...@gmail.com> wrote:

>On Wed, 28 Oct 2009 00:21:47 +0300, grauzone  wrote:
>
>> BCS wrote:
>>> Hello grauzone,
>>>
>>>> PS: I wonder, should the runtime really execute finally blocks if an
>>>> "Error" exception is thrown? (Errors are for runtime errors, Exception
>>>> for normal exceptions.) Isn't it dangerous to execute arbitrary user
>>>> code in presence of what is basically an internal error?

>>>  If a thrown Error doesn't run finally blocks, you will have a very hard  
>>> time arguing for catch working; and once those don't happen, it might as  
>>> well kill -9 the process, so why even have it in the first place?
>>
>> You still can use a "catch (Throwable t)" to catch all kinds of errors.  
>> I just think that finally (and scope(exit)) should be designed with high  
>> level code in mind. Code that allocates heap memory in a finally block  
>> is already broken (what if an out of memory error was thrown?).
>
>I've seen code that does throw new OutOfMemoryException(__FILE__,  
>__LINE__); when malloc (or some other allocation mechanism) has failed, and it  
>always makes me smile.
>
>QtD did just that last time I checked.

The "or other allocation mechanism" part is important. If an
allocation on GC heap has failed, it doesn't make sense to allocate
the exception object there. But in the case of malloc, the program may
still be able to allocate the exception object and try to recover if
it makes sense. So I don't think the code is all that amusing.
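
In other words, something like the following is perfectly reasonable when the failing allocator is malloc rather than the GC itself (a sketch, not any particular library's code):

```d
import core.stdc.stdlib : malloc;

// Sketch: malloc failing does not imply the GC heap is exhausted, so
// allocating the exception object on the GC heap can still succeed,
// and the caller gets a chance to catch it and recover.
void* allocate(size_t size, string file = __FILE__, size_t line = __LINE__)
{
    void* p = malloc(size);
    if (p is null)
        throw new Exception("malloc failed", file, line);
    return p;
}
```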


Re: Shared Hell

2009-10-28 Thread #ponce
Denis Koroskin Wrote:

> I've recently updated to DMD2.035 (from DMD2.031 because all the later  
> versions had issues with imports) and for the first time faced problems  
> with shared modifier.
> 
> I don't need shared and all my globals are __gshared (they are globally  
> unique instances that don't need per-thread copies).
> 
> Yet some of the methods of the class hierarchy (a root singleton class and  
> everything which is accessible through it) are synchronized (well, you  
> know why). That's where the problems begin.
> 
> Marking a method as synchronized automatically makes it shared (more or  
> less obvious). And marking the method shared makes it impossible to invoke  
> on a non-shared instance (and __gshared != shared), meaning that I'm  
> unable to use my __gshared variables anymore, making this attribute  
> useless for any serious safe programming.
> 
> So I started with replacing __gshared with shared and quickly understood  
> how viral it is. Not only must you mark all the members shared (methods  
> and fields) and instantiate classes with the shared attribute, you also  
> have to duplicate all the methods to make them accessible from both  
> shared and non-shared (thread-local) instances:
> 
> class Array(T)
> {
>  const(T) opIndex(uint index) const
>  {
>  return data[index];
>  }
> 
>  T opIndex(uint index)
>  {
>  return data[index];
>  }
> 
>  const(T) opIndex(uint index) shared const
>  {
>  return data[index];
>  }
> 
>  shared(T) opIndex(uint index) shared
>  {
>  return data[index];
>  }
> 
>  private T[] data;
> }
> 
> And that's just opIndex. Ooops...
> 
> But not only that: every interface now has to specify the same method  
> twice, too:
> 
> interface Target
> {
>  bool build();
>  bool build() shared;
>  void clean();
>  void clean() shared;
>  bool ready();
>  bool ready() shared;
>  void setBuildListener(BuildListener buildListener);
>  void setBuildListener(shared BuildListener buildListener) shared;
> }
> 
> That's a bit frustrating. Most importantly, I don't even need shared  
> (__gshared was more than enough for me), yet I'm forced to use it.
> 
> Oh, I can't use any of the druntime/Phobos classes (e.g. create an  
> instance of shared(Thread)), because none of them are shared-aware.
> 
> I think the design needs to be given a second thought before the plane  
> takes off because I'm afraid it may painfully crash.

Wow.
I certainly won't switch to D2 if every getter must be written 4 times (and 4 
times more with immutable(T) ?).

I don't even understand the fundamental difference between __gshared 
and shared: is it only transitivity?




Re: What Does Haskell Have to Do with C++?

2009-10-28 Thread Don

Jeremie Pelletier wrote:
http://bartoszmilewski.wordpress.com/2009/10/21/what-does-haskell-have-to-do-with-c/ 



Bartosz's second part of 'Template Metaprogramming Made Easy (Huh?)', 
it's quite a read :)


Yes, it is excellent. Two comments:
(1) Bartosz's D examples make me seriously question 'static foreach', 
which is scheduled for implementation (bugzilla 3377).
If implemented, it will be a source of frustration, since it will not be 
very usable inside templates. The ability to exit from a 'static 
foreach' is something which is possible with a 'return'-style syntax, 
but is not possible with the 'eponymous template hack'.


(2) It seems pretty clear that we need to allow the eponymous trick to 
continue to work when more than one template member is present. I think 
everyone who's ever attempted template metaprogramming in D has proposed it!
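
For readers who haven't met it: the eponymous trick lets a template be used as a value by declaring a member that shares the template's name — and under the current rule that member must be the only declaration, which is what point (2) asks to relax:

```d
// The eponymous template trick: the member named "factorial" lets the
// template itself be used as a value.
template factorial(int n)
{
    static if (n <= 1)
        enum factorial = 1;
    else
        enum factorial = n * factorial!(n - 1);
}

static assert(factorial!(5) == 120);

// Add any helper declaration next to the eponymous member, however,
// and the shorthand stops working under the current rule.
```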


Re: GC Sentinel

2009-10-28 Thread bearophile
Leandro Lucarella:

> I think that's used to check for memory corruption, by storing a known
> pattern before and after the actual object. Then, each time you can, you
> check that the unused memory block is intact (meaning nobody wrote to an
> invalid memory area).

Such things can be quite useful. Do you need to compile Phobos again to do 
that? If that's true, then handier (compile-time?) solutions can be found.
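
The scheme Leandro describes can be sketched roughly like this (illustrative only, not the actual druntime implementation):

```d
import core.stdc.stdlib : malloc;

enum uint SENTINEL_PRE  = 0xF4F4_F4F4; // guard word before the block
enum uint SENTINEL_POST = 0xF5F5_F5F5; // guard word after the block

// Allocate `size` user bytes with a guard word on each side.
void* sentinelAlloc(size_t size)
{
    auto raw = cast(ubyte*) malloc(size + 2 * uint.sizeof);
    *cast(uint*) raw = SENTINEL_PRE;
    *cast(uint*) (raw + uint.sizeof + size) = SENTINEL_POST;
    return raw + uint.sizeof;
}

// Re-check the guards; a mismatch means some code wrote out of bounds.
bool sentinelIntact(void* user, size_t size)
{
    auto raw = cast(ubyte*) user - uint.sizeof;
    return *cast(uint*) raw == SENTINEL_PRE
        && *cast(uint*) (cast(ubyte*) user + size) == SENTINEL_POST;
}
```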

Bye,
bearophile