Re: static arrays becoming value types

2009-10-20 Thread language_fan
Tue, 20 Oct 2009 16:25:05 -0400, Robert Jacques thusly wrote:

> On Tue, 20 Oct 2009 15:19:15 -0400, language_fan 
> wrote:
> 
>> Real tuple types do not have a special type tag which gets injected
>> implicitly with structs. So every time you try to do something
>> lightweight by emulating tuples, you need to refer to the global Tuple
>> type or bang your head to the wall.
> 
> Or use a templated opAssign mixin to allow two disparate types to be
> assigned to each other.

Wow, you need templates to implement == for built-in value types, nice...

> Besides, I think you're comparing apples to oranges. In the SOL example,
> you use the same declaration for all types. Shouldn't the SOL example
> be:
> 
>val a = (1,2) : [Int,Int]
>val b = (1,2) : [Int,Int]
>val c = (2,3) : MyCustomTupleType[Int,Int]
> 
> which would probably generate:
>   assert(a == b); // ok
>   assert(a != c); // Error: incompatible types for ((a) != (b))

If you have built-in tuple literals, there is no way you can build a 
MyCustomTupleType without resorting to other language features. There are 
no apples and oranges, because both are seen as (Int,Int) by the 
equivalence checker. Do you understand how equivalence works in a 
structural typing system (http://en.wikipedia.org/wiki/Structural_type_system) 
vs. nominal typing? In structural equivalence there are no names attached 
to the types (well, there might be, but those are omitted in the 
comparison); only their internal structure matters.

Why would anyone want to create two incompatible tuples by default? You 
would still have 'typedef' and 'struct' for implementing just that.


Re: static arrays becoming value types

2009-10-20 Thread Yigal Chripun

On 21/10/2009 05:48, Robert Jacques wrote:

On Tue, 20 Oct 2009 23:30:48 -0400, Leandro Lucarella 
wrote:

Robert Jacques, el 20 de octubre a las 21:06 me escribiste:

Now, if SOL allowed tuples to do things you can't do today in D,
like assign a tuple to a struct with the same signature, then this
might be a point. But that wasn't the example given.


Yes, that's another thing that can be done without real tuple support in
the language. Anyway, I guess I was a little exaggerated with '*way far*
from ideal', but I'm convinced there is plenty of room for improvements.
=)



Would you happen to know of a language which does tuples well already?


Pick any functional language; my favorite is ML.


Re: Condition Mutexes

2009-10-20 Thread Graham St Jack
On Wed, 21 Oct 2009 00:56:13 +, dsimcha wrote:

> I'm messing around w/ core.sync.  Does anyone know what I'm doing wrong
> in this program?  It just hangs. If I could figure out ()##$) condition
> mutexes (right now, I'm using busy spinning), I might have a decent
> implementation of parallelForeach over ranges.
> 
> import core.sync.mutex, core.sync.condition, core.thread, std.stdio;
> 
> __gshared Condition condition;
> 
> void waitThenPrint() {
>     condition.wait();
>     writeln("FOO");
> }
> 
> void main() {
>     condition = new Condition( new Mutex() );
>     auto T = new Thread(&waitThenPrint);
>     T.start();
>     condition.notify();  // Never wakes up and prints FOO.
> }

There are a few problems. The most serious is that you have to lock the 
mutex before calling condition.wait(). The underlying operating-system 
call atomically releases the mutex while the thread blocks, and 
reacquires it before wait() returns.

This means that the mutex needs to be stored somewhere both threads can 
reach it (myMutex below), and waitThenPrint() should be more like this:

void waitThenPrint() {
  synchronized(myMutex) {
condition.wait();
  }
  writeln("FOO");
}

While it isn't strictly necessary in this case, you should also:

Put the condition.notify() call into a synchronized(myMutex) block.

When some state variables are involved in the condition, you should do 
something like this:

void waitThenPrint() {
  synchronized(myMutex) {
while (state_not_right()) {
  condition.wait();
}
  }
  writeln("FOO");
}

and

synchronized(myMutex) {
  set_state_to_right();
  condition.notify();
}
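Putting those pieces together, a complete corrected version of the original program could look like the sketch below. The `proceed` state flag is an illustrative name of mine, not something from the original post; it is the "state variable" that guards against the notify firing before the other thread starts waiting.

```d
import core.sync.mutex, core.sync.condition, core.thread;
import std.stdio;

__gshared Mutex myMutex;
__gshared Condition condition;
__gshared bool proceed;  // state variable: guards against a lost notify

void waitThenPrint() {
    synchronized (myMutex) {
        while (!proceed)       // re-check: wait() can also wake spuriously
            condition.wait();  // atomically releases myMutex while blocked
    }
    writeln("FOO");
}

void main() {
    myMutex = new Mutex();
    condition = new Condition(myMutex);
    auto t = new Thread(&waitThenPrint);
    t.start();
    synchronized (myMutex) {   // lock before changing state and notifying
        proceed = true;
        condition.notify();
    }
    t.join();                  // now reliably prints FOO exactly once
}
```

Because `proceed` is set under the mutex before `notify()`, the program no longer hangs even if `main` reaches the notify before the worker thread reaches `wait()`.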


Re: static arrays becoming value types

2009-10-20 Thread Robert Jacques
On Tue, 20 Oct 2009 23:30:48 -0400, Leandro Lucarella   
wrote:

Robert Jacques, el 20 de octubre a las 21:06 me escribiste:

Now, if SOL allowed tuples to do things you can't do today in D,
like assign a tuple to a struct with the same signature, then this
might be a point. But that wasn't the example given.


Yes, that's another thing that can be done without real tuple support in
the language. Anyway, I guess I was a little exaggerated with '*way far*
from ideal', but I'm convinced there is plenty of room for improvements.
=)



Would you happen to know of a language which does tuples well already?


Re: static arrays becoming value types

2009-10-20 Thread Robert Jacques
On Tue, 20 Oct 2009 22:45:53 -0400, Andrei Alexandrescu  
 wrote:

Robert Jacques wrote:
On Tue, 20 Oct 2009 20:38:33 -0400, Leandro Lucarella  
 wrote:

Yes, D support for tuples is way far from ideal.
 How so? I think this is merely the difference between a library type  
in a flexible language and a built-in type in an inflexible language. I  
mean the example was essentially:

In D:
 Apple a
 Apple b
 Orange c
  assert(a != c); // Error: incompatible types Apple and Orange
 In SOL:
 Apple a
 Apple b
 Apple c
  assert(a != c); // ok, both a and c are apples.
 Now, if SOL allowed tuples to do things you can't do today in D, like  
assign a tuple to a struct with the same signature, then this might be  
a point. But that wasn't the example given.


I also don't understand all the argument about structural vs. name  
equivalence.


Andrei


The original thread stated that D's value tuples (as opposed to type 
tuples) were far from ideal, because it's not a built-in type. So two 
people could make value-tuple struct types that were incompatible with 
each other. (One counter to this is that it's simple to define a templated 
opAssign method that works correctly. Another counter is to relate this 
problem to typedefs.)
My issue was with the example comparing D to some-other-language (SOL). 
Only the built-in value-tuple type in SOL was shown, never a value tuple 
interfacing with something that wasn't the built-in type. That either 
indicates SOL isn't flexible/expressive enough to have library 
value-tuple types, or it simply fails to demonstrate the problems with 
D's value-tuple solution.


Re: static arrays becoming value types

2009-10-20 Thread Leandro Lucarella
Robert Jacques, el 20 de octubre a las 21:06 me escribiste:
> >>Real tuple types do not have a special type tag which gets injected
> >>implicitly with structs. So every time you try to do something
> >>lightweight by emulating tuples, you need to refer to the global Tuple
> >>type or bang your head to the wall.
> >
> >Yes, D support for tuples is way far from ideal.
> 
> How so? I think this is merely the difference between a library type
> in a flexible language and a built-in type in an inflexible
> language. I mean the example was essentially:
> In D:
>  Apple a
>  Apple b
>  Orange c
> 
>  assert(a != c); // Error: incompatible types Apple and Orange
> 
> In SOL:
>  Apple a
>  Apple b
>  Apple c
> 
>  assert(a != c); // ok, both a and c are apples.

I wasn't referring to this particular example. Even though I agree this is
not a big issue, it is much more difficult to end up comparing Apples to
Oranges if the language has support for tuple literals (as in the
example). In D I think you might find yourself in this situation more
often, but still rarely.

I think tuple literals are an important thing to encourage people to use
tuples, especially when you want to support a functional programming style.

> Now, if SOL allowed tuples to do things you can't do today in D,
> like assign a tuple to a struct with the same signature, then this
> might be a point. But that wasn't the example given.

Yes, that's another thing that can be done without real tuple support in
the language. Anyway, I guess I was a little exaggerated with '*way far*
from ideal', but I'm convinced there is plenty of room for improvements.
=)

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
FINALMENTE EL CABALLITO FABIAN VA A PASAR UNA BUENA NAVIDAD
-- Crónica TV


Re: static arrays becoming value types

2009-10-20 Thread Andrei Alexandrescu

Robert Jacques wrote:
On Tue, 20 Oct 2009 20:38:33 -0400, Leandro Lucarella  
wrote:

Yes, D support for tuples is way far from ideal.


How so? I think this is merely the difference between a library type in 
a flexible language and a built-in type in an inflexible language. I 
mean the example was essentially:

In D:
 Apple a
 Apple b
 Orange c

 assert(a != c); // Error: incompatible types Apple and Orange

In SOL:
 Apple a
 Apple b
 Apple c

 assert(a != c); // ok, both a and c are apples.

Now, if SOL allowed tuples to do things you can't do today in D, like 
assign a tuple to a struct with the same signature, then this might be a 
point. But that wasn't the example given.


I also don't understand all the argument about structural vs. name 
equivalence.


Andrei


Re: dmd support for IDEs + network GUI

2009-10-20 Thread Adam D. Ruppe
On 10/20/09, Nick B  wrote:
 > Re your use of a binary protocol.
 >
 > Perhaps instead of re-inventing the wheel,
 
 Eh, my code is already written and works. One of the advantages to my
 code generator reading a C-like syntax is that I might be able to
 fully automate porting some existing APIs down the wire - I'm
 particularly looking at OpenGL, or at least a subset of it, to work
 with ease. Another is that porting my generator to a new language is
 easy - I already know the code and it is fairly simple anyway.
 
 Anyway, quickly skimming through the google page, their system isn't
 bad (Coincidentally, it and I encode unsigned ints and strings in
 exactly the same way! Cool.), but I don't think there's much to gain
 by me switching to it. Though, their signed varint algorithm is pretty
 elegant; I might have to use that.
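For reference, the "elegant" signed-varint trick from protocol buffers is ZigZag encoding: it interleaves negative and non-negative values so that numbers of small magnitude map to small unsigned values and therefore stay short on the wire. A sketch in D (function names are mine, not from either protocol):

```d
// ZigZag encoding: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
uint zigzagEncode(int n) {
    // the arithmetic right shift spreads the sign bit over all 32 bits
    return (cast(uint)n << 1) ^ cast(uint)(n >> 31);
}

int zigzagDecode(uint z) {
    return cast(int)(z >> 1) ^ -cast(int)(z & 1);
}

unittest {
    assert(zigzagEncode(0) == 0);
    assert(zigzagEncode(-1) == 1);
    assert(zigzagEncode(1) == 2);
    assert(zigzagDecode(zigzagEncode(-123456)) == -123456);
}
```

The encoded value is then written as an ordinary unsigned varint, which is why small negatives no longer cost the maximum number of bytes.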
 
 And it reminds me that I didn't even consider optional arguments on
 functions. Trivial to implement though, even compatible with my
 current protocol: if message length is shorter than what you expect,
 use default values for the rest of the arguments. This means they'd
 have to be at the end, but you expect that from C like functions
 anyway.
 
 > It is also worthwhile reading the section called "A bit of history".
 
 I thought about future compatibility, which is why my protocol has a
 length field on every message. If you read the length and the function
 number and find it isn't something you know, you can simply skip past
 the whole message and carry on.
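The skip-on-unknown idea can be sketched as a dispatcher loop. This is hypothetical code of mine, not Adam's actual implementation; in particular the field widths (4-byte length, 4-byte function number) are assumptions:

```d
// Reads one 32-bit little-endian value from a buffer slice.
uint readU32(const(ubyte)[] b) {
    return cast(uint)(b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24));
}

void dispatch(const(ubyte)[] buf) {
    while (buf.length >= 8) {
        uint len  = readU32(buf[0 .. 4]);  // argument bytes that follow (assumed)
        uint func = readU32(buf[4 .. 8]);  // function number
        if (8 + len > buf.length) break;   // truncated message: stop for now
        auto args = buf[8 .. 8 + len];
        switch (func) {
            case 1:
                // known function: decode args; if args is shorter than
                // expected, use defaults for the trailing arguments
                break;
            default:
                break;  // unknown function number: just skip it
        }
        buf = buf[8 + len .. $];  // the length field lets us carry on regardless
    }
}

void main() {
    // one unknown message (func = 99, 2 bytes of args): skipped without error
    ubyte[] wire = [2, 0, 0, 0, 99, 0, 0, 0, 0xAA, 0xBB];
    dispatch(wire);
}
```

The essential property is the one described above: a receiver never needs to understand a message to know where the next one starts.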
 
 The ugliness might be that future functions deprecate old functions...
 if this took off and was widely used, it might end up looking like
 Win32 in 10 years (CreateWindow() nope, CreateWindowEx()!), but I'm ok
 with that.
 
 
 > One more point. What, if any, library do you plan to use?
 
 The D side is all custom code (I've written various libraries and
 helper tools over the years for myself - the big one used here is a
 network manager thing that handles incoming connections and output
 buffering) and Phobos - nothing third party there. I'm using D2, but
 there's nothing preventing it from being backported to D1 at some
 point. I actually want to do a DMDScript port that speaks it too, but
 that's way down the line.
 
 For the viewer, my current implementation is C++ with Qt. I'd like to
 actually do various viewers, including ones with less fat dependencies
 (asking users to download Qt just to run it is a bit heavy), but for
 now, I just wanted something I could start using immediately without
 too much trouble for me. Qt on C++ works pretty well across platforms,
 and I already know how to use it, so it was an easy choice for early
 stages.
 
 >
 > cheers
 > Nick B
 >
 
 Thanks!
 -Adam


Re: static arrays becoming value types

2009-10-20 Thread Robert Jacques
On Tue, 20 Oct 2009 20:38:33 -0400, Leandro Lucarella   
wrote:

language_fan, el 20 de octubre a las 19:19 me escribiste:

>> One nasty thing about D's structs is that they don't have structural
>> equivalence relation unlike tuples. So you need to use the same
>> container struct type to get the same semantics. To achieve that you
>> would need some kind of STuple on standard library level or other kinds
>> of hacks.
>>
>> What I find unfortunate in D is that your abstractions come in two
>> sizes - either you use the modest tiny construct that does not scale
>> elegantly or the enormous hammer to crush things down theatrically.
>
> I don't understand very well what you are saying anyway...

Because of the unnecessary nominal typing in D's tuple emulation,
redefinitions of Tuples do not have implicit equivalence relation:

  struct Tuple(T...) {
      T t;
  }
  struct Tuple2(T...) {
      T t;
  }

  void main() {
      Tuple!(int,int) a;
      Tuple!(int,int) b;
      Tuple2!(int,int) c;

      assert(a == b); // ok
      assert(a != c); // Error: incompatible types for ((a) != (b))
  }

In some other language:

  val a = (1,2) : [Int,Int]
  val b = (1,2) : [Int,Int]
  val c = (2,3) : [Int,Int]

  assert(a == b); // ok
  assert(a != c); // ok

Did you get it now?


Yes, thanks for the clarification.


Real tuple types do not have a special type tag which gets injected
implicitly with structs. So every time you try to do something
lightweight by emulating tuples, you need to refer to the global Tuple
type or bang your head to the wall.


Yes, D support for tuples is way far from ideal.


How so? I think this is merely the difference between a library type in a  
flexible language and a built-in type in an inflexible language. I mean  
the example was essentially:

In D:
 Apple a
 Apple b
 Orange c

 assert(a != c); // Error: incompatible types Apple and Orange

In SOL:
 Apple a
 Apple b
 Apple c

 assert(a != c); // ok, both a and c are apples.

Now, if SOL allowed tuples to do things you can't do today in D, like  
assign a tuple to a struct with the same signature, then this might be a  
point. But that wasn't the example given.


Now, the example was a good argument for making it easier and more natural  
to use the built-in tuple type. Adding syntactic sugar for tuples has been  
recommended before. I prefer using the slice syntax '..', as it would  
allow clean multi-dimensional slicing and mixed indexing and slicing, both  
of which are important to supporting arrays.


Re: stack frame optimization problem

2009-10-20 Thread downs
sprucely wrote:
> To try to be sure I had the correct syntax I tried the -S option of g++ along 
> with a switch for intel syntax to output the assembly. However the portion 
> corresponding to the inline assembly was still in ATT syntax.
> 
> For my resulting D executable I tried using hte, but it would abort after 
> mentioning something about a nonexistent htcfg file. I didn't find much info 
> after a cursory search. I gave up easily because I wasn't sure if I would be 
> able to make proper use of it. Maybe I should take an x86 assembly course.
> 
> Vladimir Panteleev Wrote:
> 
>> On Tue, 20 Oct 2009 18:45:50 +0300, sprucely  wrote:
>>
>>> This works with g++ and inline ATT assembly, but I have had no such luck  
>>> in D. I have many simple functions that need to be executed sequentially  
>>> and have identical stack frames. To avoid the overhead of setting up and  
>>> tearing down the stack frames I want to jmp from the body of one  
>>> function to the body of the next. A simplified example...
>>>
>>> extern(C) byte jumpHere;
>>>
>>> byte* jumpTo = &jumpHere;
>>>
>>> void f1()
>>> {
>>> asm
>>> {
>>> //jmp dword ptr jumpTo;
>>> mov EAX, jumpTo;
>>> jmp EAX;
>>> //jmp [EAX]
>>> }
>>> }
>>>
>>> void f2()
>>> {
>>> asm{jumpHere:;}
>>> }
>>>
>>> No matter what I try I get a segfault. My assembly skills are very  
>>> limited. I'm not using the naked keyword yet, because I want to get a  
>>> proof-of-concept working first. Anyone see anything wrong with this? Any  
>>> suggestions?
>> Just disassemble the resulting machine code and look at what's going on.
>>
>> -- 
>> Best regards,
>>   Vladimir  mailto:thecybersha...@gmail.com
> 

Try dropping an "int 3" before and after, then running it in gdb and using the 
"disassemble" and "info registers" commands.


Condition Mutexes

2009-10-20 Thread dsimcha
I'm messing around w/ core.sync.  Does anyone know what I'm doing wrong in
this program?  It just hangs. If I could figure out ()##$) condition mutexes
(right now, I'm using busy spinning), I might have a decent implementation of
parallelForeach over ranges.

import core.sync.mutex, core.sync.condition, core.thread, std.stdio;

__gshared Condition condition;

void waitThenPrint() {
    condition.wait();
    writeln("FOO");
}

void main() {
    condition = new Condition( new Mutex() );
    auto T = new Thread(&waitThenPrint);
    T.start();
    condition.notify();  // Never wakes up and prints FOO.
}


Re: static arrays becoming value types

2009-10-20 Thread Leandro Lucarella
language_fan, el 20 de octubre a las 19:19 me escribiste:
> >> One nasty thing about D's structs is that they don't have structural
> >> equivalence relation unlike tuples. So you need to use the same
> >> container struct type to get the same semantics. To achieve that you
> >> would need some kind of STuple on standard library level or other kinds
> >> of hacks.
> >> 
> >> What I find unfortunate in D is that your abstractions come in two
> >> sizes - either you use the modest tiny construct that does not scale
> >> elegantly or the enormous hammer to crush things down theatrically.
> > 
> > I don't understand very well what you are saying anyway...
> 
> Because of the unnecessary nominal typing in D's tuple emulation, 
> redefinitions of Tuples do not have implicit equivalence relation:
> 
>   struct Tuple(T...) {
> T t;
>   }
>   struct Tuple2(T...) {
> T t;
>   }
> 
>   void main() {
> Tuple!(int,int) a;
> Tuple!(int,int) b;
> Tuple2!(int,int) c;
> 
> assert(a == b); // ok
> assert(a != c); // Error: incompatible types for ((a) != (b))
>   }
> 
> In some other language:
> 
>   val a = (1,2) : [Int,Int]
>   val b = (1,2) : [Int,Int]
>   val c = (2,3) : [Int,Int]
> 
>   assert(a == b); // ok
>   assert(a != c); // ok
> 
> Did you get it now?

Yes, thanks for the clarification.

> Real tuple types do not have a special type tag which gets injected 
> implicitly with structs. So every time you try to do something 
> lightweight by emulating tuples, you need to refer to the global Tuple 
> type or bang your head to the wall.

Yes, D support for tuples is way far from ideal.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
If you want to be alone, just be alone
If you want to watch the sea, just watch the sea
But do it now, timing is the answer, do it now
Timing is the answer to success


Re: d3 ?

2009-10-20 Thread Stewart Gordon

Jason House wrote:

dolive Wrote:

Will there be a D3? What are the tasks? Will it be backward compatible 
with D2? What major changes?


My understanding is that there will be a significant gap between the 
finalization of D2 and the start of D3. Bartosz's ownership scheme may 
be part of D3.



It would be really good if this is going to happen.  It might even mean 
that D1 is going to be finished first.


What sparked this subject, anyway?

Stewart.


Re: LRU cache for ~=

2009-10-20 Thread Christopher Wright

Brad Roberts wrote:

On Mon, 19 Oct 2009, Walter Bright wrote:


Denis Koroskin wrote:

Safe as in SafeD (i.e. no memory corruption) :)

Right. The problems with other definitions of safe is they are too
ill-defined.


There's SafeD, which has a fairly formal definition.


But a fairly generic name, which confuses people repeatedly. I'm not the 
first to recommend that the name be changed. It does more harm than good.


Re: stack frame optimization problem

2009-10-20 Thread sprucely
To try to be sure I had the correct syntax I tried the -S option of g++ along 
with a switch for intel syntax to output the assembly. However the portion 
corresponding to the inline assembly was still in ATT syntax.

For my resulting D executable I tried using hte, but it would abort after 
mentioning something about a nonexistent htcfg file. I didn't find much info 
after a cursory search. I gave up easily because I wasn't sure if I would be 
able to make proper use of it. Maybe I should take an x86 assembly course.

Vladimir Panteleev Wrote:

> On Tue, 20 Oct 2009 18:45:50 +0300, sprucely  wrote:
> 
> > This works with g++ and inline ATT assembly, but I have had no such luck  
> > in D. I have many simple functions that need to be executed sequentially  
> > and have identical stack frames. To avoid the overhead of setting up and  
> > tearing down the stack frames I want to jmp from the body of one  
> > function to the body of the next. A simplified example...
> >
> > extern(C) byte jumpHere;
> >
> > byte* jumpTo = &jumpHere;
> >
> > void f1()
> > {
> > asm
> > {
> > //jmp dword ptr jumpTo;
> > mov EAX, jumpTo;
> > jmp EAX;
> > //jmp [EAX]
> > }
> > }
> >
> > void f2()
> > {
> > asm{jumpHere:;}
> > }
> >
> > No matter what I try I get a segfault. My assembly skills are very  
> > limited. I'm not using the naked keyword yet, because I want to get a  
> > proof-of-concept working first. Anyone see anything wrong with this? Any  
> > suggestions?
> 
> Just disassemble the resulting machine code and look at what's going on.
> 
> -- 
> Best regards,
>   Vladimir  mailto:thecybersha...@gmail.com



Re: Communicating between in and out contracts

2009-10-20 Thread Jérôme M. Berger

Steven Schveighoffer wrote:
On Tue, 20 Oct 2009 13:13:07 -0400, Michel Fortin 
 wrote:


So what we need is semi-pure functions that can see all the globals as 
const data, or in other terms having no side effect but which can be 
affected by their environment. Another function qualifier, isn't it 
great! :-)


Yeah, I meant which functions to allow among the functions types we 
already have.  To introduce another function type *just to allow 
contracts to call them* is insanity.


	Note that there already is a gcc extension for this kind of 
function in C/C++: __attribute__((const)).


Jerome
--
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: Proposed D2 Feature: => for anonymous delegates

2009-10-20 Thread Andrei Alexandrescu

Pelle Månsson wrote:

Jason House wrote:

Andrei Alexandrescu Wrote:


Jason House wrote:

Am I the only one that has trouble remembering how to write an inline
anonymous delegate when calling a function? At a minimum, both Scala
and C# use (args) => { body; } syntax. Can we please sneak it into
D2?

We have (args) { body; }

Andrei


Somehow, I missed that. What kind of type inference, if any, is 
allowed? Scala and C# allow omitting the type. Lately I'm doing a lot 
of (x) => { return x.foo(7); } in C# and it's nice to omit the 
amazingly long type for x. The IDE even knows the type of x for 
intellisense... I think Scala would allow x => foo(7), or maybe even 
=> _.foo(7) or even _.foo(7). I haven't written much Scala, so I may 
be way off...


Recent experiments by myself indicate you cannot omit the type and you 
cannot use auto for the type, so you actually need to type your 
VeryLongClassName!(With, Templates) if you need it.


I sort of miss automatic type deduction.


Actually, full type deduction should be in force, but it is known that 
the feature has more than a few bugs. Feel free to report in bugzilla 
any instance in which type deduction does not work.


Andrei
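For readers following along, the (args) { body; } form mentioned above looks like this in D2. This is a minimal sketch of mine; the literal captures a local variable, which makes it a delegate rather than a function pointer:

```d
import std.stdio;

int apply(int x, int delegate(int) dg) {
    return dg(x);
}

void main() {
    int offset = 1;
    // anonymous delegate literal using the (args) { body; } syntax;
    // capturing 'offset' forces it to be a delegate
    auto result = apply(3, (int x) { return x + offset; });
    writeln(result);
    assert(result == 4);
}
```

As the thread notes, the parameter type (`int x`) currently has to be spelled out rather than deduced.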


Re: dmd support for IDEs + network GUI

2009-10-20 Thread Nick B

Nick Sabalausky wrote:
"Adam D. Ruppe"  wrote in message 
news:mailman.208.1255923114.20261.digitalmar...@puremagic.com...

On Mon, Oct 12, 2009 at 09:06:38PM -0400, Nick Sabalausky wrote:
Excellent! Sounds exactly like what I had in mind. I'll definitely want to
keep an eye on this. Any webpage or svn or anything yet?

I wrote up some of a webpage for it over the weekend:

http://arsdnet.net/dws/

I haven't had a chance to clean up my code yet, so it isn't posted, but
there's some overview text there, including some implementation details
that I haven't discussed yet here, but the document still has a long way to go.

But there it is, becoming more organized than anything I've written on it
before.



Adam. What you've written at your web site is a very interesting read.

Re your use of a binary protocol.

Perhaps instead of re-inventing the wheel, you may want to look at what
Google has done with the design of their Google Protocol Buffers, which
also implements a very fast binary protocol. See here for an overview:


http://code.google.com/apis/protocolbuffers/docs/overview.html

It is also worthwhile reading the section called "A bit of history".

Note that there is also a D implementation.

http://256.makerslocal.org/wiki/index.php/ProtocolBuffer

I don't know how current this is, though.

One more point. What, if any, library do you plan to use?

cheers
Nick B


Re: stack frame optimization problem

2009-10-20 Thread sprucely
bearophile,

DMD 1.0.43 I think. But I'll have to check to make sure, because I was 
experimenting with LDC at one point.

So does this mean there's nothing inherently wrong with my snippet?

My C++ code was also modifying the this pointer as it jumped from a member 
function of one class to a member function of another. But I decided not to 
even try that until I got the jumps working.

Thanks,
sprucely


bearophile Wrote:

> sprucely:
> 
> >This works with g++ and inline ATT assembly, but I have had no such luck in 
> >D.<
> 
> What compiler are you using? I think LDC isn't yet able to do this (it's an 
> LLVM limit that may get lifted in the future).
> 
> Bye,
> bearophile



Re: Proposed D2 Feature: => for anonymous delegates

2009-10-20 Thread Pelle Månsson

Jason House wrote:

Andrei Alexandrescu Wrote:


Jason House wrote:

Am I the only one that has trouble remembering how to write an inline
anonymous delegate when calling a function? At a minimum, both Scala
and C# use (args) => { body; } syntax. Can we please sneak it into
D2?

We have (args) { body; }

Andrei


Somehow, I missed that. What kind of type inference, if any, is allowed? Scala and C# 
allow omitting the type. Lately I'm doing a lot of (x) => { return x.foo(7); } in C# 
and it's nice to omit the amazingly long type for x. The IDE even knows the type of x 
for intellisense... I think Scala would allow x => foo(7), or maybe even => 
_.foo(7) or even _.foo(7). I haven't written much Scala, so I may be way off...


Recent experiments by myself indicate you cannot omit the type and you 
cannot use auto for the type, so you actually need to type your 
VeryLongClassName!(With, Templates) if you need it.


I sort of miss automatic type deduction.


Re: stack frame optimization problem

2009-10-20 Thread Vladimir Panteleev

On Tue, 20 Oct 2009 18:45:50 +0300, sprucely  wrote:

This works with g++ and inline ATT assembly, but I have had no such luck  
in D. I have many simple functions that need to be executed sequentially  
and have identical stack frames. To avoid the overhead of setting up and  
tearing down the stack frames I want to jmp from the body of one  
function to the body of the next. A simplified example...


extern(C) byte jumpHere;

byte* jumpTo = &jumpHere;

void f1()
{
    asm
    {
        //jmp dword ptr jumpTo;
        mov EAX, jumpTo;
        jmp EAX;
        //jmp [EAX]
    }
}

void f2()
{
    asm { jumpHere:; }
}

No matter what I try I get a segfault. My assembly skills are very  
limited. I'm not using the naked keyword yet, because I want to get a  
proof-of-concept working first. Anyone see anything wrong with this? Any  
suggestions?


Just disassemble the resulting machine code and look at what's going on.

--
Best regards,
 Vladimir  mailto:thecybersha...@gmail.com


Re: static arrays becoming value types

2009-10-20 Thread Robert Jacques
On Tue, 20 Oct 2009 15:19:15 -0400, language_fan   
wrote:



Tue, 20 Oct 2009 12:39:47 -0300, Leandro Lucarella thusly wrote:


language_fan, el 20 de octubre a las 13:52 me escribiste:

Tue, 20 Oct 2009 10:34:35 -0300, Leandro Lucarella thusly wrote:

> dsimcha, el 20 de octubre a las 02:44 me escribiste:
>> == Quote from Walter Bright (newshou...@digitalmars.com)'s article
>> > Currently, static arrays are (as in C) half-value types and
>> > half-reference types. This tends to cause a series of weird
>> > problems and special cases in the language semantics, such as
>> > functions not being able to return static arrays, and out
>> > parameters not being possible to be static arrays.
>> > Andrei and I agonized over this for some time, and eventually came
>> > to the conclusion that static arrays should become value types.
>> > I.e.,
>> >T[3]
>> > should behave much as if it were:
>> >struct ??
>> >{
>> >   T[3];
>> >}
>> > Then it can be returned from a function. In particular,
>> >void foo(T[3] a)
>> > is currently done (as in C) by passing a pointer to the array, and
>> > then with a bit of compiler magic 'a' is rewritten as (*a)[3].
>> > Making this change would mean that the entire array would be
>> > pushed onto the parameter stack, i.e. a copy of the array, rather
>> > than a reference to it. Making this change would clean up the
>> > internal behavior of types. They'll be more orthogonal and
>> > consistent, and templates will work better. The previous behavior
>> > for function parameters can be retained by making it a ref
>> > parameter:
>> > void foo(ref T[3] a)
>>
>> Vote++.  It's funny, I use static arrays so little that I never
>> realized that they weren't passed by value to functions.  I'd
>> absolutely love to be able to just return static arrays from
>> functions, and often use structs to do that now, but using structs
>> feels like a really ugly hack.
>
> It would be the poor man's tuple for returning (homogeneous) stuff =P

It depends on how you define things. Traditionally tuples are seen as a
generalization of pairs (2 elements -> n elements). Records, on the
other


In what tradition? C++ maybe. I never saw a pair type outside C++, but
saw tuples everywhere (even in other structured languages like SQL).


Pairs are pretty common actually. You might have applications that have
mappings, functions, or zip (list operation) etc. I admit these are more
common in functional languages but the main reason for this is that most
mainstream languages do not support the Pair or Tuple types in any way.
Even D has broken support (from this point of view).


One nasty thing about D's structs is that they don't have structural
equivalence relation unlike tuples. So you need to use the same
container struct type to get the same semantics. To achieve that you
would need some kind of STuple on standard library level or other kinds
of hacks.

What I find unfortunate in D is that your abstractions come in two
sizes - either you use the modest tiny construct that does not scale
elegantly or the enormous hammer to crush things down theatrically.


I don't understand very well what you are saying anyway...


Because of the unnecessary nominal typing in D's tuple emulation,
redefinitions of Tuples do not have implicit equivalence relation:

  struct Tuple(T...) {
      T t;
  }
  struct Tuple2(T...) {
      T t;
  }

  void main() {
      Tuple!(int,int) a;
      Tuple!(int,int) b;
      Tuple2!(int,int) c;

      assert(a == b); // ok
      assert(a != c); // Error: incompatible types for ((a) != (b))
  }

In some other language:

  val a = (1,2) : [Int,Int]
  val b = (1,2) : [Int,Int]
  val c = (2,3) : [Int,Int]

  assert(a == b); // ok
  assert(a != c); // ok

Did you get it now?

Real tuple types do not have a special type tag which gets injected
implicitly with structs. So every time you try to do something
lightweight by emulating tuples, you need to refer to the global Tuple
type or bang your head to the wall.


Or use a templated opAssign mixin to allow two disparate types to be  
assigned to each other.
Besides, I think you're comparing apples to oranges. In the SOL example,  
you use the same declaration for all types. Shouldn't the SOL example be:


  val a = (1,2) : [Int,Int]
  val b = (1,2) : [Int,Int]
  val c = (2,3) : MyCustomTupleType[Int,Int]

which would probably generate:
 assert(a == b); // ok
 assert(a != c); // Error: incompatible types for ((a) != (c))


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Pelle Månsson

Andrei Alexandrescu wrote:

Max Samukha wrote:

On Tue, 20 Oct 2009 18:12:39 +0800, Lionello Lunesu wrote:


On 20-10-2009 6:38, Andrei Alexandrescu wrote:

I hereby suggest we get rid of new for class object creation. What do
you guys think?

I don't agree with this one.

There's extra cost involved, and the added keyword makes that clear. 
Also, somebody mentioned using 'new' to allocate structs on the heap; 
I've never actually done that, but it sounds like using 'new' would 
be the perfect way to do just that.


L.


I don't think the extra cost should be emphasized with 'new' every
time you instantiate a class. For example, in C#, they use 'new' for
creating structs on stack (apparently to make them consistent with
classes, in a silly way).

I think the rarer cases when a class instance is allocated in-place (a
struct on heap) can be handled by the library.

BTW, why "in-situ" is better in this context than the more common
"in-place"? Would be nice to know.


The term originated with this:

class A {
InSitu!B b;
...
}

meaning that B is embedded inside A. But I guess InPlace is just as good.


Andrei


I actually do not understand what InSitu is supposed to mean.

I like the name Scope, but InPlace works for me.


Re: The demise of T[new]

2009-10-20 Thread Steven Schveighoffer

On Tue, 20 Oct 2009 14:43:16 -0400, Bill Baxter  wrote:


On Tue, Oct 20, 2009 at 11:30 AM, Andrei Alexandrescu wrote:

Steven Schveighoffer wrote:


If your goal is to affect the original array, then you should accept a  
ref

argument or not append to it.


I think that's an entirely reasonable (and easy to explain) stance.


I've definitely spent time tracking down exactly such bugs, where I
meant to make the argument a ref but didn't.  If the above is to be
the official stance, then I think it should be enforced by the
compiler.  Appending to non-ref slice args should be an error.


Except when you are passing ownership of an array.  Basically, there are  
four modes:


T[] x : a) you are passing ownership to the function, and the array might  
not be deterministically usable after the function returns, e.g. T[]  
padAndCapitalize(T[]).  Usually such functions' return values are  
deterministic, and the focus of what you care about.


  -or-
b) you are lending ownership to the function, only the array  
elements will be altered, e.g. replace().


ref T[] x : You are lending ownership of the array to the function, but  
you get ownership back, e.g. push().  Fully deterministic altering.


const(T)[] x : You retain ownership of the array, the function cannot  
alter it, e.g. toUpper().
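A minimal sketch of the modes above as D signatures (the function names and bodies here are hypothetical, just to make each mode concrete):

```d
// a) ownership passed in; the returned array is what you care about:
int[] padded(int[] a) { return a ~ 0; }

// b) ownership lent; only the elements are altered in place:
void negate(int[] a) { foreach (ref x; a) x = -x; }

// ref: ownership lent and handed back; the length may change:
void push(ref int[] a, int v) { a ~= v; }

// const: the caller retains ownership; the function cannot alter it:
int sum(const(int)[] a) { int s; foreach (x; a) s += x; return s; }
```

Note that a) and b) share the exact same signature, which is why they can only be told apart by documentation.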


So it's not possible to flag on the signature between the first a) and b)  
modes, we must currently rely on documentation.  But the "append and  
modify" routines are pretty rare.


Essentially, what you want is a type where the length is const, but the  
data is mutable.  I think creating such a type should be possible in the  
library.


But fixing the append problem alone is a huge step forward, since all of  
these cases could result in corruption of totally unrelated data if the  
arrays were appended to.  Especially the const version is bad.


-Steve


Re: Revamped concurrency API (Don can you contact Bartosz ?)

2009-10-20 Thread Nick B

Don wrote:




Don, are you able to contact Bartosz, re the details of this test case.

Nick B


Bartosz has sent it to me. I can reproduce the error. It's my top 
priority, but it'll take a while -- it's nasty.


Don - thanks for filing this. I did try to contact you, via bugzilla 
email (which bounced back) and via Skype (no reply), without success.


Nick B



Re: static arrays becoming value types

2009-10-20 Thread language_fan
Tue, 20 Oct 2009 12:39:47 -0300, Leandro Lucarella thusly wrote:

> language_fan, el 20 de octubre a las 13:52 me escribiste:
>> Tue, 20 Oct 2009 10:34:35 -0300, Leandro Lucarella thusly wrote:
>> 
>> > dsimcha, el 20 de octubre a las 02:44 me escribiste:
>> >> == Quote from Walter Bright (newshou...@digitalmars.com)'s article
>> >> > Currently, static arrays are (as in C) half-value types and
>> >> > half-reference types. This tends to cause a series of weird
>> >> > problems and special cases in the language semantics, such as
>> >> > functions not being able to return static arrays, and out
>> >> > parameters not being possible to be static arrays.
>> >> > Andrei and I agonized over this for some time, and eventually came
>> >> > to the conclusion that static arrays should become value types.
>> >> > I.e.,
>> >> >T[3]
>> >> > should behave much as if it were:
>> >> >struct ??
>> >> >{
>> >> >   T[3];
>> >> >}
>> >> > Then it can be returned from a function. In particular,
>> >> >void foo(T[3] a)
>> >> > is currently done (as in C) by passing a pointer to the array, and
>> >> > then with a bit of compiler magic 'a' is rewritten as (*a)[3].
>> >> > Making this change would mean that the entire array would be
>> >> > pushed onto the parameter stack, i.e. a copy of the array, rather
>> >> > than a reference to it. Making this change would clean up the
>> >> > internal behavior of types. They'll be more orthogonal and
>> >> > consistent, and templates will work better. The previous behavior
>> >> > for function parameters can be retained by making it a ref
>> >> > parameter:
>> >> > void foo(ref T[3] a)
>> >> 
>> >> Vote++.  It's funny, I use static arrays so little that I never
>> >> realized that they weren't passed by value to functions.  I'd
>> >> absolutely love to be able to just return static arrays from
>> >> functions, and often use structs to do that now, but using structs
>> >> feels like a really ugly hack.
>> > 
>> > It would be the poor man's tuple for returning (homogeneous) stuff =P
>> 
>> It depends on how you define things. Traditionally tuples are seen as a
>> generalization of pairs (2 elements -> n elements). Records, on the
>> other
> 
> In what tradition? C++ maybe. I never saw a pair type outside C++, but
> saw tuples everywhere (even in other structured languages like SQL).

Pairs are pretty common actually. You might have applications that have 
mappings, functions, or zip (list operation) etc. I admit these are more 
common in functional languages but the main reason for this is that most 
mainstream languages do not support the Pair or Tuple types in any way. 
Even D has broken support (from this point of view).

>> One nasty thing about D's structs is that they don't have structural
>> equivalence relation unlike tuples. So you need to use the same
>> container struct type to get the same semantics. To achieve that you
>> would need some kind of STuple on standard library level or other kinds
>> of hacks.
>> 
>> What I find unfortunate in D is that your abstractions come in two
>> sizes - either you use the modest tiny construct that does not scale
>> elegantly or the enormous hammer to crush things down theatrically.
> 
> I don't understand very well what you are saying anyways...

Because of the unnecessary nominal typing in D's tuple emulation, 
redefinitions of Tuples do not have implicit equivalence relation:

  struct Tuple(T...) {
T t;
  }
  struct Tuple2(T...) {
T t;
  }

  void main() {
Tuple!(int,int) a;
Tuple!(int,int) b;
Tuple2!(int,int) c;

assert(a == b); // ok
assert(a != c); // Error: incompatible types for ((a) != (c))
  }

In some other language:

  val a = (1,2) : [Int,Int]
  val b = (1,2) : [Int,Int]
  val c = (2,3) : [Int,Int]

  assert(a == b); // ok
  assert(a != c); // ok

Did you get it now?

Real tuple types do not have a special type tag which gets injected 
implicitly with structs. So every time you try to do something 
lightweight by emulating tuples, you need to refer to the global Tuple 
type or bang your head to the wall.
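For what it's worth, the nominal barrier can be partially worked around in today's D with a templated opEquals that looks only at the other side's fields. A sketch, not a recommendation -- it assumes the compiler in use accepts templated operator overloads, and it follows the 't' field name from the emulation above:

```d
struct Tuple(T...) {
    T t;
    // Structural comparison: accept any struct exposing a 't' member
    // with the same number of fields, and compare field by field.
    bool opEquals(U)(U other) const
        if (is(typeof(U.init.t)) && typeof(U.init.t).length == T.length)
    {
        foreach (i, _; t)
            if (t[i] != other.t[i]) return false;
        return true;
    }
}
struct Tuple2(T...) { T t; }

void main() {
    Tuple!(int, int) a, b;
    Tuple2!(int, int) c;
    assert(a == b); // same template: ok, as before
    assert(a == c); // different template, same structure: now also ok
}
```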


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Chad J
Andrei Alexandrescu wrote:
> Leandro Lucarella wrote:
>> Andrei Alexandrescu, el 19 de octubre a las 22:16 me escribiste:
>>> No problem. You will be able to use InSitu!T. It is much better to
>>> confine unsafe features to libraries instead of putting them in the
>>> language.
>>>
>>> {
>>> auto foo = InSitu!(Foo)(args);
>>> // use foo
>>> ...
>>> // foo goes away
>>> }
>>
>> 
>>
>> Why not Scoped!T ? I think the purpose for this is that the lifetime of the
>> object is bounded to the scope, right? I think it is harder to figure that
>> out from InSitu!T than from Scoped!T.
>>
>> 
> 
> It's not a useless discussions, names are important. Scoped is more
> evocative for in-function definition, whereas InPlace/InSitu are (at
> least to me) more evocative when inside a class.
> 
> class A {
>InPlace!B member;
> }
> 
> 
> Andrei

InPlace actually sounds good.  InSitu, while appropriate, will just
sound vaguely snooty after the user looks it up in the dictionary (IMO).

InPlace might seem odd in functions though.

void foo(...)
{
InPlace!B variable;
...
}

In conclusion, I couldn't give a damn.  ;)


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Rainer Deyke
Andrei Alexandrescu wrote:
> Lionello Lunesu wrote:
>> Also, somebody mentioned using 'new' to allocate structs on the heap;
>> I've never actually done that, but it sounds like using 'new' would be
>> the perfect way to do just that.
> 
> Yah, I guess I'll drop it.

Consistency with structs demands that for a class type 'X', 'new X'
allocates a *reference*, not an instance, on the heap.


-- 
Rainer Deyke - rain...@eldwood.com


Re: The demise of T[new]

2009-10-20 Thread Bill Baxter
On Tue, Oct 20, 2009 at 11:30 AM, Andrei Alexandrescu wrote:
> Steven Schveighoffer wrote:
>>
>> If your goal is to affect the original array, then you should accept a ref
>> argument or not append to it.
>
> I think that's an entirely reasonable (and easy to explain) stance.

I've definitely spent time tracking down exactly such bugs, where I
meant to make the argument a ref but didn't.  If the above is to be
the official stance, then I think it should be enforced by the
compiler.  Appending to non-ref slice args should be an error.

--bb


Re: The demise of T[new]

2009-10-20 Thread Andrei Alexandrescu

Steven Schveighoffer wrote:
If your goal is to affect the 
original array, then you should accept a ref argument or not append to it.


I think that's an entirely reasonable (and easy to explain) stance.

Andrei


Re: Communicating between in and out contracts

2009-10-20 Thread Steven Schveighoffer
On Tue, 20 Oct 2009 13:13:07 -0400, Michel Fortin wrote:


So what we need is semi-pure functions that can see all the globals as  
const data, or in other terms having no side effect but which can be  
affected by their environment. Another function qualifier, isn't it  
great! :-)


Yeah, I meant which functions to allow among the functions types we  
already have.  To introduce another function type *just to allow contracts  
to call them* is insanity.


-Steve


Re: The demise of T[new]

2009-10-20 Thread grauzone

Andrei Alexandrescu wrote:

grauzone wrote:

Steven Schveighoffer wrote:
I still think having an Appender object or struct is a worthwhile 
thing, the "pre-allocate array then set length to zero" model is a 
hack at best.


Would that work with Andrei's append cache at all? Setting the length 
to zero and then appending is like taking a slice of length 0 and then 
appending.


Maybe introduce a write/readable .capacity property, that magically 
accesses the cache/GC?


For my money, I'd get rid of that trick:

a.length = 1000;
a.length = 0;
for (...) a ~= x;


Yes, that's what we currently use for setting the capacity of an array. 
And it looks stupid and non-intuitive; someone new to D might think it's 
a no-op.


Better way please?



Andrei


Re: The demise of T[new]

2009-10-20 Thread Steven Schveighoffer

On Tue, 20 Oct 2009 12:30:57 -0400, Bill Baxter  wrote:


On Tue, Oct 20, 2009 at 8:50 AM, Steven Schveighoffer wrote:
On Tue, 20 Oct 2009 11:10:20 -0400, Bill Baxter wrote:



On Tue, Oct 20, 2009 at 6:25 AM, Steven Schveighoffer wrote:


On Sun, 18 Oct 2009 17:05:39 -0400, Walter Bright wrote:

The purpose of T[new] was to solve the problems T[] had with passing  
T[]
to a function and then the function resizes the T[]. What happens  
with

the
original?

The solution we came up with was to create a third array type,  
T[new],

which was a reference type.

Andrei had the idea that T[new] could be dispensed with by making a
"builder" library type to handle creating arrays by doing things like
appending, and then delivering a finished T[] type. This is similar  
to

what
std.outbuffer and std.array.Appender do, they just need a bit of
refining.

The .length property of T[] would then become an rvalue only, not an
lvalue, and ~= would no longer be allowed for T[].

We both feel that this would simplify D, make it more flexible, and
remove
some awkward corner cases like the inability to say a.length++.

What do you think?


At the risk of sounding like bearophile -- I've proposed 2 solutions  
in

the
past for this that *don't* involve creating a T[new] type.

1. Store the allocated length in the GC structure, then only allow
appending
when the length of the array being appended matches the allocated  
length.


2. Store the allocated length at the beginning of the array, and use a
bit
in the array length to determine if it starts at the beginning of the
block.

The first solution has space concerns, and the second has lots more
concerns, but can help in the case of having to do a GC lookup to
determine
if a slice can be appended (you'd still have to lock the GC to do an
actual
append or realloc).  I prefer the first solution over the second.

I like the current behavior *except* for appending.  Most of the time  
it

does what you want, and the syntax is beautiful.

In regards to disallowing x ~= y, I'd propose you at least make it
equivalent to x = x ~ y instead of removing it.


If you're going to do ~= a lot then you should convert to the dynamic
array type.
If you're not going to do ~= a lot, then you can afford to write out x  
= x

~ y.

The bottom line is that it just doesn't make sense to append onto a
"view" type.  It's really a kind of constness.  Having a view says the
underlying memory locations you are looking at are fixed.  It doesn't
make sense to imply there's an operation that can change those memory
locations (other than shrinking the window to view fewer of them).


Having the append operation extend into already allocated memory is an
optimization.  In this case, it's an optimization that can corrupt  
memory.


If we can make append extend into already allocated memory *and* not  
cause
corruption, I don't see the downside.  And then there is one less array  
type

to deal with (, create functions that handle, etc.).

Besides, I think Andrei's LRU solution is better than mine (and pretty  
much

in line with it).

I still think having an Appender object or struct is a worthwhile  
thing, the

"pre-allocate array then set length to zero" model is a hack at best.


But you still have the problem Andrei posted.  Code like this:

void func(int[] x)
{
 x ~= 3;
 x[0] = 42;
}


depending on what you want, you then rewrite:


void func(int[] x)
{
 x[0] = 42;
 x ~= 3;
}


or



void func(int[] x)
{
 x = x ~ 3;
 x[0] = 42;
}


Generally when you are appending, you are not also changing the original  
data, so you don't care whether it's an optimization or not.


Your code is obviously broken anyways, since *nobody* ever sees the 3.


it'll compile and maybe run just fine, but there's no way to know if
the caller will see the 42 or not.   Unpredictable behavior like that
is a breeding ground for subtle bugs.


I'm sure we could spend days coming up with code that introduces subtle  
bugs.  You can't fix all bugs that people may write.  I don't think your  
scenario is very likely.


More importantly, the problem with the current appending behavior is this:

void foo(int[] x)
{
  x ~= 3;
  ...
}

That may have just corrupted some data that you don't own, so defensively,  
you should write:


void foo(int[] x)
{
  x = x ~ 3;
  ...
}

But with Andrei's solution, you cannot possibly corrupt data with this  
line.  Now, if you then go and set one of the values in the original array  
(like you did), then you may or may not change the original array.  But as  
the function takes a mutable array, *you own the array* so it is a mistake  
to think when you pass in an array that's not const, you should expect it  
to remain unchanged.  If your goal is to affect the original array, then  
you should accept a ref argument or not append to it.


-Steve


Re: The demise of T[new]

2009-10-20 Thread Bill Baxter
On Tue, Oct 20, 2009 at 10:05 AM, Andrei Alexandrescu wrote:
> Bill Baxter wrote:
>>
>> To Andrei, do you really feel comfortable trying to explain this in
>> your book?  It seems like it will be difficult to explain that ~= is
>> sometimes efficient for appending but not necessarily if you're
>> working with a lot of arrays because it actually keeps this cache
>> under the hood that may or may not remember the actual underlying
>> capacity of the array you're appending to, so you should probably use
>> ArrayBuilder if you can, despite the optimization.
>
> I guess I'll try and let you all know.

I can also see this becoming an Effective D tip --
"""
#23  Use ArrayBuilder for appending

For common cases appending to slices is fast.  However the performance
depends on a hidden LRU cache to remember the capacities of the most
recent N arrays.  This works fine until you hit that N limit.
Unfortunately as you compose code together it is easy to overflow that
cache without realizing it, leading to sudden performance drops for no
apparent reason.   Thus we suggest you always use ArrayBuilder when
appending to arrays rather than slices.
"""

Or not.  This is one of those places where some data is really needed.
 It may be that 99.9% of code is only actively appending to 4 arrays
or fewer.  It just seems too tricky that this innocent-looking code:

 int[] i;
 foreach(k; 1..20_000) {
 i ~= some_function(k);
 }

could hit a performance cliff based on how many arrays get used deep
in the call chain of some_function().   Granted, cache issues can
cause these kinds of cliffs for any kind of code, but I suspect this
cliff would be particularly noticeable, given the slowness of
allocations.
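For concreteness, here is the loop above rewritten with std.array.Appender, the closest existing analogue of the proposed ArrayBuilder (a sketch; some_function is the hypothetical callee from the snippet, and the exact Appender API may differ by Phobos version):

```d
import std.array : appender;

int some_function(int k) { return k * 2; } // hypothetical stand-in

void main() {
    auto b = appender!(int[])();
    foreach (k; 1 .. 20_000)
        b.put(some_function(k)); // tracks its own capacity: no LRU cache
    int[] i = b.data;            // one finished slice (b[] in newer Phobos)
}
```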

--bb


Re: The demise of T[new]

2009-10-20 Thread Denis Koroskin
On Tue, 20 Oct 2009 21:01:17 +0400, Andrei Alexandrescu wrote:



grauzone wrote:

Steven Schveighoffer wrote:
I still think having an Appender object or struct is a worthwhile  
thing, the "pre-allocate array then set length to zero" model is a  
hack at best.
 Would that work with Andrei's append cache at all? Setting the length  
to zero and then appending is like taking a slice of length 0 and then  
appending.
 Maybe introduce a write/readable .capacity property, that magically  
accesses the cache/GC?


For my money, I'd get rid of that trick:

a.length = 1000;
a.length = 0;
for (...) a ~= x;


Andrei


I agree it's ugly but that's the best we have in D, and it looks like  
things are getting even worse...


Re: stack frame optimization problem

2009-10-20 Thread bearophile
sprucely:

>This works with g++ and inline ATT assembly, but I have had no such luck in D.<

What compiler are you using? I think LDC isn't yet able to do this (it's an 
LLVM limit that may get lifted in the future).

Bye,
bearophile


Re: Communicating between in and out contracts

2009-10-20 Thread Michel Fortin
On 2009-10-20 12:04:20 -0400, "Steven Schveighoffer" said:


On Tue, 20 Oct 2009 11:57:05 -0400, Michel Fortin wrote:


On 2009-10-20 11:44:00 -0400, "Steven Schveighoffer" said:


On Tue, 20 Oct 2009 08:36:14 -0400, Michel Fortin wrote:


On 2009-10-20 08:16:01 -0400, "Steven Schveighoffer" said:


Incidentally, shouldn't all access to the object in the in contract be
const by default anyways?

Hum, access to everything (including global variables, arguments), not
just the object, should be const in a contract. That might be harder to
implement though.

Yeah, you are probably right.  Of course, a const function can still
alter global state, but if you strictly disallowed altering global
state, we are left with only pure functions (and I think that's a
little harsh).


Not exactly. Pure functions can't even read global state (so their  
result can't depend on anything but their arguments), but it makes  
perfect sense to read global state in a contract. What you really need  
is to have a const view of the global state. And this could apply to 
all  asserts too.


Yes, but what I'm talking about is "what functions can you call while 
in a  contract."  Access to data should be const as you say.  But if 
you follow  that logic to the most strict interpretation, the only 
"safe" functions to  allow are pure functions.


i.e.:

int x;

class C
{
   void foo()
   in
   {
 x = 5; // I agree this should be an error
 bar(); // ok?
   }
   {}

   void bar() const
   {
 x = 5;
   }
}


When you try to write to x yes it's an error. But if you were reading x 
it should not be an error. Basically inside the contract a global like 
x should be seen as const(int) just like the object should be seen as 
const(C).


Pure functions are somewhat alike, but are more restrictive since you 
can only access immutable globals. So what we need is semi-pure 
functions that can see all the globals as const data, or in other terms 
having no side effect but which can be affected by their environment. 
Another function qualifier, isn't it great! :-)



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: The demise of T[new]

2009-10-20 Thread Andrei Alexandrescu

Bill Baxter wrote:

To Andrei, do you really feel comfortable trying to explain this in
your book?  It seems like it will be difficult to explain that ~= is
sometimes efficient for appending but not necessarily if you're
working with a lot of arrays because it actually keeps this cache
under the hood that may or may not remember the actual underlying
capacity of the array you're appending to, so you should probably use
ArrayBuilder if you can, despite the optimization.


I guess I'll try and let you all know.

Andrei


Re: The demise of T[new]

2009-10-20 Thread Andrei Alexandrescu

grauzone wrote:

Steven Schveighoffer wrote:
I still think having an Appender object or struct is a worthwhile 
thing, the "pre-allocate array then set length to zero" model is a 
hack at best.


Would that work with Andrei's append cache at all? Setting the length to 
zero and then appending is like taking a slice of length 0 and then 
appending.


Maybe introduce a write/readable .capacity property, that magically 
accesses the cache/GC?


For my money, I'd get rid of that trick:

a.length = 1000;
a.length = 0;
for (...) a ~= x;


Andrei


Re: static arrays becoming value types

2009-10-20 Thread dsimcha
== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
> On Mon, 19 Oct 2009 21:50:46 -0400, Walter Bright wrote:
> > Currently, static arrays are (as in C) half-value types and
> > half-reference types. This tends to cause a series of weird problems and
> > special cases in the language semantics, such as functions not being
> > able to return static arrays, and out parameters not being possible to
> > be static arrays.
> >
> > Andrei and I agonized over this for some time, and eventually came to
> > the conclusion that static arrays should become value types. I.e.,
> >
> >T[3]
> >
> > should behave much as if it were:
> >
> >struct ??
> >{
> >   T[3];
> >}
> >
> > Then it can be returned from a function. In particular,
> >
> >void foo(T[3] a)
> >
> > is currently done (as in C) by passing a pointer to the array, and then
> > with a bit of compiler magic 'a' is rewritten as (*a)[3]. Making this
> > change would mean that the entire array would be pushed onto the
> > parameter stack, i.e. a copy of the array, rather than a reference to it.
> >
> > Making this change would clean up the internal behavior of types.
> > They'll be more orthogonal and consistent, and templates will work
> > better.
> >
> > The previous behavior for function parameters can be retained by making
> > it a ref parameter:
> >
> > void foo(ref T[3] a)
> What happens for IFTI?
> void foo(T)(T t)
> {
> return t[2];
> }
> void main()
> {
> int[3] x;
> x[] = 5;
> printf(foo(x));
> }
> I would think T would resolve to int[3], which means pass by value.  You'd
> need a specialization for static arrays to get the current behavior.
> Don't get me wrong, I would love to see static arrays become real types,
> but I wonder if there are any ways we can "optimize out" the staticness of
> an array argument for templates.  In particular, I hate how IFTI likes to
> assume static array for literals...
> In the absence of such an optimization, I'd still prefer static arrays
> become value types like you say.
> -Steve

To me, static arrays are an optimization that you don't use unless you really
need it.  Dynamic arrays should be most programmers' "default" array type.  If
you insist on using static arrays, then the onus should be on you to make sure
nothing like this happens by doing something like:

print(foo(x[]));  // Slice operator converts x into an int[], passed
                  // the way dynamic arrays are.

The "what type are literals" question, though, is a legit issue.
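Steven's IFTI example above, made concrete with the slice workaround (a sketch; the return type is left to inference):

```d
import std.stdio : writeln;

// As in the earlier template example: T is deduced from the argument.
auto foo(T)(T t) { return t[2]; }

void main() {
    int[3] x;
    x[] = 5;
    // With x[], T is deduced as int[] -- a slice of x, no copy semantics.
    writeln(foo(x[])); // prints 5
}
```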


Re: The demise of T[new]

2009-10-20 Thread Bill Baxter
On Tue, Oct 20, 2009 at 8:50 AM, Steven Schveighoffer wrote:
> On Tue, 20 Oct 2009 11:10:20 -0400, Bill Baxter  wrote:
>
>> On Tue, Oct 20, 2009 at 6:25 AM, Steven Schveighoffer wrote:
>>>
>>> On Sun, 18 Oct 2009 17:05:39 -0400, Walter Bright wrote:
>>>
 The purpose of T[new] was to solve the problems T[] had with passing T[]
 to a function and then the function resizes the T[]. What happens with
 the
 original?

 The solution we came up with was to create a third array type, T[new],
 which was a reference type.

 Andrei had the idea that T[new] could be dispensed with by making a
 "builder" library type to handle creating arrays by doing things like
 appending, and then delivering a finished T[] type. This is similar to
 what
 std.outbuffer and std.array.Appender do, they just need a bit of
 refining.

 The .length property of T[] would then become an rvalue only, not an
 lvalue, and ~= would no longer be allowed for T[].

 We both feel that this would simplify D, make it more flexible, and
 remove
 some awkward corner cases like the inability to say a.length++.

 What do you think?
>>>
>>> At the risk of sounding like bearophile -- I've proposed 2 solutions in
>>> the
>>> past for this that *don't* involve creating a T[new] type.
>>>
>>> 1. Store the allocated length in the GC structure, then only allow
>>> appending
>>> when the length of the array being appended matches the allocated length.
>>>
>>> 2. Store the allocated length at the beginning of the array, and use a
>>> bit
>>> in the array length to determine if it starts at the beginning of the
>>> block.
>>>
>>> The first solution has space concerns, and the second has lots more
>>> concerns, but can help in the case of having to do a GC lookup to
>>> determine
>>> if a slice can be appended (you'd still have to lock the GC to do an
>>> actual
>>> append or realloc).  I prefer the first solution over the second.
>>>
>>> I like the current behavior *except* for appending.  Most of the time it
>>> does what you want, and the syntax is beautiful.
>>>
>>> In regards to disallowing x ~= y, I'd propose you at least make it
>>> equivalent to x = x ~ y instead of removing it.
>>
>> If you're going to do ~= a lot then you should convert to the dynamic
>> array type.
>> If you're not going to do ~= a lot, then you can afford to write out x = x
>> ~ y.
>>
>> The bottom line is that it just doesn't make sense to append onto a
>> "view" type.  It's really a kind of constness.  Having a view says the
>> underlying memory locations you are looking at are fixed.  It doesn't
>> make sense to imply there's an operation that can change those memory
>> locations (other than shrinking the window to view fewer of them).
>
> Having the append operation extend into already allocated memory is an
> optimization.  In this case, it's an optimization that can corrupt memory.
>
> If we can make append extend into already allocated memory *and* not cause
> corruption, I don't see the downside.  And then there is one less array type
> to deal with (, create functions that handle, etc.).
>
> Besides, I think Andrei's LRU solution is better than mine (and pretty much
> in line with it).
>
> I still think having an Appender object or struct is a worthwhile thing, the
> "pre-allocate array then set length to zero" model is a hack at best.

But you still have the problem Andrei posted.  Code like this:

void func(int[] x)
{
 x ~= 3;
 x[0] = 42;
}

it'll compile and maybe run just fine, but there's no way to know if
the caller will see the 42 or not.   Unpredictable behavior like that
is a breeding ground for subtle bugs.

Perhaps that potential for bugs can be reduced by turning off the LRU
stuff in debug builds, and just making ~= reallocate always there.
Since, as you said, it's an optimization, makes sense to only turn it
on in release or maybe optimized builds.

To Andrei, do you really feel comfortable trying to explain this in
your book?  It seems like it will be difficult to explain that ~= is
sometimes efficient for appending but not necessarily if you're
working with a lot of arrays because it actually keeps this cache
under the hood that may or may not remember the actual underlying
capacity of the array you're appending to, so you should probably use
ArrayBuilder if you can, despite the optimization.

--bb


Re: The demise of T[new]

2009-10-20 Thread grauzone

Steven Schveighoffer wrote:
I still think having an Appender object or struct is a worthwhile thing, 
the "pre-allocate array then set length to zero" model is a hack at best.


Would that work with Andrei's append cache at all? Setting the length to 
zero and then appending is like taking a slice of length 0 and then 
appending.


Maybe introduce a write/readable .capacity property, that magically 
accesses the cache/GC?



-Steve


Re: LRU cache for ~=

2009-10-20 Thread Robert Jacques
On Tue, 20 Oct 2009 11:24:21 -0400, Steven Schveighoffer wrote:


On Tue, 20 Oct 2009 10:48:31 -0400, Robert Jacques wrote:


On Tue, 20 Oct 2009 10:05:42 -0400, Steven Schveighoffer wrote:


I'd think you only want to clear the entries affected by the  
collection.




If it was free and simple to only clear the affected entries, sure. But  
doing so requires (very heavy?) modification of the GC in order to  
track and check changes.


Why?  All you have to do is check whether a block is referenced in the  
LRU while freeing the block.  I don't even think it would be that  
performance critical.  Using my vastly novice assumptions about how the  
GC collection cycle works:


step 1, mark all blocks that are not referenced by any roots.
step 2, check which blocks are referenced by the LRU, if they are, then  
remove them from the LRU.

step 3, recycle free blocks.


I agree, but my mind hadn't gotten there yet. (It was thinking of the  
overhead of generational/concurrent collections, for some strange reason)



But this requires the LRU to be part of the GC.


I think we're already in that boat.  If the LRU isn't attached to the  
GC, then ~= becomes a locking operation even if the GC is thread-local,  
which makes no sense.


-Steve


Of course, Andrei just stated the cache should be thread-local (and  
probably in the function, not the GC) which throws a spanner into the  
works.


Re: Communicating between in and out contracts

2009-10-20 Thread Steven Schveighoffer
On Tue, 20 Oct 2009 11:57:05 -0400, Michel Fortin  
 wrote:


On 2009-10-20 11:44:00 -0400, "Steven Schveighoffer"  
 said:


On Tue, 20 Oct 2009 08:36:14 -0400, Michel Fortin   
 wrote:


On 2009-10-20 08:16:01 -0400, "Steven Schveighoffer"   
 said:


Incidentally, shouldn't all access to the object in the in contract be
const by default anyways?

Hum, access to everything (including global variables, arguments), not
just the object, should be const in a contract. That might be harder to
implement though.

Yeah, you are probably right.  Of course, a const function can still
alter global state, but if you strictly disallowed altering global
state, we are left with only pure functions (and I think that's a
little harsh).


Not exactly. Pure functions can't even read global state (so their  
result can't depend on anything but their arguments), but it makes  
perfect sense to read global state in a contract. What you really need  
is to have a const view of the global state. And this could apply to all  
asserts too.


Yes, but what I'm talking about is "what functions can you call while in a  
contract."  Access to data should be const as you say.  But if you follow  
that logic to the most strict interpretation, the only "safe" functions to  
allow are pure functions.


i.e.:

int x;

class C
{
  void foo()
  in
  {
x = 5; // I agree this should be an error
bar(); // ok?
  }
  {}

  void bar() const
  {
    x = 5; // compiles today: const applies to 'this', not to global x
  }
}

-Steve


Re: Communicating between in and out contracts

2009-10-20 Thread Michel Fortin
On 2009-10-20 11:44:00 -0400, "Steven Schveighoffer" 
 said:


On Tue, 20 Oct 2009 08:36:14 -0400, Michel Fortin  
 wrote:


On 2009-10-20 08:16:01 -0400, "Steven Schveighoffer"  
 said:


Incidentally, shouldn't all access to the object in the in contract be  
 const by default anyways?


Hum, access to everything (including global variables, arguments), not  
just the object, should be const in a contract. That might be harder to 
 implement though.


Yeah, you are probably right.  Of course, a const function can still 
alter  global state, but if you strictly disallowed altering global 
state, we are  left with only pure functions (and I think that's a 
little harsh).


Not exactly. Pure functions can't even read global state (so their 
result can't depend on anything but their arguments), but it makes 
perfect sense to read global state in a contract. What you really need 
is to have a const view of the global state. And this could apply to 
all asserts too.



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: static arrays becoming value types

2009-10-20 Thread Steven Schveighoffer
On Mon, 19 Oct 2009 21:50:46 -0400, Walter Bright  
 wrote:


Currently, static arrays are (as in C) half-value types and  
half-reference types. This tends to cause a series of weird problems and  
special cases in the language semantics, such as functions not being  
able to return static arrays, and out parameters not being possible to  
be static arrays.


Andrei and I agonized over this for some time, and eventually came to  
the conclusion that static arrays should become value types. I.e.,


   T[3]

should behave much as if it were:

   struct ??
   {
  T[3];
   }

Then it can be returned from a function. In particular,

   void foo(T[3] a)

is currently done (as in C) by passing a pointer to the array, and then  
with a bit of compiler magic 'a' is rewritten as (*a)[3]. Making this  
change would mean that the entire array would be pushed onto the  
parameter stack, i.e. a copy of the array, rather than a reference to it.


Making this change would clean up the internal behavior of types.  
They'll be more orthogonal and consistent, and templates will work  
better.


The previous behavior for function parameters can be retained by making  
it a ref parameter:


void foo(ref T[3] a)


What happens for IFTI?

import core.stdc.stdio;

auto foo(T)(T t)
{
   return t[2];
}

void main()
{
   int[3] x;
   x[] = 5;
   printf("%d\n", foo(x));
}

I would think T would resolve to int[3], which means pass by value.  You'd  
need a specialization for static arrays to get the current behavior.


Don't get me wrong, I would love to see static arrays become real types,  
but I wonder if there are any ways we can "optimize out" the staticness of  
an array argument for templates.  In particular, I hate how IFTI likes to  
assume static array for literals...


In the absence of such an optimization, I'd still prefer static arrays  
become value types like you say.


-Steve
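For concreteness, the specialization mentioned above might look like this under the proposed value-type rules; this is a hedged sketch (the deduction behavior after the change is a guess, and `sum` is an invented example), not anything from the proposal itself:

```d
// Hypothetical sketch: a static-array specialization that keeps
// reference semantics once T[3] becomes a value type. IFTI deduces
// E = int and n = 3, so no copy of the array is made.
int sum(E, size_t n)(ref E[n] a)
{
    int s = 0;
    foreach (e; a)
        s += e;
    return s;
}

void main()
{
    int[3] x;
    x[] = 5;
    assert(sum(x) == 15);
}
```

Without the `ref E[n]` overload, a plain `sum(T)(T t)` would deduce T as int[3] and push the whole array by value.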


Re: The demise of T[new]

2009-10-20 Thread Steven Schveighoffer

On Tue, 20 Oct 2009 11:10:20 -0400, Bill Baxter  wrote:


On Tue, Oct 20, 2009 at 6:25 AM, Steven Schveighoffer
 wrote:

On Sun, 18 Oct 2009 17:05:39 -0400, Walter Bright
 wrote:

The purpose of T[new] was to solve the problems T[] had with passing T[]
to a function and then the function resizes the T[]. What happens with
the original?

The solution we came up with was to create a third array type, T[new],
which was a reference type.

Andrei had the idea that T[new] could be dispensed with by making a
"builder" library type to handle creating arrays by doing things like
appending, and then delivering a finished T[] type. This is similar to
what std.outbuffer and std.array.Appender do, they just need a bit of
refining.

The .length property of T[] would then become an rvalue only, not an
lvalue, and ~= would no longer be allowed for T[].

We both feel that this would simplify D, make it more flexible, and
remove some awkward corner cases like the inability to say a.length++.

What do you think?

At the risk of sounding like bearophile -- I've proposed 2 solutions in
the past for this that *don't* involve creating a T[new] type.

1. Store the allocated length in the GC structure, then only allow
appending when the length of the array being appended matches the
allocated length.

2. Store the allocated length at the beginning of the array, and use a
bit in the array length to determine if it starts at the beginning of
the block.

The first solution has space concerns, and the second has lots more
concerns, but can help in the case of having to do a GC lookup to
determine if a slice can be appended (you'd still have to lock the GC to
do an actual append or realloc).  I prefer the first solution over the
second.

I like the current behavior *except* for appending.  Most of the time it
does what you want, and the syntax is beautiful.

In regards to disallowing x ~= y, I'd propose you at least make it
equivalent to x = x ~ y instead of removing it.


If you're going to do ~= a lot then you should convert to the dynamic
array type.
If you're not going to do ~= a lot, then you can afford to write out
x = x ~ y.


The bottom line is that it just doesn't make sense to append onto a
"view" type.  It's really a kind of constness.  Having a view says the
underlying memory locations you are looking at are fixed.  It doesn't
make sense to imply there's an operation that can change those memory
locations (other than shrinking the window to view fewer of them).


Having the append operation extend into already allocated memory is an  
optimization.  In this case, it's an optimization that can corrupt memory.


If we can make append extend into already allocated memory *and* not cause  
corruption, I don't see the downside.  And then there is one less array  
type to deal with (create functions that handle it, etc.).


Besides, I think Andrei's LRU solution is better than mine (and pretty  
much in line with it).


I still think having an Appender object or struct is a worthwhile thing,  
the "pre-allocate array then set length to zero" model is a hack at best.


-Steve
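For what it's worth, the first proposal above can be sketched roughly as follows; the names are invented stand-ins (the real bookkeeping would live in the GC's block metadata, not in an associative array), so treat it as an illustration of the check, not an implementation:

```d
// Illustrative sketch of proposal 1: the allocator records each
// block's "used length"; ~= may extend in place only when the
// slice's length matches it, i.e. no other slice's data lies past
// the end of this one.
size_t[void*] usedLength;   // stand-in for per-block GC metadata

bool canAppendInPlace(T)(T[] arr)
{
    auto u = cast(void*) arr.ptr in usedLength;
    return u !is null && *u == arr.length;
}

void main()
{
    auto a = new int[4];
    usedLength[cast(void*) a.ptr] = a.length;
    assert(canAppendInPlace(a));          // whole array: safe to grow
    assert(!canAppendInPlace(a[0 .. 2])); // partial slice: must copy
}
```

The space concern mentioned above is exactly this extra per-block length.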


stack frame optimization problem

2009-10-20 Thread sprucely
This works with g++ and inline AT&T assembly, but I have had no such luck in D. 
I have many simple functions that need to be executed sequentially and have 
identical stack frames. To avoid the overhead of setting up and tearing down 
the stack frames I want to jmp from the body of one function to the body of the 
next. A simplified example...

extern(C) byte jumpHere;

byte* jumpTo = &jumpHere;

void f1()
{
asm
{
//jmp dword ptr jumpTo;
mov EAX, jumpTo;
jmp EAX;
//jmp [EAX]
}
}

void f2()
{
asm{jumpHere:;}
}

No matter what I try I get a segfault. My assembly skills are very limited. I'm 
not using the naked keyword yet, because I want to get a proof-of-concept 
working first. Anyone see anything wrong with this? Any suggestions?


Re: Communicating between in and out contracts

2009-10-20 Thread Steven Schveighoffer
On Tue, 20 Oct 2009 08:36:14 -0400, Michel Fortin  
 wrote:


On 2009-10-20 08:16:01 -0400, "Steven Schveighoffer"  
 said:


Incidentally, shouldn't all access to the object in the in contract be   
const by default anyways?


Hum, access to everything (including global variables, arguments), not  
just the object, should be const in a contract. That might be harder to  
implement though.


Yeah, you are probably right.  Of course, a const function can still alter  
global state, but if you strictly disallowed altering global state, we are  
left with only pure functions (and I think that's a little harsh).


-Steve


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Leandro Lucarella
Andrei Alexandrescu, el 20 de octubre a las 08:42 me escribiste:
> >Why not Scoped!T ? I think the purpose of this is that the lifetime of the
> >object is bounded to the scope, right? I think it is harder to figure that
> >out from InSitu!T than from Scoped!T.
> >
> >
> 
> It's not a useless discussion, names are important.

OK, let's continue then... ;)

> Scoped is more evocative for in-function definition, whereas
> InPlace/InSitu are (at least to me) more evocative when inside a class.
> 
> class A {
>    InPlace!B member;
> }

Yes, but I think Scoped!T is clearer on average. The member effectively
lives in the same scope the class does.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
If I could get a little closer to you
I'd tell you that both my feet tremble when you look at me
If you knew everything it took me to get here
Today you'd know it's hard for me to breathe when you look at me


Re: static arrays becoming value types

2009-10-20 Thread Leandro Lucarella
language_fan, el 20 de octubre a las 13:52 me escribiste:
> Tue, 20 Oct 2009 10:34:35 -0300, Leandro Lucarella thusly wrote:
> 
> > dsimcha, el 20 de octubre a las 02:44 me escribiste:
> >> == Quote from Walter Bright (newshou...@digitalmars.com)'s article
> >> > Currently, static arrays are (as in C) half-value types and
> >> > half-reference types. This tends to cause a series of weird problems
> >> > and special cases in the language semantics, such as functions not
> >> > being able to return static arrays, and out parameters not being
> >> > possible to be static arrays.
> >> > Andrei and I agonized over this for some time, and eventually came to
> >> > the conclusion that static arrays should become value types. I.e.,
> >> >T[3]
> >> > should behave much as if it were:
> >> >struct ??
> >> >{
> >> >   T[3];
> >> >}
> >> > Then it can be returned from a function. In particular,
> >> >void foo(T[3] a)
> >> > is currently done (as in C) by passing a pointer to the array, and
> >> > then with a bit of compiler magic 'a' is rewritten as (*a)[3]. Making
> >> > this change would mean that the entire array would be pushed onto the
> >> > parameter stack, i.e. a copy of the array, rather than a reference to
> >> > it. Making this change would clean up the internal behavior of types.
> >> > They'll be more orthogonal and consistent, and templates will work
> >> > better. The previous behavior for function parameters can be retained
> >> > by making it a ref parameter:
> >> > void foo(ref T[3] a)
> >> 
> >> Vote++.  It's funny, I use static arrays so little that I never
> >> realized that they weren't passed by value to functions.  I'd
> >> absolutely love to be able to just return static arrays from functions,
> >> and often use structs to do that now, but using structs feels like a
> >> really ugly hack.
> > 
> > It would be the poor man's tuple for returning (homogeneous) stuff =P
> 
> It depends on how you define things. Traditionally tuples are seen as a 
> generalization of pairs (2 elements -> n elements). Records, on the other 

In what tradition? C++ maybe. I never saw a pair type outside C++, but saw
tuples everywhere (even in other structured languages like SQL).

> hand, are generalization of tuples (simple number index -> named 
> elements). You need couple of additional layers of generalization to come 
> up with structs (subtyping, member functions, generics etc.)
> 
> One nasty thing about D's structs is that they don't have structural 
> equivalence relation unlike tuples. So you need to use the same container 
> struct type to get the same semantics. To achieve that you would need 
> some kind of STuple on standard library level or other kinds of hacks.
> 
> What I find unfortunate in D is that your abstractions come in two sizes 
> - either you use the modest tiny construct that does not scale elegantly 
> or the enormous hammer to crush things down theatrically.

I don't understand very well what you are saying anyway...

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
Fantasy is as important as wisdom


Re: LRU cache for ~=

2009-10-20 Thread Steven Schveighoffer
On Tue, 20 Oct 2009 10:48:31 -0400, Robert Jacques   
wrote:


On Tue, 20 Oct 2009 10:05:42 -0400, Steven Schveighoffer  
 wrote:



I'd think you only want to clear the entries affected by the collection.



If it was free and simple to only clear the affected entries, sure. But  
doing so requires (very heavy?) modification of the GC in order to track  
and check changes.


Why?  All you have to do is check whether a block is referenced in the LRU  
while freeing the block.  I don't even think it would be that performance  
critical.  Using my vastly novice assumptions about how the GC collection  
cycle works:


step 1, mark all blocks that are not referenced by any roots.
step 2, check which blocks are referenced by the LRU, if they are, then  
remove them from the LRU.

step 3, recycle free blocks.


But this requires the LRU to be part of the GC.


I think we're already in that boat.  If the LRU isn't attached to the GC,  
then ~= becomes a locking operation even if the GC is thread-local, which  
makes no sense.


-Steve


Re: LRU cache for ~=

2009-10-20 Thread Andrei Alexandrescu

Robert Jacques wrote:
So you want to synchronize the ~= function? I thought the LRU would be 
thread local and therefore independent of these issues, as well as being 
faster. And if the LRU isn't thread-local, then why not make it part of 
the GC? It would both be more general and much simpler/cleaner to 
implement.


I think the cache should be thread-local.

Andrei


Re: The demise of T[new]

2009-10-20 Thread Bill Baxter
On Tue, Oct 20, 2009 at 6:25 AM, Steven Schveighoffer
 wrote:
> On Sun, 18 Oct 2009 17:05:39 -0400, Walter Bright
>  wrote:
>
>> The purpose of T[new] was to solve the problems T[] had with passing T[]
>> to a function and then the function resizes the T[]. What happens with the
>> original?
>>
>> The solution we came up with was to create a third array type, T[new],
>> which was a reference type.
>>
>> Andrei had the idea that T[new] could be dispensed with by making a
>> "builder" library type to handle creating arrays by doing things like
>> appending, and then delivering a finished T[] type. This is similar to what
>> std.outbuffer and std.array.Appender do, they just need a bit of refining.
>>
>> The .length property of T[] would then become an rvalue only, not an
>> lvalue, and ~= would no longer be allowed for T[].
>>
>> We both feel that this would simplify D, make it more flexible, and remove
>> some awkward corner cases like the inability to say a.length++.
>>
>> What do you think?
>
> At the risk of sounding like bearophile -- I've proposed 2 solutions in the
> past for this that *don't* involve creating a T[new] type.
>
> 1. Store the allocated length in the GC structure, then only allow appending
> when the length of the array being appended matches the allocated length.
>
> 2. Store the allocated length at the beginning of the array, and use a bit
> in the array length to determine if it starts at the beginning of the block.
>
> The first solution has space concerns, and the second has lots more
> concerns, but can help in the case of having to do a GC lookup to determine
> if a slice can be appended (you'd still have to lock the GC to do an actual
> append or realloc).  I prefer the first solution over the second.
>
> I like the current behavior *except* for appending.  Most of the time it
> does what you want, and the syntax is beautiful.
>
> In regards to disallowing x ~= y, I'd propose you at least make it
> equivalent to x = x ~ y instead of removing it.

If you're going to do ~= a lot then you should convert to the dynamic
array type.
If you're not going to do ~= a lot, then you can afford to write out x = x ~ y.

The bottom line is that it just doesn't make sense to append onto a
"view" type.  It's really a kind of constness.  Having a view says the
underlying memory locations you are looking at are fixed.  It doesn't
make sense to imply there's an operation that can change those memory
locations (other than shrinking the window to view fewer of them).

--bb


Re: LRU cache for ~=

2009-10-20 Thread Steven Schveighoffer
On Tue, 20 Oct 2009 10:14:52 -0400, Andrei Alexandrescu  
 wrote:



Steven Schveighoffer wrote:
 This is a very good idea.  Incidentally, you only need the upper bound  
location, the beginning location is irrelevant, since you don't grow  
down.


Awesome, didn't think of that. So now more cases are caught:

auto a = new int[100];
a ~= 42;
a = a[50 .. $];
a ~= 52;

That wouldn't have worked with my original suggestion, but it does work  
safely with yours.


It was one of the coolest parts of my original proposal :)  
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=63146


But using a cache solves a lot of the problems I didn't.



What do you do in the case where the memory was recycled?  Does a GC  
collection cycle clean out the cache as well?


As you saw, there was some discussion about that as well.


Yeah, I'm reading in thread order :)  Still got 91 unread messages, so  
maybe I'll read all before replying again...


-Steve


Re: LRU cache for ~=

2009-10-20 Thread Steven Schveighoffer
On Tue, 20 Oct 2009 10:37:40 -0400, Robert Jacques   
wrote:


So you want to synchronize the ~= function? I thought the LRU would be  
thread local and therefore independent of these issues, as well as being  
faster. And if the LRU isn't thread-local, then why not make it part of  
the GC? It would both be more general and much simpler/cleaner to  
implement.


quoting myself earlier:

On Tue, 20 Oct 2009 09:58:01 -0400, Steven Schveighoffer  
 wrote:


In response to other's queries about how many LRUs to use, you'd  
probably want one per heap, and you'd want to lock/not lock based on  
whether the heap is thread local or not.


You need a locked operation in the case where the heap is shared,  
otherwise, you lose safety.


At the moment all we *have* is a shared heap.  So ~= is a synchronized  
operation until thread-local heaps are available.


I think the only logical place for the LRU is the GC; it makes no sense to  
have a shared LRU for an unshared GC or vice versa.


-Steve


Re: LRU cache for ~=

2009-10-20 Thread Robert Jacques
On Tue, 20 Oct 2009 10:05:42 -0400, Steven Schveighoffer  
 wrote:



On Mon, 19 Oct 2009 22:37:26 -0400, dsimcha  wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

dsimcha wrote:
> == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
>> dsimcha wrote:
>>> == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
 dsimcha wrote:
> Started playing w/ the implementation a little and I see a problem.
> What about the garbage collector?  There are two possibilities:
 [snip]
> The only possible solutions I see would be to have the GC know
> everything about the LRU cache and evict stale entries (probably slows
> down GC a lot, a huge PITA to implement, couples things that shouldn't
> be tightly coupled), or clear the cache every time GC is run (probably
> would make appending so slow as to defeat the purpose of having the
> cache).
 I think GC.collect may simply evict the entire cache. The collection
 cycle costs so much, the marginal cost of losing cached information is
 lost in the noise.
 Andrei
>>> But then you have to copy the whole array again, likely triggering
>>> another GC if the array is large.  Then things really get ugly as,
>>> for all practical purposes, you've completely done away with the cache.
>> This happens whether or not a cache is in use.
>> Andrei
>
> But the array isn't guaranteed to get reallocated immediately after
> *every* GC run.  If you're appending to a huge array, the GC will
> likely run several times while you're appending, leading to several
> unnecessary reallocations.
I don't think I understand this.
1. Request for an append comes that runs out of memory
2. GC runs and clears memory
3. Array is reallocated and the capacity cached.
No?


This is entirely correct.


> Each of those unnecessary reallocations will increase the memory
> footprint of your program, possibly triggering another GC run and
> wiping out your cache again in short order, until, for sufficiently
> large arrays,
>
> a ~= b;
>
> is almost equivalent to
>
> a = a ~ b;
I don't understand how the cache makes that all worse.
Andrei


The cache doesn't make anything *worse* than with no cache.  The only  
point I'm
trying to make is that, for large arrays, if the GC clears the cache  
every time it
runs, things would start to get *almost as bad as* having no cache  
because the

copy operations become expensive and the GC may run frequently.


The cache can't be "cleared" every time, or else you might as well only  
keep one LRU entry:


int[] twos, threes;

for(int i = 1; i < 1; i++)
{
   twos ~= i * 2;
   threes ~= i * 3;
}

At some point, twos or threes needs an allocation triggering a  
collection, and that clears the cache, making the other array need an  
allocation, clearing the cache, etc.


I'd think you only want to clear the entries affected by the collection.

-Steve


If it was free and simple to only clear the affected entries, sure. But  
doing so requires (very heavy?) modification of the GC in order to track  
and check changes. It also reduces collection performance. I think, that  
if GC allocations added entries to the LRU, and therefore the information  
in the LRU is never stale, you could avoid clearing the LRU. But this  
requires the LRU to be part of the GC.
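The GC-integrated variant being converged on here might look, very roughly, like the following; the hook name, entry layout, and delegate signature are all invented for illustration and do not correspond to any actual druntime interface:

```d
// Sketch: a collection-time hook that evicts only the LRU entries
// whose block is about to be freed, leaving the rest valid.
// All names are invented.
struct Entry { void* end; size_t capacity; }

Entry[8] lru;

// Imagined to be called by the GC after marking, before recycling
// free blocks; blockIsFree answers whether a pointer's block died.
void evictCollected(bool delegate(void*) blockIsFree)
{
    foreach (ref e; lru)
        if (e.end !is null && blockIsFree(e.end))
            e = Entry.init;   // drop only the stale entry
}

void main()
{
    int x;
    lru[0] = Entry(&x, 8);
    evictCollected(p => p is cast(void*) &x);
    assert(lru[0].end is null);   // entry for the freed block is gone
}
```

This is the "step 2" sweep discussed earlier in the thread: cheap if the LRU lives inside the GC, awkward otherwise.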


Re: LRU cache for ~=

2009-10-20 Thread Robert Jacques
On Tue, 20 Oct 2009 10:14:52 -0400, Andrei Alexandrescu  
 wrote:

Steven Schveighoffer wrote:
On Mon, 19 Oct 2009 14:51:32 -0400, Andrei Alexandrescu  
 wrote:


I just wrote this to Sean and Walter and subsequently discussed it  
with Walter. Walter thinks this should work. Does anyone have the time  
and inclination to test this out? It would involve hacking into  
druntime's implementation of ~= (I'm not sure what the function name  
is). I'd really appreciate this; I'm overloaded as it is.


==

In wake of the recent demise of T[new], I was thinking of finding ways  
of making ~= efficient for T[].


Currently ~= is slow because accessing GC.sizeOf(void*) acquires a  
global lock and generally must figure out a lot of things about the  
pointer to make a decision.


Also, ~= is dangerous because it allows slices to stomp over other  
slices.


I was thinking of solving these issues by keeping an LRU (Least  
Recently Used) cache inside the implementation of ~=. The LRU would  
only have a few entries (4-8) and would store the parameters of the  
last ~= calls, and their cached capacities.


So whenever code calls arr ~= b, the LRU is searched first. If the  
system finds "arr" (both bounds) in the LRU, that means the cached  
capacity is correct and can solve the matter without an actual trip to  
the GC at all! Otherwise, do the deed and cache the new slice and the  
new capacity.


This also solves the lack of safety: if you request a growth on an  
array you just grew, it's impossible  to have a valid slice beyond  
that array.


This LRU would allow us to keep the slice API as it currently is, and  
also at excellent efficiency.


What do you think?

 This is a very good idea.  Incidentally, you only need the upper bound  
location, the beginning location is irrelevant, since you don't grow  
down.


Awesome, didn't think of that. So now more cases are caught:

auto a = new int[100];
a ~= 42;
a = a[50 .. $];
a ~= 52;

That wouldn't have worked with my original suggestion, but it does work  
safely with yours.


What do you do in the case where the memory was recycled?  Does a GC  
collection cycle clean out the cache as well?


As you saw, there was some discussion about that as well.

This is better than my two previous ideas.  The only drawback I see is  
if you have many threads doing appending, or you are appending more  
than 8 arrays at once in a round-robin fashion, you would lose all the  
benefit (although it shouldn't affect correctness).  At that point  
however, you'd have to ask yourself why you aren't using a specialized  
appender type or function.


Yah. As I suspect a lot of code is actually doing round-robin naturally,  
I'm considering using a random eviction strategy. That way performance  
will degrade smoother. A more advanced algorithm would use introspection  
to choose dynamically between LRU and random.



Andrei


So you want to synchronize the ~= function? I thought the LRU would be  
thread local and therefore independent of these issues, as well as being  
faster. And if the LRU isn't thread-local, then why not make it part of  
the GC? It would both be more general and much simpler/cleaner to  
implement.


Re: LRU cache for ~=

2009-10-20 Thread Andrei Alexandrescu

Steven Schveighoffer wrote:
On Mon, 19 Oct 2009 14:51:32 -0400, Andrei Alexandrescu 
 wrote:


I just wrote this to Sean and Walter and subsequently discussed it 
with Walter. Walter thinks this should work. Does anyone have the time 
and inclination to test this out? It would involve hacking into 
druntime's implementation of ~= (I'm not sure what the function name 
is). I'd really appreciate this; I'm overloaded as it is.


==

In wake of the recent demise of T[new], I was thinking of finding ways 
of making ~= efficient for T[].


Currently ~= is slow because accessing GC.sizeOf(void*) acquires a 
global lock and generally must figure out a lot of things about the 
pointer to make a decision.


Also, ~= is dangerous because it allows slices to stomp over other 
slices.


I was thinking of solving these issues by keeping an LRU (Least 
Recently Used) cache inside the implementation of ~=. The LRU would 
only have a few entries (4-8) and would store the parameters of the 
last ~= calls, and their cached capacities.


So whenever code calls arr ~= b, the LRU is searched first. If the 
system finds "arr" (both bounds) in the LRU, that means the cached 
capacity is correct and can solve the matter without an actual trip to 
the GC at all! Otherwise, do the deed and cache the new slice and the 
new capacity.


This also solves the lack of safety: if you request a growth on an 
array you just grew, it's impossible  to have a valid slice beyond 
that array.


This LRU would allow us to keep the slice API as it currently is, and 
also at excellent efficiency.


What do you think?



This is a very good idea.  Incidentally, you only need the upper bound 
location, the beginning location is irrelevant, since you don't grow 
down.


Awesome, didn't think of that. So now more cases are caught:

auto a = new int[100];
a ~= 42;
a = a[50 .. $];
a ~= 52;

That wouldn't have worked with my original suggestion, but it does work 
safely with yours.


What do you do in the case where the memory was recycled?  Does a 
GC collection cycle clean out the cache as well?


As you saw, there was some discussion about that as well.

This is better than my two previous ideas.  The only drawback I see is 
if you have many threads doing appending, or you are appending more than 
8 arrays at once in a round-robin fashion, you would lose all the 
benefit (although it shouldn't affect correctness).  At that point 
however, you'd have to ask yourself why you aren't using a specialized 
appender type or function.


Yah. As I suspect a lot of code is actually doing round-robin naturally, 
I'm considering using a random eviction strategy. That way performance 
will degrade smoother. A more advanced algorithm would use introspection 
to choose dynamically between LRU and random.



Andrei
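To make the proposal concrete, here is a minimal sketch of the cache lookup; the entry layout, cache size, and eviction policy are guesses for illustration (and per Steven's observation, only the slice's upper bound is stored), not druntime's eventual implementation:

```d
// Sketch of the ~= LRU: a handful of (slice end, capacity) pairs.
struct Entry { void* end; size_t capacity; }

Entry[8] cache;   // 4-8 entries, per the proposal

// On arr ~= b: a hit means the cached capacity is valid and no GC
// query (or global lock) is needed; a miss falls back to the GC.
size_t cachedCapacity(void* sliceEnd)
{
    foreach (i, e; cache)
        if (e.end is sliceEnd)
        {
            // move the hit to the front so it is evicted last
            foreach_reverse (j; 1 .. i + 1)
                cache[j] = cache[j - 1];
            cache[0] = e;
            return e.capacity;
        }
    return 0;   // miss: caller consults the GC and inserts an entry
}

void main()
{
    auto a = new int[10];
    cache[0] = Entry(a.ptr + a.length, 16);
    assert(cachedCapacity(a.ptr + a.length) == 16);
    assert(cachedCapacity(a.ptr) == 0);   // different bound: miss
}
```

Matching on the upper bound is also what delivers the safety property: a slice that does not end at the cached end can never stomp past it via ~=.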


Re: LRU cache for ~=

2009-10-20 Thread Steven Schveighoffer

On Mon, 19 Oct 2009 22:37:26 -0400, dsimcha  wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

dsimcha wrote:
> == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
>> dsimcha wrote:
>>> == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
 dsimcha wrote:
> Started playing w/ the implementation a little and I see a problem.
> What about the garbage collector?  There are two possibilities:
 [snip]
> The only possible solutions I see would be to have the GC know
> everything about the LRU cache and evict stale entries (probably slows
> down GC a lot, a huge PITA to implement, couples things that shouldn't
> be tightly coupled), or clear the cache every time GC is run (probably
> would make appending so slow as to defeat the purpose of having the
> cache).
 I think GC.collect may simply evict the entire cache. The collection
 cycle costs so much, the marginal cost of losing cached information is
 lost in the noise.
 Andrei
>>> But then you have to copy the whole array again, likely triggering
>>> another GC if the array is large.  Then things really get ugly as,
>>> for all practical purposes, you've completely done away with the cache.
>> This happens whether or not a cache is in use.
>> Andrei
>
> But the array isn't guaranteed to get reallocated immediately after
> *every* GC run.  If you're appending to a huge array, the GC will
> likely run several times while you're appending, leading to several
> unnecessary reallocations.
I don't think I understand this.
1. Request for an append comes that runs out of memory
2. GC runs and clears memory
3. Array is reallocated and the capacity cached.
No?


This is entirely correct.


> Each of those unnecessary reallocations will increase the memory
> footprint of your program, possibly triggering another GC run and
> wiping out your cache again in short order, until, for sufficiently
> large arrays,
>
> a ~= b;
>
> is almost equivalent to
>
> a = a ~ b;
I don't understand how the cache makes that all worse.
Andrei


The cache doesn't make anything *worse* than with no cache.  The only point I'm
trying to make is that, for large arrays, if the GC clears the cache every time
it runs, things would start to get *almost as bad as* having no cache because
the copy operations become expensive and the GC may run frequently.


The cache can't be "cleared" every time, or else you might as well only  
keep one LRU entry:


int[] twos, threes;

for(int i = 1; i < 1_000_000; i++)
{
  twos ~= i * 2;
  threes ~= i * 3;
}

At some point, twos or threes needs an allocation triggering a collection,  
and that clears the cache, making the other array need an allocation,  
clearing the cache, etc.


I'd think you only want to clear the entries affected by the collection.

-Steve


Re: LRU cache for ~=

2009-10-20 Thread Steven Schveighoffer
On Mon, 19 Oct 2009 14:51:32 -0400, Andrei Alexandrescu  
 wrote:


I just wrote this to Sean and Walter and subsequently discussed it with  
Walter. Walter thinks this should work. Does anyone have the time and  
inclination to test this out? It would involve hacking into druntime's  
implementation of ~= (I'm not sure what the function name is). I'd  
really appreciate this; I'm overloaded as it is.


==

In wake of the recent demise of T[new], I was thinking of finding ways  
of making ~= efficient for T[].


Currently ~= is slow because accessing GC.sizeOf(void*) acquires a  
global lock and generally must figure out a lot of things about the  
pointer to make a decision.


Also, ~= is dangerous because it allows slices to stomp over other  
slices.


I was thinking of solving these issues by keeping an LRU (Least Recently  
Used) cache inside the implementation of ~=. The LRU would only have a  
few entries (4-8) and would store the parameters of the last ~= calls,  
and their cached capacities.


So whenever code calls arr ~= b, the LRU is searched first. If the  
system finds "arr" (both bounds) in the LRU, that means the cached  
capacity is correct and can solve the matter without an actual trip to  
the GC at all! Otherwise, do the deed and cache the new slice and the  
new capacity.


This also solves the lack of safety: if you request a growth on an array  
you just grew, it's impossible  to have a valid slice beyond that array.


This LRU would allow us to keep the slice API as it currently is, and  
also at excellent efficiency.


What do you think?
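The proposal can be sketched in a few lines. Everything below is illustrative guesswork — the names (CapacityCache, lookup, insert), the slot count, and the eviction details are made up, not druntime's actual implementation:

```d
// Illustrative sketch only: names and layout are made up, not druntime's.
struct CapacityCache
{
    static struct Entry { void* ptr; size_t length; size_t capacity; }

    Entry[8] entries;   // the proposed 4-8 slots
    size_t count;

    // Returns the cached capacity if this exact slice was appended to
    // recently, else 0 (meaning: take the slow path through the GC).
    size_t lookup(void* ptr, size_t length)
    {
        foreach (i, e; entries[0 .. count])
        {
            if (e.ptr is ptr && e.length == length)
            {
                // Move the hit to the front: most recently used.
                foreach_reverse (j; 1 .. i + 1)
                    entries[j] = entries[j - 1];
                entries[0] = e;
                return e.capacity;
            }
        }
        return 0;
    }

    // After a real append, remember the resulting slice and its capacity;
    // the least recently used entry falls off the end.
    void insert(void* ptr, size_t length, size_t capacity)
    {
        if (count < entries.length) ++count;
        foreach_reverse (j; 1 .. count)
            entries[j] = entries[j - 1];
        entries[0] = Entry(ptr, length, capacity);
    }
}

void main()
{
    CapacityCache cache;
    cache.insert(cast(void*) 0x1000, 4, 16);
    cache.insert(cast(void*) 0x2000, 8, 32);
    assert(cache.lookup(cast(void*) 0x1000, 4) == 16); // hit
    assert(cache.lookup(cast(void*) 0x3000, 4) == 0);  // miss: ask the GC
}
```

On a hit, ~= could grow in place without the global GC lock; a collection would still have to invalidate (or wholly evict) these entries, which is exactly the concern raised in the replies above.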



This is a very good idea.  Incidentally, you only need the upper bound  
location, the beginning location is irrelevant, since you don't grow  
down.  What do you do in the case where the memory was recycled?  Does a  
GC collection cycle clean out the cache as well?


This is better than my two previous ideas.  The only drawback I see is if  
you have many threads doing appending, or you are appending more than 8  
arrays at once in a round-robin fashion, you would lose all the benefit  
(although it shouldn't affect correctness).  At that point however, you'd  
have to ask yourself why you aren't using a specialized appender type or  
function.


In response to other's queries about how many LRUs to use, you'd probably  
want one per heap, and you'd want to lock/not lock based on whether the  
heap is thread local or not.


-Steve


Re: static arrays becoming value types

2009-10-20 Thread language_fan
Tue, 20 Oct 2009 10:34:35 -0300, Leandro Lucarella thusly wrote:

> dsimcha, el 20 de octubre a las 02:44 me escribiste:
>> == Quote from Walter Bright (newshou...@digitalmars.com)'s article
>> > Currently, static arrays are (as in C) half-value types and
>> > half-reference types. This tends to cause a series of weird problems
>> > and special cases in the language semantics, such as functions not
>> > being able to return static arrays, and out parameters not being
>> > possible to be static arrays.
>> > Andrei and I agonized over this for some time, and eventually came to
>> > the conclusion that static arrays should become value types. I.e.,
>> >T[3]
>> > should behave much as if it were:
>> >struct ??
>> >{
>> >   T[3];
>> >}
>> > Then it can be returned from a function. In particular,
>> >void foo(T[3] a)
>> > is currently done (as in C) by passing a pointer to the array, and
>> > then with a bit of compiler magic 'a' is rewritten as (*a)[3]. Making
>> > this change would mean that the entire array would be pushed onto the
>> > parameter stack, i.e. a copy of the array, rather than a reference to
>> > it. Making this change would clean up the internal behavior of types.
>> > They'll be more orthogonal and consistent, and templates will work
>> > better. The previous behavior for function parameters can be retained
>> > by making it a ref parameter:
>> > void foo(ref T[3] a)
>> 
>> Vote++.  It's funny, I use static arrays so little that I never
>> realized that they weren't passed by value to functions.  I'd
>> absolutely love to be able to just return static arrays from functions,
>> and often use structs to do that now, but using structs feels like a
>> really ugly hack.
> 
> It would be the poor man's tuple for returning (homogeneous) stuff =P

It depends on how you define things. Traditionally tuples are seen as a 
generalization of pairs (2 elements -> n elements). Records, on the other 
hand, are a generalization of tuples (simple numeric index -> named 
elements). You need a couple of additional layers of generalization to 
come up with structs (subtyping, member functions, generics, etc.)

One nasty thing about D's structs is that they don't have a structural 
equivalence relation, unlike tuples. So you need to use the same container 
struct type to get the same semantics. To achieve that you would need 
some kind of STuple at the standard library level, or other kinds of hacks.
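The distinction is easy to demonstrate in a short sketch. Point and Coord below are made-up names, and std.typecons.Tuple stands in for the library tuple being discussed:

```d
import std.typecons : Tuple;

struct Point { int x, y; }
struct Coord { int x, y; }   // identical layout, different name

alias Pair = Tuple!(int, int);

void main()
{
    // Nominal typing: same internal structure, but the names differ,
    // so the types are incompatible.
    static assert(!is(Point : Coord));

    // The library tuple only "works" because both sides refer to the
    // exact same instantiation, Tuple!(int, int).
    Pair a = Pair(1, 2);
    Tuple!(int, int) b = Tuple!(int, int)(1, 2);
    assert(a == b);
}
```

Under structural equivalence, Point and Coord would compare as the same type; with nominal typing, agreeing on one shared container type is the only way to get compatible values.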

What I find unfortunate in D is that your abstractions come in two sizes 
- either you use the modest tiny construct that does not scale elegantly, 
or the enormous hammer to crush things down theatrically.


(Another) XML Module Candidate

2009-10-20 Thread Michel Fortin
I've built an XML parsing library over the last few months as an 
experiment with D2 template capabilities and as an attempt to work 
with ranges. The idea is that if it proves successful enough it could 
become part of Phobos 2. You can use it to do tokenization as an event 
parser or a pull parser or a mix of the two; and there's a small DOM 
built on top of the tokenizer.


It didn't turn out as elegant as I would have hoped but it works and is 
probably  as fast as Tango's pull parser (when used as a tokenizer) but 
I haven't tested. The library certainly is incomplete: well-formedness 
checking is partial, it doesn't support documents with an internal 
subset in the Doctype, the DOM that comes with it is very crude and the 
tokenizer will need some small adaptations to work with arbitrary input 
ranges (currently it accepts strings only, but reads them mostly in a 
range-like way).


I don't have much time to improve it right now, so if someone else 
wants to fix the remaining issues and add more polish perhaps it could 
be a great addition to the D standard library.


Here's the current API docs:



And here's the code:


(It's not mentioned anywhere, but I'm willing to put this code under 
the boost license.)


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Eliminate "new" for class object creation?

2009-10-20 Thread Andrei Alexandrescu

Leandro Lucarella wrote:

Andrei Alexandrescu, el 19 de octubre a las 22:16 me escribiste:

dsimcha wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

Leandro Lucarella wrote:

Jason House, el 19 de octubre a las 22:20 me escribiste:

Bill Baxter Wrote:


On Mon, Oct 19, 2009 at 4:00 PM, Rainer Deyke  wrote:

Andrei Alexandrescu wrote:

I hereby suggest we get rid of new for class object creation. What do
you guys think?

*applause*

'X(x)' and 'new X(x)' have distinct meanings in C++. In Java/C#/D, the
'new' is just line noise.

Well, I think "new Foo" is how you create a struct on the heap in D.
So it's not exactly line noise.
I don't mind getting rid of new, but there better be a good way to
allocate structs on the heap.  And it better not require me to do an
import just to be able to call the allocation function.

I like the Foo.new syntax myself.

--bb

Actually, new can also be used for creating classes on the stack...
scope T t = new T();

Damn! This is getting confusing. It seems like allocation should be
revised altogether :)

Scope will go (and this time I'm not kidding). It's very unsafe.
Andrei

But we need a reasonable way of allocating class instances on the stack as an
optimization.  Scope provides a nice way to do that.  In general, I'm sick of
hearing about safety.  D is a close-to-the-metal systems language.  The 
programmer has to be given control.  In general I think we're going way 
off the deep end trying to make D too safe lately at the expense of 
convenience and performance.

No problem. You will be able to use InSitu!T. It is much better to
confine unsafe features to libraries instead of putting them in the
language.

{
auto foo = InSitu!(Foo)(args);
// use foo
...
// foo goes away
}




Why not Scoped!T? I think the purpose of this is that the lifetime of the
object is bound to the scope, right? It's harder to figure that out
from InSitu!T than from Scoped!T.




It's not a useless discussion; names are important. Scoped is more 
evocative for in-function definition, whereas InPlace/InSitu are (at 
least to me) more evocative when inside a class.


class A {
   InPlace!B member;
}


Andrei


Re: static arrays becoming value types

2009-10-20 Thread Leandro Lucarella
dsimcha, el 20 de octubre a las 02:44 me escribiste:
> == Quote from Walter Bright (newshou...@digitalmars.com)'s article
> > Currently, static arrays are (as in C) half-value types and
> > half-reference types. This tends to cause a series of weird problems and
> > special cases in the language semantics, such as functions not being
> > able to return static arrays, and out parameters not being possible to
> > be static arrays.
> > Andrei and I agonized over this for some time, and eventually came to
> > the conclusion that static arrays should become value types. I.e.,
> >T[3]
> > should behave much as if it were:
> >struct ??
> >{
> >   T[3];
> >}
> > Then it can be returned from a function. In particular,
> >void foo(T[3] a)
> > is currently done (as in C) by passing a pointer to the array, and then
> > with a bit of compiler magic 'a' is rewritten as (*a)[3]. Making this
> > change would mean that the entire array would be pushed onto the
> > parameter stack, i.e. a copy of the array, rather than a reference to it.
> > Making this change would clean up the internal behavior of types.
> > They'll be more orthogonal and consistent, and templates will work better.
> > The previous behavior for function parameters can be retained by making
> > it a ref parameter:
> > void foo(ref T[3] a)
> 
> Vote++.  It's funny, I use static arrays so little that I never realized 
> that they weren't passed by value to functions.  I'd absolutely love to be 
> able to just return static arrays from functions, and often use structs to 
> do that now, but using structs feels like a really ugly hack.

It would be the poor man's tuple for returning (homogeneous) stuff =P

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
The coin machine -- look how well it suits you!
-- Sidharta Kiwi


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Leandro Lucarella
Andrei Alexandrescu, el 19 de octubre a las 22:16 me escribiste:
> dsimcha wrote:
> >== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
> >>Leandro Lucarella wrote:
> >>>Jason House, el 19 de octubre a las 22:20 me escribiste:
> Bill Baxter Wrote:
> 
> >On Mon, Oct 19, 2009 at 4:00 PM, Rainer Deyke  
> >wrote:
> >>Andrei Alexandrescu wrote:
> >>>I hereby suggest we get rid of new for class object creation. What do
> >>>you guys think?
> >>*applause*
> >>
> >>'X(x)' and 'new X(x)' have distinct meanings in C++. In Java/C#/D, the
> >>'new' is just line noise.
> >Well, I think "new Foo" is how you create a struct on the heap in D.
> >So it's not exactly line noise.
> >I don't mind getting rid of new, but there better be a good way to
> >allocate structs on the heap.  And it better not require me to do an
> >import just to be able to call the allocation function.
> >
> >I like the Foo.new syntax myself.
> >
> >--bb
> Actually, new can also be used for creating classes on the stack...
> scope T t = new T();
> >>>Damn! This is getting confusing. It seems like allocation should be
> >>>revised altogether :)
> >>Scope will go (and this time I'm not kidding). It's very unsafe.
> >>Andrei
> >
> >But we need a reasonable way of allocating class instances on the stack as an
> >optimization.  Scope provides a nice way to do that.  In general, I'm sick of
> >hearing about safety.  D is a close-to-the-metal systems language.  The 
> >programmer has to be given control.  In general I think we're going way 
> >off the deep end trying to make D too safe lately at the expense of 
> >convenience and performance.
> 
> No problem. You will be able to use InSitu!T. It is much better to
> confine unsafe features to libraries instead of putting them in the
> language.
> 
> {
> auto foo = InSitu!(Foo)(args);
> // use foo
> ...
> // foo goes away
> }



Why not Scoped!T? I think the purpose of this is that the lifetime of the
object is bound to the scope, right? It's harder to figure that out
from InSitu!T than from Scoped!T.



-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
- Look, Don Inodoro! A dove with a ring on its leg! It must be a
  messenger and it landed here!
- Well... if it's not a messenger, it's a flirt... or married.
-- Mendieta and Inodoro Pereyra


Re: static arrays becoming value types

2009-10-20 Thread dsimcha
== Quote from bearophile (bearophileh...@lycos.com)'s article
> Walter Bright:
> > The previous behavior for function parameters can be retained by making
> > it a ref parameter:
> > void foo(ref T[3] a)
> If I have generic code, like a templated function, that accepts both a dynamic
> and a static array, the function call will change its performance signature
> according to the type (if I don't add a "ref", passing a dynamic array will be
> O(1) while passing a fixed-size array will be O(n)).

Here's a way around that:  To pass a static array by reference to a templated
function that was written with generic ranges in mind, just slice it to make
it a dynamic array:

float[3] foo;
pragma(msg, typeof(foo[]).stringof);  // float[]
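For instance (sum is a made-up stand-in for any slice-taking generic function), slicing passes the static array's storage by reference in O(1):

```d
int sum(int[] r)   // written for dynamic arrays / slices
{
    int s = 0;
    foreach (x; r) s += x;
    return s;
}

void main()
{
    int[3] fixed = [1, 2, 3];
    // fixed[] is an O(1) (pointer, length) pair over the same storage;
    // no per-element copy happens at the call.
    assert(sum(fixed[]) == 6);

    fixed[0] = 10;             // the slice views the same memory
    assert(sum(fixed[]) == 15);
}
```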


Re: T[new] misgivings

2009-10-20 Thread Steven Schveighoffer
On Tue, 20 Oct 2009 09:09:46 -0400, Andrei Alexandrescu  
 wrote:



Steven Schveighoffer wrote:
On Fri, 16 Oct 2009 05:49:12 -0400, Walter Bright  
 wrote:



Don wrote:

There are two sensible options:


I see the question as, is T[new] a value type or a reference type? I  
see it as a reference type, and so assignment should act like a  
reference assignment, not a value assignment.
 Andrei says you think arrays are like slices with some extra  
functionality, but slices are *not* a reference type, they are  
hybrids.  Do you think T[new] arrays should be fully reference types?  
(I do)


I might have misrepresented his position. We both think T[new] is a  
reference type, and it was implemented that way in the now defunct  
feature.



Yeah, I haven't looked at the newsgroup since Thursday, and I had 500 new  
messages to read.  Sorry for responding to this dead thread :)


-Steve


Re: The demise of T[new]

2009-10-20 Thread Steven Schveighoffer
On Sun, 18 Oct 2009 17:05:39 -0400, Walter Bright  
 wrote:


The purpose of T[new] was to solve the problems T[] had with passing T[]  
to a function and then the function resizes the T[]. What happens with  
the original?


The solution we came up with was to create a third array type, T[new],  
which was a reference type.


Andrei had the idea that T[new] could be dispensed with by making a  
"builder" library type to handle creating arrays by doing things like  
appending, and then delivering a finished T[] type. This is similar to  
what std.outbuffer and std.array.Appender do, they just need a bit of  
refining.
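A rough sketch of the builder style with std.array.Appender (using the b[] spelling of current Phobos to read the result; the Phobos of this era exposed it as .data):

```d
import std.array : appender;

void main()
{
    auto b = appender!(int[])();
    foreach (i; 0 .. 1000)
        b.put(i);              // amortized growth, no GC lookup per append

    int[] finished = b[];      // deliver the finished T[] (was b.data)
    assert(finished.length == 1000);
    assert(finished[999] == 999);
}
```

The appender owns the growing buffer, so no slice aliasing it can be stomped by ~=; the plain T[] handed out at the end behaves as an ordinary slice.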


The .length property of T[] would then become an rvalue only, not an  
lvalue, and ~= would no longer be allowed for T[].


We both feel that this would simplify D, make it more flexible, and  
remove some awkward corner cases like the inability to say a.length++.


What do you think?


At the risk of sounding like bearophile -- I've proposed 2 solutions in  
the past for this that *don't* involve creating a T[new] type.


1. Store the allocated length in the GC structure, then only allow  
appending when the length of the array being appended matches the  
allocated length.


2. Store the allocated length at the beginning of the array, and use a bit  
in the array length to determine if it starts at the beginning of the  
block.


The first solution has space concerns, and the second has lots more  
concerns, but can help in the case of having to do a GC lookup to  
determine if a slice can be appended (you'd still have to lock the GC to  
do an actual append or realloc).  I prefer the first solution over the  
second.


I like the current behavior *except* for appending.  Most of the time it  
does what you want, and the syntax is beautiful.


In regards to disallowing x ~= y, I'd propose you at least make it  
equivalent to x = x ~ y instead of removing it.


-Steve


Re: Revamped concurrency API (Don can you contact Bartosz ?)

2009-10-20 Thread Leandro Lucarella
Don, el 20 de octubre a las 09:00 me escribiste:
> >>>Unfortunately, I have undone some of my changes trying to
> >>>bypass the bug, so at the moment I don't even have the buggy
> >>>version, but it can be reconstructed. We can discuss it
> >>>off-line, if you want. Use my email address with -nospam
> >>>removed.
> >>
> >>Bartosz
> >>
> >>I think that Don is the best person to contact you. I will try
> >>to contact him.
> >>
> >>Nick B
> >
> >Don, are you able to contact Bartosz, re the details of this test case.
> >
> >Nick B
> 
> Bartosz has sent it to me. I can reproduce the error. It's my top
> priority, but it'll take a while -- it's nasty.

Do you have a bugzilla # so we can keep track of it?

Thanks.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
TO NEUQUEN: ON THURSDAY A CARAVAN WITH DOGS WILL LEAVE
FROM THE CAPITAL IN SUPPORT OF THE PUPPY SENTENCED TO DEATH
-- Crónica TV


Re: T[new] misgivings

2009-10-20 Thread Andrei Alexandrescu

Steven Schveighoffer wrote:
On Fri, 16 Oct 2009 05:49:12 -0400, Walter Bright 
 wrote:



Don wrote:

There are two sensible options:


I see the question as, is T[new] a value type or a reference type? I 
see it as a reference type, and so assignment should act like a 
reference assignment, not a value assignment.


Andrei says you think arrays are like slices with some extra 
functionality, but slices are *not* a reference type, they are hybrids.  
Do you think T[new] arrays should be fully reference types? (I do)


I might have misrepresented his position. We both think T[new] is a 
reference type, and it was implemented that way in the now defunct feature.



Andrei


Re: Revamping associative arrays

2009-10-20 Thread Steven Schveighoffer
On Sat, 17 Oct 2009 14:28:51 -0400, Andrei Alexandrescu  
 wrote:


Associative arrays are today quite problematic because they don't offer  
any true iteration. Furthermore, the .keys and .values properties create  
new arrays, which is wasteful.


Another issue with associative arrays is that ++a[k] is hacked, which  
reflects a grave language limitation. That needs to be replaced with a  
true facility.


Any other issues with AAs that you want to see fixed, and ideas guiding  
a redesign?


Do not require opCmp for AAs.  There are some good hashmap implementations  
that do not require using a tree for collisions.  This would also eliminate at  
least one pointer in the element struct.


It also causes some problems with using arbitrary classes as keys.  It is  
easy to define a default hash and default opEquals for a class, but it is  
difficult to define a default opCmp.  In fact, the default opCmp in object  
simply throws an exception, making it a nuisance to have the compiler  
allow using a class as a key, and then throwing an exception at runtime  
when you use it.


Removing the requirement for opCmp would also eliminate the requirement  
for opCmp to be in object (it currently by default throws an exception),  
so it could be in an interface instead.
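A sketch of what a class key must provide (Key is a made-up name); under the design criticized above, an opCmp override would be needed on top of these two, and forgetting it only surfaces as an exception at runtime:

```d
class Key
{
    int id;
    this(int id) { this.id = id; }

    // The two things a pure hash table actually needs from a key:
    override size_t toHash() @trusted nothrow { return id; }

    override bool opEquals(Object o)
    {
        auto k = cast(Key) o;
        return k !is null && k.id == id;
    }
}

void main()
{
    int[Key] counts;
    counts[new Key(1)] = 10;
    // A distinct object with equal hash/equality is found via
    // toHash + opEquals; no ordering relation is involved.
    assert(counts[new Key(1)] == 10);
}
```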


-Steve


Re: Revamping associative arrays

2009-10-20 Thread Steven Schveighoffer

On Sat, 17 Oct 2009 14:58:08 -0400, BCS  wrote:


Hello Chris Nicholson-Sauls,


Andrei Alexandrescu wrote:


Associative arrays are today quite problematic because they don't
offer any true iteration. Furthermore, the .keys and .values
properties create new arrays, which is wasteful.
 Another issue with associative arrays is that ++a[k] is hacked, which
reflects a grave language limitation. That needs to be replaced with
a true facility.
 Any other issues with AAs that you want to see fixed, and ideas
guiding a redesign?
 Thanks,
 Andrei


Idea: the .keys and .values properties, rather than creating arrays,
could create iterable ranges with the smallest possible footprint,
internally walking the tree structure.



what will this do?

foreach(key; aa.keys)
   if(Test(key))
  aa.remove(key);



http://www.dsource.org/projects/dcollections/docs/current/dcollections.model.Iterator.html

Search for Purgeable.

I always hated that limitation of not being able to remove elements while  
iterating.  It's one of the things I always despised about Java and C#  
iteration compared to C++.
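Until such a facility exists, the usual workaround is to snapshot the keys before purging (purge is a made-up helper):

```d
// .keys materializes a copy of the key set up front, so mutating the AA
// inside the loop is safe -- at the cost of the extra array allocation
// lamented elsewhere in this thread.
void purge(alias test)(ref int[string] aa)
{
    foreach (key; aa.keys)
        if (test(key))
            aa.remove(key);
}

void main()
{
    int[string] aa = ["a": 1, "bb": 2, "ccc": 3];
    aa.purge!(k => k.length > 1);
    assert(aa.length == 1);
    assert("a" in aa);
}
```

A Purgeable-style iterator would do the same job without copying the key set first.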


-Steve


Re: T[new] misgivings

2009-10-20 Thread Steven Schveighoffer
On Fri, 16 Oct 2009 05:49:12 -0400, Walter Bright  
 wrote:



Don wrote:

There are two sensible options:


I see the question as, is T[new] a value type or a reference type? I see  
it as a reference type, and so assignment should act like a reference  
assignment, not a value assignment.


Andrei says you think arrays are like slices with some extra  
functionality, but slices are *not* a reference type, they are hybrids.   
Do you think T[new] arrays should be fully reference types? (I do)


Otherwise, if you keep the "length is a value type" semantic, you get the  
same crappy appending behavior we have now.


-Steve


Re: Communicating between in and out contracts

2009-10-20 Thread Michel Fortin
On 2009-10-20 08:16:01 -0400, "Steven Schveighoffer" 
 said:


Incidentally, shouldn't all access to the object in the in contract be  
const by default anyways?


Hum, access to everything (including global variables, arguments), not 
just the object, should be const in a contract. That might be harder to 
implement though.



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: 64-bit

2009-10-20 Thread Tomas Lindquist Olsen
On Mon, Oct 19, 2009 at 10:26 PM, Nick Sabalausky  wrote:
> "Fawzi Mohamed"  wrote in message
> news:hbhi5q$1gq...@digitalmars.com...
>> On 2009-10-18 20:01:26 +0200, language_fan  said:
>>
>>> Sun, 18 Oct 2009 16:35:53 +0200, Fawzi Mohamed thusly wrote:

 on x86 the 64 bit extension added registers, that makes it faster, even
 if as you correctly point out a priori just using 64 bit pointers is
 just a drawback unless you have lot of memory.
>>>
>>> That is very silly claim. First, you need to have use for all those extra
>>> registers to obtain any performance benefits. This is nearly not always
>>> the case.
>> Probably you don't know x86 architecture well, it is register starved for
>> modern standards, also with the 64 bit new instruction were added, on x86
>> the 64 bit change was not "add 64-bit pointers" but it was let's try to
>> fix some major shortcomings of x86.
>> These enhancements are available only in 64 bit mode (to keep backward
>> compatibility).
>>
>> I know for a fact that my code runs faster in 64 bit mode (or you can say
>> my compiler optimizes it better), and I am not the only one: for sure
>> apple converted basically all its applications to 64 bit on snow leopard
>> (that is focusing on speed), so that they are slower :P.
>>
>
> I'll certainly agree with you on 64-bit x86 likely being faster than 32-bit,
> but Apple is a bad example. Apple, at its cor...erm..."heart", is a hardware
> company. That's where they make their money. If software runs efficiently,
> then their newer hardware becomes a tougher sell (And Jobs himself has never
> been anything more than a salesman, only with far more control over his
> company than salesmen usually have). It's not surprising that for years,
> every version of iTunes has kept growing noticeably more bloated than the
> last, despite having very little extra.
>
>
>


It's interesting how Apple is doing a lot to better performance, then,
with things like OpenCL and LLVM.


Re: Communicating between in and out contracts

2009-10-20 Thread Steven Schveighoffer
On Sun, 18 Oct 2009 03:44:39 -0400, Rainer Deyke   
wrote:



Andrei Alexandrescu wrote:

Rainer Deyke wrote:
The expression may mutate stuff.


It shouldn't.  It's an error if it does, just like it's an error for an
assertion or post/precondition to have any side effects.

It would be nice if the compiler could catch this error, but failing
that, 'old' expressions are still no worse than assertions in this  
respect.


I'm coming into this a little late, but what Rainer is saying makes sense  
to me.


Would it help to force any access to the object to be treated as if the  
object is const?  i.e.:


old(this.x)

would be interpreted as:

(cast(const(typeof(this)))this).x

and cached in the input contract section.

It seems straightforward that Rainer's solution eliminates the boilerplate  
code of caching values available in the in contract, and if you force  
const access, prevents calling functions which might mutate the state of  
the object.  But it uses the correct contract -- this object is not  
mutable for this call only.  I agree pure is too restrictive, because then  
the object must be immutable, no?


Incidentally, shouldn't all access to the object in the in contract be  
const by default anyways?


-Steve


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Andrei Alexandrescu

Max Samukha wrote:

On Tue, 20 Oct 2009 18:12:39 +0800, Lionello Lunesu
 wrote:


On 20-10-2009 6:38, Andrei Alexandrescu wrote:

I hereby suggest we get rid of new for class object creation. What do
you guys think?

I don't agree with this one.

There's extra cost involved, and the added keyword makes that clear. 
Also, somebody mentioned using 'new' to allocate structs on the heap; 
I've never actually done that, but it sounds like using 'new' would be 
the perfect way to do just that.


L.


I don't think the extra cost should be emphasized with 'new' every
time you instantiate a class. For example, in C#, they use 'new' for
creating structs on stack (apparently to make them consistent with
classes, in a silly way).

I think the rarer cases when a class instance is allocated in-place (a
struct on heap) can be handled by the library.

BTW, why "in-situ" is better in this context than the more common
"in-place"? Would be nice to know.


The term originated with this:

class A {
InSitu!B b;
...
}

meaning that B is embedded inside A. But I guess InPlace is just as good.


Andrei


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Andrei Alexandrescu

Lionello Lunesu wrote:

On 20-10-2009 6:38, Andrei Alexandrescu wrote:

I hereby suggest we get rid of new for class object creation. What do
you guys think?


I don't agree with this one.

There's extra cost involved, and the added keyword makes that clear. 


That's actually one problem: a struct constructor could also execute 
arbitrary amounts of code, so "new" is not as informative as it might be.


Also, somebody mentioned using 'new' to allocate structs on the heap; 
I've never actually done that, but it sounds like using 'new' would be 
the perfect way to do just that.


Yah, I guess I'll drop it.


Andrei


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Yigal Chripun
Chris Nicholson-Sauls Wrote:

> Andrei Alexandrescu wrote:
> > I'm having a hard time justifying that you use
> > 
> > new X(args)
> > 
> > to create a class object, and
> > 
> > X(args)
> > 
> > to create a struct object. I wrote this:
> > 
> > 
> > The syntactic  difference between  the expression creating  a @struct@
> > object---Test(@\meta{args}@)@---and the  expression creating a @class@
> > object---\cc{new Test(}\meta{args}@)@---may be  jarring at first. \dee
> > could have dropped the @new@  keyword entirely, but that @new@ reminds
> > the programmer that an object allocation (i.e., nontrivial work) takes
> > place.
> > ===
> > 
> > I'm unhappy about that explanation because the distinction is indeed 
> > very weak. The constructor of a struct could also do unbounded amounts 
> > of work, so what gives?
> > 
> > I hereby suggest we get rid of new for class object creation. What do 
> > you guys think?
> > 
> > 
> > Andrei
> 
> What would become the equivalent of, for example:
>   new uint[][][](4, 3, 8)
> 
> I can live with having to define API's for custom allocation strategies of 
> classes and 
> structures, rather than being able to hijack a language expression (the way 
> one can with 
> 'new'/'delete'), but what about the non-class new'ables?
> 
> However, if we really must toss the 'new' keyword out the window, I reiterate 
> my support 
> for a 'T new(T,A...)(A a)' in the runtime.
> 
> -- Chris Nicholson-Sauls

slightly ugly but: 
auto arr = (Array!Array!Array!int).new(4, 3, 8);
  //OR 
auto arr = MArray!(int)(4, 3, 8);





Re: Eliminate "new" for class object creation?

2009-10-20 Thread Max Samukha
On Tue, 20 Oct 2009 18:12:39 +0800, Lionello Lunesu
 wrote:

>On 20-10-2009 6:38, Andrei Alexandrescu wrote:
>> I hereby suggest we get rid of new for class object creation. What do
>> you guys think?
>
>I don't agree with this one.
>
>There's extra cost involved, and the added keyword makes that clear. 
>Also, somebody mentioned using 'new' to allocate structs on the heap; 
>I've never actually done that, but it sounds like using 'new' would be 
>the perfect way to do just that.
>
>L.

I don't think the extra cost should be emphasized with 'new' every
time you instantiate a class. For example, in C#, they use 'new' for
creating structs on stack (apparently to make them consistent with
classes, in a silly way).

I think the rarer cases when a class instance is allocated in-place (a
struct on heap) can be handled by the library.

BTW, why "in-situ" is better in this context than the more common
"in-place"? Would be nice to know.


Access Violation after declaration second objet of the same type

2009-10-20 Thread Zarathustra
Sorry for the long code, but I can't find the cause of the problem.

// module window
module window;

private import base;
private import structs;

private static import user32;
private static import kernel32;
private static import gdi32;

private:
extern (Windows) dword wndProc(ptr o_hwnd, dword o_msg, dword o_wparam, dword o_lparam){

  alias user32.EWindowMessage WM;
  Window* l_wnd = (cast(Window*)user32.getWindowLong(o_hwnd, 0x00));
  
  if(o_msg == WM.NCCREATE){
user32.setWindowLong(o_hwnd, 0x00, *(cast(dword*)o_lparam));
  }
  else if(l_wnd is null){
return 0x00;
  }
  
  return (cast(Window*)user32.getWindowLong(o_hwnd, 0x00)).wndProc(o_hwnd, o_msg, o_wparam, o_lparam);
}
extern void d_assert_msg(dword, void*, dword, void*);
public:
class Window{
  private const ptr handle;
  private static WINWndClassEx wndClass;

  void onMouseDown(MouseEventArgs o_mea){
user32.messageBox(null, cast(wstr)"INSIDE", cast(wstr)"msg", 0x00);
  }
  
  static this(){

wndClass.size  = 0x0030;
wndClass.style = 0x0003;
wndClass.wndProc   = cast(ptr)&.wndProc;
wndClass.clsExtraBytes = 0x;
wndClass.wndExtraBytes = 0x0004;
wndClass.hInstance = kernel32.getModuleHandle(null);
wndClass.hIcon = user32.loadIcon(null, 0x7F00);
wndClass.hCursor   = user32.loadCursor(null, 0x7F00);
wndClass.hbrBackground = gdi32.getStockObject(0x);
wndClass.menuName  = null;
wndClass.className = cast(wstr)"clsname";
wndClass.hIconSm   = user32.loadIcon(null, 0x7F00);


if(!user32.registerClassEx(cast(ptr)&wndClass)){
  user32.messageBox(null, 
user32.translateErrorCode(kernel32.getLastError()), cast(wstr)"error", 
0x);
  assert(false, "window class registering failed");
}
  }
  
  this(){
handle = user32.createWindowEx(
  0,
  cast(wstr)"clsname",
  cast(wstr)"",
  0x00CF,
  0x,
  0x,
  0x0280,
  0x01E0,
  null,
  null,
  kernel32.getModuleHandle(null),
  cast(ptr)&this
);

if(handle is null){
  user32.messageBox(null, 
user32.translateErrorCode(kernel32.getLastError()), cast(wstr)"error", 
0x);
  assert(false, "window creating failed");
}
  }

  public void run(){
WINMsg msg;

user32.showWindow(handle, 0x000A);
user32.updateWindow(handle);

while(user32.getMessage(cast(ptr)&msg, null, 0, 0)){
  user32.translateMessage(cast(ptr)&msg);
  user32.dispatchMessage(cast(ptr)&msg);
}
  }

  private dword wndProc(ptr o_hwnd, dword o_msg, dword o_wparam, dword 
o_lparam){

alias user32.EWindowMessage WM;
alias user32.EMouseKey  WK;

switch(o_msg){

  case WM.DESTROY:
user32.postQuitMessage(0x00);
  break;

  case WM.LBUTTONDOWN:
MouseEventArgs l_mea;
l_mea.button = MouseButton.LEFT;
l_mea.location.x = loword(o_lparam);
l_mea.location.y = hiword(o_lparam);
user32.messageBox(null, cast(wstr)"BEFORE", cast(wstr)"msg", 0x00);
onMouseDown(l_mea);
  break;
  
  default: return user32.defWindowProc(o_hwnd, o_msg, o_wparam, o_lparam);
}
return 0;
  }
}

/* the following works correctly when wnd1.onMouseDown */
// main1
void main(){
  
  Window wnd1 = new Window();
  Window wnd2;

  wnd1.run();   
}

/* the following fails when wnd1.onMouseDown */
// main2
void main(){
  
  Window wnd1 = new Window();
  Window wnd2 = new Window(); // allocator

  wnd1.run();   
}

Inside the ctor of the Window class I don't see anything wrong!


Re: LRU cache for ~=

2009-10-20 Thread Don

Andrei Alexandrescu wrote:

Rainer Deyke wrote:

Andrei Alexandrescu wrote:

One surprising (but safe) behavior that remains with slices is this:

void fun(int[] a) {
   a[0] = 0;
   a ~= 42;
   a[0] = 42;
}

The caller may or may not see 42 in the first slot after the call.


Your definition of "safe" is clearly not aligned with mine.



What's yours?


"SafeD is easy to learn and it keeps the programmers away from undefined 
behaviors." -- safed.html.


The behaviour you quoted is undefined behaviour, therefore it's not safe 
according to the only SafeD definition in the spec.
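A self-contained sketch of the behaviour in question (whether the caller sees the write depends entirely on whether the append reallocated):

```d
void fun(int[] a)
{
    a[0] = 0;
    a ~= 42;   // may extend in place or reallocate to a new block
    a[0] = 42; // lands in the old block or the new one accordingly
}

void main()
{
    auto a = new int[1];
    fun(a);
    // a[0] is either 0 or 42 -- exactly the nondeterminism at issue
    assert(a[0] == 0 || a[0] == 42);
}
```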


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Lionello Lunesu

On 20-10-2009 6:38, Andrei Alexandrescu wrote:

I hereby suggest we get rid of new for class object creation. What do
you guys think?


I don't agree with this one.

There's extra cost involved, and the added keyword makes that clear. 
Also, somebody mentioned using 'new' to allocate structs on the heap; 
I've never actually done that, but it sounds like using 'new' would be 
the perfect way to do just that.


L.


Re: Revamped concurrency API

2009-10-20 Thread Tim Matthews

Andrei Alexandrescu wrote:



Speaking of switch, I have tried to convince Walter to require either a 
break; or a goto case xxx; at the end of each snippet inside a switch. I 
was surprised by his answer: "but I use fall through all the time!" :o)


I personally think requiring a goto case xxx; is more robust in presence 
of code maintenance because its semantics is invariant to code moves.



Andrei


First of all, goto case is without a doubt safer and more robust, but 
please leave as much of D as possible compatible with C.


Since D has objects, a lot of code can be polymorphic through the 
classes/interfaces that C didn't have.


C's design is to trust the programmer and provide full power. It is 
unsafe, agreed, but by design. Breaking compatibility between D1, D2, 
etc. may be an issue, but if you lose the C then you lose what defines D.


Re: static arrays becoming value types

2009-10-20 Thread Max Samukha
On Mon, 19 Oct 2009 18:50:46 -0700, Walter Bright
 wrote:

>Currently, static arrays are (as in C) half-value types and 
>half-reference types. This tends to cause a series of weird problems and 
>special cases in the language semantics, such as functions not being 
>able to return static arrays, and out parameters not being possible to 
>be static arrays.
>
>Andrei and I agonized over this for some time, and eventually came to 
>the conclusion that static arrays should become value types. I.e.,
>
>   T[3]
>
>should behave much as if it were:
>
>   struct ??
>   {
>  T[3];
>   }
>
>Then it can be returned from a function. In particular,
>
>   void foo(T[3] a)
>
>is currently done (as in C) by passing a pointer to the array, and then 
>with a bit of compiler magic 'a' is rewritten as (*a)[3]. Making this 
>change would mean that the entire array would be pushed onto the 
>parameter stack, i.e. a copy of the array, rather than a reference to it.
>
>Making this change would clean up the internal behavior of types. 
>They'll be more orthogonal and consistent, and templates will work better.
>
>The previous behavior for function parameters can be retained by making 
>it a ref parameter:
>
>void foo(ref T[3] a)

Hooah!

I guess their .init value won't be fixed to be consistent with other
types?


Re: LRU cache for ~=

2009-10-20 Thread Fawzi Mohamed

On 2009-10-20 03:41:59 +0200, Walter Bright  said:


Denis Koroskin wrote:

Safe as in SafeD (i.e. no memory corruption) :)


Right. The problem with other definitions of safe is that they are too ill-defined.


no (or as little as possible) undefined behaviour comes to mind.
Initialization of values in D follows that.
Slice appending does not (and will remain so also with LRU cache).
The LRU cache improves the current status quo, but is still a kludge: 
undefined behaviour in some corner cases, undefined optimization 
behaviour...
It improves things, but cannot be trusted in general, thus a clean 
library type is still needed imho.


Fawzi



Re: dmd support for IDEs + network GUI

2009-10-20 Thread Nick Sabalausky
"Adam D. Ruppe"  wrote in message 
news:mailman.208.1255923114.20261.digitalmar...@puremagic.com...
> On Mon, Oct 12, 2009 at 09:06:38PM -0400, Nick Sabalausky wrote:
>> Excellent! Sounds exactly like what I had in mind. I'll definately want 
>> to
>> keep an eye on this. Any webpage or svn or anything yet?
>
> I wrote up some of a webpage for it over the weekend:
>
> http://arsdnet.net/dws/
>
> I haven't had a chance to clean up my code yet, so it isn't posted, but
> there's some overview text there, including some implementation details 
> that
> I haven't discussed yet here, but the document still has a long way to go.
>
> But there it is, becoming more organized than anything I've written on it
> before.
>

Great!

> Thanks for your interest and to the rest of the group for letting me
> go off topic like this!
>

The offtopics are some of the most interesting bits! 




Re: static arrays becoming value types

2009-10-20 Thread Don

bearophile wrote:

Don:


I think this change is mandatory. We need it for SIMD operations.<


Why? Why can't the compiler optimize things and perform SIMD operations with 
the fixed-size array semantics of D1? (I ask this for LDC too; that's mostly 
D1 still.)

Bye,
bearophile

Because they are passed by reference. It certainly can't do it on D1:

float dot(float[4] x) {
   x[3] = 4.5; // surprise!
   return 0;
}

void main()
{
  float[4] a;
  a[3]= 0;
  float x = dot(a);
  assert(a[3]==0); // FAILS!
}


Re: static arrays becoming value types

2009-10-20 Thread bearophile
Don:

>I think this change is mandatory. We need it for SIMD operations.<

Why? Why can't the compiler optimize things and perform SIMD operations with 
the fixed-size array semantics of D1? (I ask this for LDC too; that's mostly 
D1 still.)

Bye,
bearophile


Re: static arrays becoming value types

2009-10-20 Thread Kagamin
Walter Bright Wrote:

> Andrei and I agonized over this for some time, and eventually came to 
> the conclusion that static arrays should become value types.

Nothing to agonize about really (except for C compatibility): they're value 
types and their behavior must be consistent.


Re: static arrays becoming value types

2009-10-20 Thread Kagamin
Jason House Wrote:

> I've never heard the argument why they should be value types.

Weren't they value types from the start? That's a surprise. What do you think 
the memory layout of such an array is: int[3][]? And what is the memory layout 
of int[]?
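The layouts being hinted at can be written down as compile-time checks (a slice header is a length plus a pointer; the exact byte counts depend on the platform word size):

```d
// int[3] is a flat block of three ints -- a value type by layout.
static assert((int[3]).sizeof == 3 * int.sizeof);

// int[] is a slice header {length, pointer} referring to data elsewhere.
static assert((int[]).sizeof == size_t.sizeof + (void*).sizeof);

// So int[3][] is a slice whose elements are contiguous 3-int blocks --
// which only makes sense if int[3] behaves as a value type.
void main() {}
```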


Re: static arrays becoming value types

2009-10-20 Thread Ary Borenszweig

Walter Bright wrote:
Currently, static arrays are (as in C) half-value types and 
half-reference types. This tends to cause a series of weird problems and 
special cases in the language semantics, such as functions not being 
able to return static arrays, and out parameters not being possible to 
be static arrays.


Andrei and I agonized over this for some time, and eventually came to 
the conclusion that static arrays should become value types. I.e.,


  T[3]

should behave much as if it were:

  struct ??
  {
 T[3];
  }

Then it can be returned from a function. In particular,

  void foo(T[3] a)

is currently done (as in C) by passing a pointer to the array, and then 
with a bit of compiler magic 'a' is rewritten as (*a)[3]. Making this 
change would mean that the entire array would be pushed onto the 
parameter stack, i.e. a copy of the array, rather than a reference to it.


Making this change would clean up the internal behavior of types. 
They'll be more orthogonal and consistent, and templates will work better.


The previous behavior for function parameters can be retained by making 
it a ref parameter:


   void foo(ref T[3] a)


I don't know why people are agreeing about this. At least I don't 
understand what the problem is with static arrays. You say:


"Currently, static arrays are (as in C) half-value types and 
half-reference types. This tends to cause a series of weird problems and 
special cases in the language semantics, such as functions not being 
able to return static arrays, and out parameters not being possible to 
be static arrays."


But WHY??? What's the specific problem? I understand that passing things 
by value would solve this, but it will hurt performance, and it's not in 
sync with "arrays are passed by reference".


Re: Eliminate "new" for class object creation?

2009-10-20 Thread Kagamin
Andrei Alexandrescu Wrote:

> I'm having a hard time justifying that you use
> 
> new X(args)
> 
> to create a class object, and
> 
> X(args)
> 
> to create a struct object. I wrote this:
> 
> 
> The syntactic  difference between  the expression creating  a @struct@
> object---Test(@\meta{args}@)@---and the  expression creating a @class@
> object---\cc{new Test(}\meta{args}@)@---may be  jarring at first. \dee
> could have dropped the @new@  keyword entirely, but that @new@ reminds
> the programmer that an object allocation (i.e., nontrivial work) takes
> place.
> ===

That's a struct literal, not a struct object. The struct object is on the left-hand 
side. A struct literal calling the constructor looks more like a hack; C++ looks 
more consistent in this respect. And you can create a struct object with the new 
operator.


Re: d3 ?

2009-10-20 Thread dolive
Don wrote:

> dolive wrote:
> > Will D3 appear? What are the tasks? Is it not backward compatible
> > with D2? What major changes?
> > 
> > When will new content stop being added to D2?
> > 
> > thank you very much to all
> > 
> > 
> > dolive
> > 
> 
> http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel

thank you very much !

dolive



Re: Eliminate "new" for class object creation?

2009-10-20 Thread Chris Nicholson-Sauls

Andrei Alexandrescu wrote:

I'm having a hard time justifying that you use

new X(args)

to create a class object, and

X(args)

to create a struct object. I wrote this:


The syntactic  difference between  the expression creating  a @struct@
object---Test(@\meta{args}@)@---and the  expression creating a @class@
object---\cc{new Test(}\meta{args}@)@---may be  jarring at first. \dee
could have dropped the @new@  keyword entirely, but that @new@ reminds
the programmer that an object allocation (i.e., nontrivial work) takes
place.
===

I'm unhappy about that explanation because the distinction is indeed 
very weak. The constructor of a struct could also do unbounded amounts 
of work, so what gives?


I hereby suggest we get rid of new for class object creation. What do 
you guys think?



Andrei


What would become the equivalent of, for example:
new uint[][][](4, 3, 8)

I can live with having to define APIs for custom allocation strategies of classes and 
structures, rather than being able to hijack a language expression (the way one can with 
'new'/'delete'), but what about the non-class new'ables?


However, if we really must toss the 'new' keyword out the window, I reiterate my support 
for a 'T new(T,A...)(A a)' in the runtime.
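Such a runtime function could look roughly like this (sketch only: the helper is named create here because new is a keyword, and a real version would do its own allocation instead of forwarding to the built-in new expression):

```d
// Hypothetical library-level replacement for the `new` expression.
T create(T, A...)(A args) if (is(T == class))
{
    return new T(args); // placeholder: allocate + construct
}

class C
{
    int x;
    this(int x) { this.x = x; }
}

void main()
{
    auto c = create!C(42);
    assert(c.x == 42);
}
```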


-- Chris Nicholson-Sauls


Re: d3 ?

2009-10-20 Thread dolive
Bill Baxter wrote:

> On Mon, Oct 19, 2009 at 2:26 PM, Jason House
>  wrote:
> > dolive Wrote:
> >
> >> Will D3 appear? What are the tasks? Is it not backward compatible
> >> with D2? What major changes?
> >
> > My understanding is that there will be a significant gap between the 
> > finalization of D2 and the start of D3. Bartosz's ownership scheme may be 
> > part of D3.
> >
> >
> >> When will new content stop being added to D2?
> >
> > Earlier this year, I thought there would be no new content now. My wild 
> > guess is early next year.
> 
> FWIW, I got a very apologetic notice from Amazon the other day that
> TDPL was delayed and now expected March 2010.   So certainly D2 should
> be frozen by then.  Of course having it frozen before getting the book out
> was the goal, and an admirable goal.  But y'know it is possible to
> write books about languages that are moving targets.  Look at how many
> zillions of C# books there are covering all the different .NET framework
> versions.  In fact I bet publishers like that, because it means
> they'll get some number of people to buy basically the same book all
> over again, just to get some small updates.
> 
> --bb

thanks to all

 How long will D3 take to complete?



dolive


Re: d3 ?

2009-10-20 Thread Don

dolive wrote:
Will D3 appear? What are the tasks? Is it not backward compatible 
with D2? What major changes?


When will new content stop being added to D2?

thank you very much to all


dolive



http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel


Re: static arrays becoming value types

2009-10-20 Thread Don

Walter Bright wrote:
Currently, static arrays are (as in C) half-value types and 
half-reference types. This tends to cause a series of weird problems and 
special cases in the language semantics, such as functions not being 
able to return static arrays, and out parameters not being possible to 
be static arrays.


Andrei and I agonized over this for some time, and eventually came to 
the conclusion that static arrays should become value types. I.e.,


  T[3]

should behave much as if it were:

  struct ??
  {
 T[3];
  }

Then it can be returned from a function. In particular,

  void foo(T[3] a)

is currently done (as in C) by passing a pointer to the array, and then 
with a bit of compiler magic 'a' is rewritten as (*a)[3]. Making this 
change would mean that the entire array would be pushed onto the 
parameter stack, i.e. a copy of the array, rather than a reference to it.


Making this change would clean up the internal behavior of types. 
They'll be more orthogonal and consistent, and templates will work better.


The previous behavior for function parameters can be retained by making 
it a ref parameter:


   void foo(ref T[3] a)


I think this change is mandatory. We need it for SIMD operations. It 
will allow us to implement efficient vectors.
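To illustrate the point: with T[N] as a true value type, a float[4] parameter is a private copy, so a compiler is free to keep it entirely in an SSE register, something the by-reference D1 passing shown earlier rules out (a sketch, not actual compiler output):

```d
// Safe to vectorize: x and y are copies, so there is no aliasing
// with the caller's arrays.
float dot(float[4] x, float[4] y)
{
    float s = 0;
    foreach (i; 0 .. 4)
        s += x[i] * y[i]; // maps naturally onto mulps/addps
    return s;
}

void main()
{
    float[4] a = [1, 2, 3, 4];
    float[4] b = [1, 1, 1, 1];
    assert(dot(a, b) == 10);
}
```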


Re: Revamped concurrency API (Don can you contact Bartosz ?)

2009-10-20 Thread Don

Nick B wrote:

Nick B wrote:

Bartosz Milewski wrote:

Nick B Wrote:

Could you give us _any_ kind of test case (even if it's enormous)?

Bartosz - are you able to provide a test case as requested by Don ?
Then it might be possible, to get this bug fixed.

Nick B.


I can send you the files I have checked out.
The problem was in core.thread. I tried to implement a struct Tid 
(thread ID) with reference-counting semantics and deterministic 
destruction. It passed all the tests, but when it was used in one 
particular place in druntime it produced incorrect assembly. Even the 
slightest change made the bug disappear, so I wasn't able to 
reproduce it under controlled conditions.


Unfortunately, I have undone some of my changes trying to bypass the 
bug, so at the moment I don't even have the buggy version, but it can 
be reconstructed. We can discuss it off-line, if you want. Use my 
email address with -nospam removed.


Bartosz

I think that Don is the best person to contact you. I will try to 
contact him.


Nick B


Don, are you able to contact Bartosz, re the details of this test case.

Nick B


Bartosz has sent it to me. I can reproduce the error. It's my top 
priority, but it'll take a while -- it's nasty.

