Re: Should reduce take range as first argument?

2012-10-04 Thread monarch_dodra

On Monday, 14 May 2012 at 21:33:19 UTC, Andrei Alexandrescu wrote:


Yah, reduce was not designed for the future benefit of UFCS. (I 
recall take() was redefined towards that vision, and it was a 
good move.)


We can actually deprecate the current order and accept both by 
inserting the appropriate template constraints. We change the 
documentation and examples to reflect the new order, and we 
leave a note saying that the old order is deprecated. We can 
leave the deprecated version in place for a long time. Thoughts?



Andrei


[Resurrecting old thread]

I've run into somebody else having this issue in D.learn very 
recently. I was *going* to propose this change myself, but 
happened upon this thread instead. I think it is a good idea.


Looks like nobody tackled this since. Is it OK if I go ahead and 
implement this?


Just to be clear, the goal is to support both:
reduce(range, seed) and reduce(seed, range)
Then we deprecate reduce(seed, range), and eventually remove it.

I'd rather have a green light here than force the discussion on 
a pull request... :D
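For illustration, here is a minimal sketch of how both argument orders could coexist via template constraints. This is a hypothetical `myReduce` with a single-function seed, not the actual Phobos implementation:

```d
import std.range.primitives : isInputRange;

// New (range, seed) order: friendly to UFCS chaining.
auto myReduce(alias fun, R, S)(R range, S seed)
    if (isInputRange!R && !isInputRange!S)
{
    foreach (e; range)
        seed = fun(seed, e);
    return seed;
}

// Old (seed, range) order: kept alive during the deprecation period,
// simply forwarding to the new order.
auto myReduce(alias fun, S, R)(S seed, R range)
    if (isInputRange!R && !isInputRange!S)
{
    return myReduce!fun(range, seed);
}

unittest
{
    auto arr = [1, 2, 3, 4];
    // UFCS reads naturally with the new order:
    assert(arr.myReduce!((acc, x) => acc + x)(0) == 10);
    // The old order still compiles:
    assert(myReduce!((acc, x) => acc + x)(0, arr) == 10);
}
```

The constraints are never ambiguous because a seed is not an input range, so each call binds to exactly one overload.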


parse and skipWhite

2012-10-04 Thread monarch_dodra
A couple of weeks ago, an issue came up regarding parse's behavior 
with regard to skipping leading whitespace.


http://d.puremagic.com/issues/show_bug.cgi?id=8729
https://github.com/D-Programming-Language/phobos/pull/817
https://github.com/D-Programming-Language/phobos/pull/828

The gist of the conversation is that the current behavior ("do 
not skip leading whitespace") was not consistently enforced, 
because parse!double did skip it. This meant two things:
1) parse!double needed to be "fixed" to behave like the others 
(new pull 833).

2) parse's behavior was put into question:
2.1) parse should be changed to skip leading whitespace.
2.2) parse should keep NOT skipping leading whitespace.

The conversation concluded in favor of 2.2: do not skip whitespace.

Another proposal was made, though: introduce a "skipWhite" 
function that would take and return its argument by reference, 
and also work on ranges, two things "std.string.stripLeft" does 
not do. This would allow syntax such as:

string ss = ...
double d = ss.skipWhite().parse!double();
while (!ss.skipWhite().empty)
    ss.parse!double().writeln();

I proposed this in (currently closed) pr 827 
https://github.com/D-Programming-Language/phobos/pull/827
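For illustration, a minimal sketch of what such a skipWhite could look like, built only on std.range and std.uni primitives (this is not the actual pull-request code):

```d
import std.range.primitives : empty, front, popFront, isInputRange;
import std.uni : isWhite;

// Consumes leading whitespace from the range *in place* and returns
// the same range by reference, so it chains with parse via UFCS.
ref R skipWhite(R)(ref R range)
    if (isInputRange!R)
{
    while (!range.empty && isWhite(range.front))
        range.popFront();
    return range;
}

unittest
{
    import std.conv : parse;

    string ss = "  3.5  2.5";
    double d = ss.skipWhite().parse!double();
    assert(d == 3.5);      // parsed past the skipped whitespace
    assert(ss == "  2.5"); // ss itself was advanced by both calls
}
```

Because both skipWhite and parse mutate the range they are given, the chained call advances `ss` itself, which is exactly what the while-loop idiom above relies on.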


MY ISSUE THAT I WOULD LIKE DISCUSSED HERE IS:
The introduction of "skipWhite" would make the move to the "do 
not skip whitespace" behavior permanent: if this function exists, 
surely it is because parse does not skip whitespace.


If we don't introduce it, though, then should it turn out we DO 
want to make parse skip whitespace after all, we won't have a 
useless function bothering us.


So yeah, what are your thoughts: do you want to see "skipWhite" 
in std.conv? Or do you think it would be better to just do 
without it? Keep in mind, it *is* convenient! :D


Re: D3 suggestion: rename "range" to "sequence"

2012-10-04 Thread Russel Winder
On Thu, 2012-10-04 at 05:05 +0200, Alex Rønne Petersen wrote:
[…]
> Look, not to sound dismissive, but D3 is not a thing and likely will 
> never be.

D v3 will come if it comes. Implying D3 will never happen is, I believe,
the wrong sort of impression to put out onto the interwebs; it implies
that D v2 is a dead end in the evolutionary stakes.  So for the moment D
v2 is what there is and is what is being evolved.  Subtly different
intentions and expectations, more open-ended.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


signature.asc
Description: This is a digitally signed message part


Re: parse and skipWhite

2012-10-04 Thread Sönke Ludwig
Am 10/4/2012 10:48 AM, schrieb monarch_dodra:
> A couple weeks ago, an issue came up regarding parse's behavior in
> regards to skipping leading ws.
> 
> http://d.puremagic.com/issues/show_bug.cgi?id=8729
> https://github.com/D-Programming-Language/phobos/pull/817
> https://github.com/D-Programming-Language/phobos/pull/828
> 
> The gist of the conversation is that the current behavior ("do not skip
> leading whitespace") was not consistently enforced, because parse!double
> did skip it. This meant two things:
> 1) parse!double needed to be "fixed" to behave like the others. (new
> pull 833)
> 2) parse's behavior was put into question:
> 2.1) parse should be changed to skip leading whitespace.
> 2.2) parse should keep NOT skipping leading whitespace.
> 
> The conversation concluded towards 2.2: Do not skip WS.
> 
> Another proposal was made though, to introduce a "skipWhite" function
> that would take and return by reference, and also work on ranges, two
> things "std.String.stripLeft" does not do. This would allow this syntaxes:
> string ss = ...
> double d = ss.skipWhite().parse!double();
> while(!ss.skipWhite().empty)
> ss.parse!double().writeln();
> 
> I proposed this in (currently closed) pr 827
> https://github.com/D-Programming-Language/phobos/pull/827
> 
> MY ISSUE THAT I WOULD LIKE DISCUSSED HERE IS:
> The introduction of "skipWhite" would make the move to the "do not
> skip whitespace" behavior permanent: if this function exists, surely
> it is because parse does not skip whitespace.
> 
> If we don't introduce it though, it means that if it turns out we DO
> want to make parse skip ws, then we won't have that useless function
> bothering us.
> 
> So yeah, what are your thought, do you want to see "skipWhite" in
> std.conv? Or do you think it would be better to just do without it? Keep
> in mind, it *is* convenient though! :D

I'm not sure std.conv is the best place for skipWhite() - after 
all, it doesn't convert anything - but I would definitely prefer 
the "parse doesn't skip" solution. It avoids silent failures in 
cases where whitespace is actually not intended, and it naturally 
avoids making assumptions about what counts as whitespace.


Re: Proposal: clean up semantics of array literals vs string literals

2012-10-04 Thread Don Clugston

On 02/10/12 17:14, Andrei Alexandrescu wrote:

On 10/2/12 7:11 AM, Don Clugston wrote:

The problem
---

String literals in D are a little bit magical; they have a trailing \0.

[snip]

I don't mean to be Debbie Downer on this because I reckon it addresses
an issue that some have, although I never do. With that warning, a few
candid opinions follow.

First, I think zero-terminated strings shouldn't be needed frequently
enough in D code to make this necessary.


[snip]

You're missing the point, a bit. The zero-terminator is only one symptom 
of the underlying problem: string literals and array literals have the 
same type but different semantics.

The other symptoms are:
* the implicit .dup that happens with array literals, but not string 
literals.
This is a silent performance killer. It's probably the most common 
performance bug we find in our code, and it's completely ungreppable.


* string literals are polysemous with width (c, w, d) but array literals 
are not (they are polysemous with constness).

For example,
"abc" ~ 'ü'
is legal, but
['a', 'b', 'c'] ~ 'ü'
is not.
This has nothing to do with the zero terminator.



Re: Proposal: clean up semantics of array literals vs string literals

2012-10-04 Thread Bernard Helyer
On Tuesday, 2 October 2012 at 15:14:10 UTC, Andrei Alexandrescu 
wrote:
First, I think zero-terminated strings shouldn't be needed 
frequently enough in D code to make this necessary.


My experience has been much different. Interfacing with C occurs
in nearly every D program I write, and I usually end up passing
a string literal. Anecdotes!




Re: Proposal: clean up semantics of array literals vs string literals

2012-10-04 Thread Jakob Ovrum

On Thursday, 4 October 2012 at 07:57:16 UTC, Bernard Helyer wrote:
On Tuesday, 2 October 2012 at 15:14:10 UTC, Andrei Alexandrescu 
wrote:
First, I think zero-terminated strings shouldn't be needed 
frequently enough in D code to make this necessary.


My experience has been much different. Interfacing with C occurs
in nearly every D program I write, and I usually end up passing
a string literal. Anecdotes!


Agreed. I'm always happy when I find that the particular C API I 
am working with supports passing strings as a pointer/length pair 
:)


Anyway, toStringz (and the wchar and dchar equivalents in 
std.utf) needs to be fixed regardless - it currently does a 
dangerous optimization if the string is immutable, otherwise it 
unconditionally concatenates. We cannot rely on strings being GC 
allocated based on mutability. Memory is outside the scope of the 
D type system - we cannot make assumptions about memory based on 
types.




Re: It seems pure ain't so pure after all

2012-10-04 Thread Tommi

On Tuesday, 2 October 2012 at 01:00:25 UTC, Walter Bright wrote:


Since all you need to do to guarantee compile time evaluation 
is use it in a context that requires CTFE, which are exactly 
the cases where you'd care that it was CTFE'd, I just don't see 
much utility here.


I suppose the most common use case would be efficient struct 
literals which are essentially value types but have non-trivial 
constructors.


struct Law
{
    ulong _encodedId;

    this(string state, int year) @aggressive_ctfe
    {
        // non-trivial constructor sets _encodedId
        // ...
    }
}

Policy policy = getPolicy();

if (policy.isLegalAccordingTo(Law("Kentucky", 1898)))
{
    // ...
}

I think the function attribute would be the most convenient 
solution.
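For comparison, Walter's point above can be shown with today's language: an enum (or static immutable) initializer is a context that *requires* CTFE, so construction is guaranteed to happen at compile time. The encoding below is a made-up stand-in for the non-trivial constructor:

```d
struct Law
{
    ulong _encodedId;

    this(string state, int year)
    {
        // stand-in for the real non-trivial encoding logic
        _encodedId = cast(ulong)year * 100 + state.length;
    }
}

// An enum initializer forces CTFE; no runtime construction happens.
enum kentucky1898 = Law("Kentucky", 1898);

// Verifiable at compile time: 1898 * 100 + "Kentucky".length
static assert(kentucky1898._encodedId == 189_808);
```

The attribute proposal would remove the need to name the value separately, but the guarantee itself is already obtainable this way.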



Note that it is also impossible in the general case for the 
compiler to guarantee that a specific function is CTFE'able for 
all arguments that are also CTFE'able.


I'll have to take your word for it, since I don't know enough 
(read: anything) about the subject.


Re: Idea: Introduce zero-terminated string specifier

2012-10-04 Thread Paulo Pinto

On Tuesday, 2 October 2012 at 13:07:46 UTC, deadalnix wrote:

Le 01/10/2012 22:33, Vladimir Panteleev a écrit :

On Monday, 1 October 2012 at 12:12:52 UTC, deadalnix wrote:

Le 01/10/2012 13:29, Vladimir Panteleev a écrit :

On Monday, 1 October 2012 at 10:56:36 UTC, deadalnix wrote:

How does to!string know that the string is 0 terminated ?


By convention (it doesn't).


It is unsafe as hell oO


Forcing the programmer to put strlen calls everywhere in his 
code is not

any safer.


It makes the library safer. If the programmer manipulates unsafe 
constructs (like C strings), it is up to the programmer to ensure 
safety, not the library.


Trusting the programmer is what brought upon us the wrath of 
security exploits via buffer overflows.


--
Paulo


T.init and @disable this

2012-10-04 Thread monarch_dodra

I'm trying to find out the exact semantics of

@disable this();

It is not well documented, and the fact that it is (supposedly) 
buggy makes it really confusing.


My understanding is that it "merely" makes it illegal to 
default-initialize your type: you, the developer, have to specify 
the initial value.


//
T t; //error: an initializer is required for this type
//
Which means you, the developer, must explicitly choose an 
initial value.


However, DOES or DOES NOT this remain legal?
//
T t = T.init; //Fine: You chose the initializer T.init
//

Keep in mind it is not possible to make "T.init" itself 
disappear, because nothing can be constructed unless T.init is 
first memcpy'd onto the object, before any constructor proper is 
called.


I think this should be legal, because you, the developer, are 
asking for it, just the same way one can write "T t = void".
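For illustration, a sketch of the semantics in question as they appear to work (hedged, since the implementation was reportedly still buggy at the time):

```d
struct T
{
    int _x = -1;     // T.init still exists, with _x == -1
    @disable this(); // forbids default initialization "T t;"
    this(int x) { _x = x; }
}

unittest
{
    // Default initialization is rejected:
    static assert(!__traits(compiles, { T t; }));

    T a = T(5);   // explicit construction is fine
    T b = T.init; // the case in question: explicitly chosen T.init
    assert(a._x == 5);
    assert(b._x == -1);
}
```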


Making it illegal would pretty much make T unmovable, 
un-emplaceable, impossible to initialize onto uninitialized 
memory, and would probably break more than one function/trait 
that uses "T.init".


Feedback?


Re: Proposal: clean up semantics of array literals vs string literals

2012-10-04 Thread Bernard Helyer

On Tuesday, 2 October 2012 at 14:03:36 UTC, monarch_dodra wrote:
If you want 0 termination, then make it explicit, that's my 
opinion.


That ship has long since sailed. You'd break code in an
incredibly dangerous way if you changed it now.
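For reference, the explicit approach quoted above already exists in the form of std.string.toStringz at the C boundary:

```d
import core.stdc.stdio : printf;
import std.string : toStringz;

// Explicit zero termination at the C boundary, instead of relying
// on the string literal's implicit trailing \0.
void greet(string name)
{
    printf("hello, %s\n", name.toStringz);
}

unittest
{
    import core.stdc.string : strlen;

    auto p = "world".toStringz;
    assert(strlen(p) == 5); // toStringz guarantees the terminator
    greet("world");
}
```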


Re: T.init and @disable this

2012-10-04 Thread Jonathan M Davis
On Thursday, October 04, 2012 10:18:14 monarch_dodra wrote:
> Making it illegal would pretty much make T unmovable,
> un-emplaceable, impossible to initialize onto uninitialized memory,

That's kind of the point. If that's not what you want, don't disable init.

> and
> would probably break more than one function/trait which uses
> "T.init"

Which is a definite downside to the whole disabling init idea.

As for T.init and constructors, it should be perfectly possible to initialize 
the object to what T.init would be prior to construction without making T.init 
available at all.

- Jonathan M Davis


Re: T.init and @disable this

2012-10-04 Thread monarch_dodra
On Thursday, 4 October 2012 at 09:25:01 UTC, Jonathan M Davis 
wrote:

On Thursday, October 04, 2012 10:18:14 monarch_dodra wrote:

Making it illegal would pretty much make T unmovable,
un-emplaceable, impossible to initialize onto uninitialized memory,


That's kind of the point. If that's not what you want, don't 
disable init.


Hum... I had never actually thought of it that way!


and
would probably break more than one function/trait which uses
"T.init"


Which is a definite downside to the whole disabling init idea.

As for T.init and constructors, it should be perfectly possible 
to initialize
the object to what T.init would be prior to construction 
without making T.init

available at all.

- Jonathan M Davis


Good point too.


"instanceOf" trait for conditional implementations

2012-10-04 Thread monarch_dodra
One of the issues I've been running into more or less frequently 
lately is the inability to obtain an instance of a type when 
writing conditional implementation constraints.


Let me explain myself. Using T.init is a no-go, because:
a) T might have "@disable this()", making T.init illegal syntax 
(in theory; currently "buggy").
b) T.init is NOT an lvalue, making code such as 
"is(typeof(T.init = 5))" invalid.
c) You can try to use "T t = void", but you may also run into 
problems:

c.1) If T is immutable, that's illegal.
c.2) The compiler may complain if you use t, due to access to 
uninitialized memory.


This makes it a pain in the ass, as shown in this thread:
http://forum.dlang.org/thread/mailman.224.1348358069.5162.digitalmars-d-le...@puremagic.com

Or this pull request:
https://github.com/D-Programming-Language/phobos/pull/832

The current implementation for "isAssignable" is
//
template isAssignable(Lhs, Rhs)
{
    enum bool isAssignable = is(typeof({
        Lhs l = void;
        void f(Rhs r) { l = r; }
        return l;
    }));
}
//

The code is correct, but when you have to jump through that many 
hoops, you have to admit there is probably something wrong.


Imagine you are writing a template "Foo" that only works if it 
is legal to pass an instance of T to a function Bar. Are you 
*really* going to write the same thing as above, inside a single 
conditional if?!


//**
I'd like to propose an "instanceOf(T)" traits template that 
would return an lvalue instance of T. It would be used (strictly) 
for evaluating conditional implementations, or for the 
implementation of traits templates.


//
template instanceOf(T)
{
    static if (is(typeof({ T t; })))
        T instanceOf;
    else
        T instanceOf = void;
}
//

Now, watch this
//
template isAssignable(T, U)
{
    enum bool isAssignable = is(typeof(instanceOf!T = instanceOf!U));
}

struct S
{
    @disable this();
}

void main()
{
    static assert( isAssignable!(int, int));
    static assert( isAssignable!(int, immutable(int)));
    static assert(!isAssignable!(immutable(int), int));
    static assert( isAssignable!(S, immutable(S))); //Tricky test BTW
}
//
See? Easy peasy.

And this is just a "simple" test case: assign-ability. There are 
a bunch of other traits templates which would benefit here.


And that's the "tip of the iceberg": There are a TON of 
algorithms that use .init in their implementation restrictions: 
"if (Range.init.front ...)"


instanceOf would be a convenient way to support any type, 
regardless of construct-ability (or lack thereof).
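For illustration, here is how the hypothetical "pass an instance of T to Bar" scenario from above would look; bar is a made-up stand-in, and the instanceOf definition is repeated to keep the example self-contained:

```d
// The proposed trait: an lvalue instance of T, obtainable even for
// types that cannot be constructed normally.
template instanceOf(T)
{
    static if (is(typeof({ T t; })))
        T instanceOf;
    else
        T instanceOf = void;
}

void bar(double d) {} // stand-in for the hypothetical Bar

// Foo only accepts T if an instance of T can be passed to bar:
template Foo(T)
    if (is(typeof(bar(instanceOf!T))))
{
    enum Foo = true;
}

unittest
{
    static assert(Foo!int); // int implicitly converts to double
    static assert(!__traits(compiles, Foo!string)); // string does not
}
```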


//
That's my idea anyway; my ideas have been consistently destroyed 
recently, so I won't mind if you think this one is bad too, or 
that the approach is wrong.


I *do* think that being able to extract an instance of a type 
without worrying about how to actually acquire one...


Re: Will the D GC be awesome?

2012-10-04 Thread Alex Rønne Petersen

On 04-10-2012 08:50, Jacob Carlborg wrote:

On 2012-10-04 01:33, Alex Rønne Petersen wrote:


Use tuples. Multiple return values (as far as ABI goes) are impractical
because every major compiler back end (GCC, LLVM, ...) would have to be
adjusted for every architecture.


Why can't it just be syntax sugar for returning a struct?



I agree that it should be, FWIW. The problem is that some people really 
expect the ABI to be altered, which is unrealistic.


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Will the D GC be awesome?

2012-10-04 Thread Alex Rønne Petersen

On 04-10-2012 08:49, Jacob Carlborg wrote:

On 2012-10-04 00:01, DypthroposTheImposter wrote:

  Did that hook thing to let peoples write custom GC ever make it in?


Yes, it's pluggable at link time. Here's an example of a stub
implementation:

http://www.dsource.org/projects/tango/browser/trunk/tango/core/rt/gc/stub

It's for Tango but the runtimes are basically the same.



More relevant to D2: 
https://github.com/D-Programming-Language/druntime/tree/master/src/gcstub


(Though admittedly nobody has built it for a while - so, disclaimer: 
there may be some silly build errors if you try to build it, but they 
should be easy to fix.)


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Idea: Introduce zero-terminated string specifier

2012-10-04 Thread Regan Heath
On Thu, 04 Oct 2012 01:05:14 +0100, Steven Schveighoffer  
 wrote:


On Wed, 03 Oct 2012 08:37:14 -0400, Regan Heath   
wrote:


On Tue, 02 Oct 2012 21:44:11 +0100, Steven Schveighoffer  
 wrote:
In fact, a better solution would be to define a C string type (other  
than char *), and just pretend those system calls return that.  Then  
support that C string type in writef.


-Steve


:D
http://comments.gmane.org/gmane.comp.lang.d.general/97793



Almost what I was thinking.

:)

Though, at that point, I don't think we need a special specifier for  
writef.  %s works.


True.

However, looking at the vast reach of these changes, I wonder if it's  
worth it.  That's a lot of prototypes to C functions that have to  
change, and a large compiler change (treating string literals as CString  
instead of char *), just so C strings print out with writef.


That's not the only motivation.  The change brings more type safety in  
general and should help to catch bugs, like for example the common one  
made by people just starting out with D (from a C/C++ background).



Not to mention code that will certainly break...


Some code will definitely stop compiling, but it's debatable as to whether  
this code is not already "broken" to some degree.. it's likely not as  
safe/robust as it could be.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: References in D

2012-10-04 Thread "Alex Burton"
On Saturday, 15 September 2012 at 21:30:03 UTC, Walter Bright 
wrote:

On 9/15/2012 5:39 AM, Henning Pohl wrote:
The way D is dealing with classes reminds me of pointers 
because you can null
them. C++'s references cannot (of course you can do some nasty 
casting).


Doing null references in C++ is simple:

int *p = NULL;
int& r = *p;

r = 3; // crash



IMHO int * p = NULL is a violation of the type system and should 
not compile.

NULL can in no way be considered a pointer to an int.

In the same way, this should fail:

class A
{
}
A a;

Low level programmers might know that references are implemented 
in the microprocessor as memory locations holding addresses of 
other memory locations, but high level programmers should not 
need to know this.


A separate special syntax should be used by low level code in D.
In the vast majority of code, having nullable references is a 
source of bugs.


Passing null to a function expecting a reference/pointer to 
something is equivalent to passing a random number, and is the 
same as mixing a bicycle into a recipe asking for a cup of sugar.


In cases where you really want to pass a value that could be a 
reference to something or could be null, use a special type that 
allows this.
A clever library writer might be able to implement such a type 
using their low level knowledge of pointers but the rest of us 
should be protected from it.


Re: A study on immutability usage

2012-10-04 Thread Russel Winder
On Thu, 2012-10-04 at 03:49 +0200, Jesse Phillips wrote:
[…]
> So, you are saying that these examples are exhibiting the same 
> problems because they are based on the same design?
> 
> I don't see how that would invalidate the results. That is, I 
> don't see the relevance here.

My comment relates to Jeff's point that what you find is determined by
what you are looking for.  Also that what people write is determined by
what they have been told is good and so the use of GoF patterns is what
will be found now as then. No commentary was intended on any previous
posts in the thread.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: References in D

2012-10-04 Thread bearophile

Timon Gehr:

To quote (loosely) Mr. Walter Bright from another discussion: 
how many

current bugs in dmd are related to default null references?


More than zero.


A >0 frequency of bugs caused by something can't be enough to 
justify a language feature. You need a "high enough" frequency :-)


--

Alex Burton:


Doing null references in C++ is simple:

int *p = NULL;
int& r = *p;

r = 3; // crash



IMHO int * p = NULL is a violation of the type system and 
should not compile.

NULL can in no way be considered a pointer to an int.


I don't agree. int* is a raw pointer, and a raw pointer is 
allowed to contain a null, so the first line is OK.


The problem is in the second line: in a better designed language 
this line needs to be a compile-time error, because p can be 
null, while r can't be null:


int& r = *p;

The language has to force you to initialize the reference with 
something that is valid.


Bye,
bearophile


std.concurrency and fibers

2012-10-04 Thread Alex Rønne Petersen

Hi,

We currently have std.concurrency as a message-passing mechanism. We 
encourage people to use it instead of OS threads, which is great. 
However, what is *not* great is that spawned tasks correspond 1:1 to OS 
threads. This is not even remotely scalable for Erlang-style 
concurrency. There's a fairly simple way to fix that: Fibers.


The only problem with adding fiber support to std.concurrency is that 
the interface is just not flexible enough. The current interface is 
completely and entirely tied to the notion of threads (contrary to what 
its module description says).


Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply 
speak of 'tasks'. This trivially allows us to use threads, fibers, 
whatever to back the module. I personally think this is the best way to 
build a message-passing abstraction because it gives enough transparency 
to *actually* distribute tasks across machines without things breaking.
B) We make the module capable of backing tasks with both threads and 
fibers, and expose an interface that allows the user to choose what kind 
of task is spawned. I'm *not* convinced this is a good approach because 
it's extremely error-prone (imagine doing a thread-based receive inside 
a fiber-based task!).
C) We just swap out threads with fibers and document that the module 
uses fibers. See my comments in A for why I'm not sure this is a good idea.


All of these are going to break code in one way or another - that's 
unavoidable. But we really need to make std.concurrency grow up; other 
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form) 
for years, and if we want D to be seriously usable for large-scale 
concurrency, we need to have them too.


Thoughts? Other ideas?

--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: "instanceOf" trait for conditional implementations

2012-10-04 Thread monarch_dodra

On Thursday, 4 October 2012 at 09:43:08 UTC, monarch_dodra wrote:

[SNIP]


You know what? Forget I said anything. Using lambdas is an 
incredibly good workaround for all the problems stated above, 
making the initial point moot.


Lambda if:

template isAssignable(T, U)
{
    enum bool isAssignable =
        is(typeof((ref T t, ref U u) => {t = u;}));
}

This allows testing on t and u, without ever having to 
construct/acquire them. Neat-o!!!


Re: Will the D GC be awesome?

2012-10-04 Thread Jacob Carlborg

On 2012-10-04 12:58, Alex Rønne Petersen wrote:


More relevant to D2:
https://github.com/D-Programming-Language/druntime/tree/master/src/gcstub

(Though admittedly nobody has built it for a while - so, disclaimer:
there may be some silly build errors if you try to build it, but they
should be easy to fix.)


There it is; I've been looking for the corresponding one in druntime.

--
/Jacob Carlborg


Re: std.concurrency and fibers

2012-10-04 Thread Timon Gehr

On 10/04/2012 01:32 PM, Alex Rønne Petersen wrote:

Hi,

We currently have std.concurrency as a message-passing mechanism. We
encourage people to use it instead of OS threads, which is great.
However, what is *not* great is that spawned tasks correspond 1:1 to OS
threads. This is not even remotely scalable for Erlang-style
concurrency. There's a fairly simple way to fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that
the interface is just not flexible enough. The current interface is
completely and entirely tied to the notion of threads (contrary to what
its module description says).

Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply
speak of 'tasks'. This trivially allows us to use threads, fibers,
whatever to back the module. I personally think this is the best way to
build a message-passing abstraction because it gives enough transparency
to *actually* distribute tasks across machines without things breaking.
B) We make the module capable of backing tasks with both threads and
fibers, and expose an interface that allows the user to choose what kind
of task is spawned. I'm *not* convinced this is a good approach because
it's extremely error-prone (imagine doing a thread-based receive inside
a fiber-based task!).
C) We just swap out threads with fibers and document that the module
uses fibers. See my comments in A for why I'm not sure this is a good idea.

All of these are going to break code in one way or another - that's
unavoidable. But we really need to make std.concurrency grow up; other
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form)
for years, and if we want D to be seriously usable for large-scale
concurrency, we need to have them too.

Thoughts? Other ideas?



+1, but what about TLS?


Re: "instanceOf" trait for conditional implementations

2012-10-04 Thread so

On Thursday, 4 October 2012 at 09:43:08 UTC, monarch_dodra wrote:


The current implementation for "isAssignable" is
//
template isAssignable(Lhs, Rhs)
{
enum bool isAssignable = is(typeof({
Lhs l = void;
void f(Rhs r) { l = r; }
return l;
}));
}


OT - Is there any reason for disabling UFCS for this?

template isAssignable(Lhs, Rhs)
{
    enum bool isAssignable =
    {
        Lhs l = void;
        void f(Rhs r) { l = r; }
        return l;
    }.typeof.is;
}


Re: std.concurrency and fibers

2012-10-04 Thread Alex Rønne Petersen

On 04-10-2012 14:11, Timon Gehr wrote:

On 10/04/2012 01:32 PM, Alex Rønne Petersen wrote:

Hi,

We currently have std.concurrency as a message-passing mechanism. We
encourage people to use it instead of OS threads, which is great.
However, what is *not* great is that spawned tasks correspond 1:1 to OS
threads. This is not even remotely scalable for Erlang-style
concurrency. There's a fairly simple way to fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that
the interface is just not flexible enough. The current interface is
completely and entirely tied to the notion of threads (contrary to what
its module description says).

Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply
speak of 'tasks'. This trivially allows us to use threads, fibers,
whatever to back the module. I personally think this is the best way to
build a message-passing abstraction because it gives enough transparency
to *actually* distribute tasks across machines without things breaking.
B) We make the module capable of backing tasks with both threads and
fibers, and expose an interface that allows the user to choose what kind
of task is spawned. I'm *not* convinced this is a good approach because
it's extremely error-prone (imagine doing a thread-based receive inside
a fiber-based task!).
C) We just swap out threads with fibers and document that the module
uses fibers. See my comments in A for why I'm not sure this is a good
idea.

All of these are going to break code in one way or another - that's
unavoidable. But we really need to make std.concurrency grow up; other
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form)
for years, and if we want D to be seriously usable for large-scale
concurrency, we need to have them too.

Thoughts? Other ideas?



+1, but what about TLS?


I think that no matter what we do, we have to simply say "don't do that" 
to thread-local state (it would break in distributed scenarios too, for 
instance).


Instead, I think we should do what the Rust folks did: Use *task*-local 
state and leave it up to std.concurrency to figure out how to deal with 
it. It won't be as 'seamless' as TLS variables in D of course, but I 
think it's good enough in practice.


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: std.concurrency and fibers

2012-10-04 Thread Timon Gehr

On 10/04/2012 02:22 PM, Alex Rønne Petersen wrote:

On 04-10-2012 14:11, Timon Gehr wrote:

On 10/04/2012 01:32 PM, Alex Rønne Petersen wrote:

Hi,

We currently have std.concurrency as a message-passing mechanism. We
encourage people to use it instead of OS threads, which is great.
However, what is *not* great is that spawned tasks correspond 1:1 to OS
threads. This is not even remotely scalable for Erlang-style
concurrency. There's a fairly simple way to fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that
the interface is just not flexible enough. The current interface is
completely and entirely tied to the notion of threads (contrary to what
its module description says).

Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply
speak of 'tasks'. This trivially allows us to use threads, fibers,
whatever to back the module. I personally think this is the best way to
build a message-passing abstraction because it gives enough transparency
to *actually* distribute tasks across machines without things breaking.
B) We make the module capable of backing tasks with both threads and
fibers, and expose an interface that allows the user to choose what kind
of task is spawned. I'm *not* convinced this is a good approach because
it's extremely error-prone (imagine doing a thread-based receive inside
a fiber-based task!).
C) We just swap out threads with fibers and document that the module
uses fibers. See my comments in A for why I'm not sure this is a good
idea.

All of these are going to break code in one way or another - that's
unavoidable. But we really need to make std.concurrency grow up; other
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form)
for years, and if we want D to be seriously usable for large-scale
concurrency, we need to have them too.

Thoughts? Other ideas?



+1, but what about TLS?


I think that no matter what we do, we have to simply say "don't do that"
to thread-local state (it would break in distributed scenarios too, for
instance).

Instead, I think we should do what the Rust folks did: Use *task*-local
state and leave it up to std.concurrency to figure out how to deal with
it. It won't be as 'seamless' as TLS variables in D of course, but I
think it's good enough in practice.



If it is not seamless, we have failed. IMO the runtime should expose an
interface for allocating TLS, switching between TLS instances and
destroying TLS.

What about the stack? Allocating a fixed-size stack per task is costly
and Walter opposes dynamic stack growth.


Re: "instanceOf" trait for conditional implementations

2012-10-04 Thread monarch_dodra

On Thursday, 4 October 2012 at 12:48:51 UTC, so wrote:
On Thursday, 4 October 2012 at 09:43:08 UTC, monarch_dodra 
wrote:



The current implementation for "isAssignable" is
//
template isAssignable(Lhs, Rhs)
{
   enum bool isAssignable = is(typeof({
   Lhs l = void;
   void f(Rhs r) { l = r; }
   return l;
   }));
}


OT - Is there any reason for disabling UFCS for this?

template isAssignable(Lhs, Rhs)
{
enum bool isAssignable =
{
Lhs l = void;
void f(Rhs r) { l = r; }
return l;
}.typeof.is;
}


Because typeof is a keyword, and is is a declaration.

There are two open ERs to allow typeof to be used as a property. 
While not strictly UFCS, it would be convenient.


Regarding is, there are no open requests. But I guess it would be 
fun to write

if( T.is(U) ).

But where would it end? Make if UFCS too?
T.is(U).if()

IMO, I can see typeof being a property, but not is.
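
For reference, a minimal sketch of how the quoted trait behaves as-is, with no UFCS involved (the extra static asserts are illustrative, not part of the original post):

```d
template isAssignable(Lhs, Rhs)
{
    enum bool isAssignable = is(typeof({
        Lhs l = void;
        void f(Rhs r) { l = r; }
        return l;
    }));
}

static assert( isAssignable!(int, int));
static assert( isAssignable!(long, int));      // int implicitly converts to long
static assert(!isAssignable!(int, string));    // no conversion from string to int
static assert(!isAssignable!(const int, int)); // const lhs cannot be assigned to
```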


Re: Will the D GC be awesome?

2012-10-04 Thread Jacob Carlborg

On 2012-10-04 12:59, Alex Rønne Petersen wrote:


I agree that it should be, FWIW. The problem is that some people really
expect the ABI to be altered, which is unrealistic.


Is there an advantage of altering the ABI?

--
/Jacob Carlborg


Re: Will the D GC be awesome?

2012-10-04 Thread Alex Rønne Petersen

On 04-10-2012 14:26, Jacob Carlborg wrote:

On 2012-10-04 12:59, Alex Rønne Petersen wrote:


I agree that it should be, FWIW. The problem is that some people really
expect the ABI to be altered, which is unrealistic.


Is there an advantage of altering the ABI?



Presumably speed; returning small structs in registers will be faster 
than doing so on the stack.


But I don't agree that the vast complexity of altering well-established 
ABIs for multiple architectures is worth that speed gain.


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: References in D

2012-10-04 Thread Timon Gehr

On 10/04/2012 01:38 PM, bearophile wrote:

Timon Gehr:


To quote (loosely) Mr. Walter Bright from another discussion: how many
current bugs in dmd are related to default null references?


More than zero.


A >0 frequency of bugs caused by something can't be enough to justify a
language feature. You need a "high enough" frequency :-)



A correct program contains no errors of any frequency.

Your claim only holds for errors related to the programmer failing to
create a program that parses as he intended it to.

No program errors are 'caused' by invalid references.


--

Alex Burton:
...


(please do not destroy the threading)


Re: std.concurrency and fibers

2012-10-04 Thread Alex Rønne Petersen

On 04-10-2012 14:48, Timon Gehr wrote:

On 10/04/2012 02:22 PM, Alex Rønne Petersen wrote:

On 04-10-2012 14:11, Timon Gehr wrote:

On 10/04/2012 01:32 PM, Alex Rønne Petersen wrote:

Hi,

We currently have std.concurrency as a message-passing mechanism. We
encourage people to use it instead of OS threads, which is great.
However, what is *not* great is that spawned tasks correspond 1:1 to OS
threads. This is not even remotely scalable for Erlang-style
concurrency. There's a fairly simple way to fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that
the interface is just not flexible enough. The current interface is
completely and entirely tied to the notion of threads (contrary to what
its module description says).

Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply
speak of 'tasks'. This trivially allows us to use threads, fibers,
whatever to back the module. I personally think this is the best way to
build a message-passing abstraction because it gives enough
transparency
to *actually* distribute tasks across machines without things breaking.
B) We make the module capable of backing tasks with both threads and
fibers, and expose an interface that allows the user to choose what
kind
of task is spawned. I'm *not* convinced this is a good approach because
it's extremely error-prone (imagine doing a thread-based receive inside
a fiber-based task!).
C) We just swap out threads with fibers and document that the module
uses fibers. See my comments in A for why I'm not sure this is a good
idea.

All of these are going to break code in one way or another - that's
unavoidable. But we really need to make std.concurrency grow up; other
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form)
for years, and if we want D to be seriously usable for large-scale
concurrency, we need to have them too.

Thoughts? Other ideas?



+1, but what about TLS?


I think that no matter what we do, we have to simply say "don't do that"
to thread-local state (it would break in distributed scenarios too, for
instance).

Instead, I think we should do what the Rust folks did: Use *task*-local
state and leave it up to std.concurrency to figure out how to deal with
it. It won't be as 'seamless' as TLS variables in D of course, but I
think it's good enough in practice.



If it is not seamless, we have failed. IMO the runtime should expose an
interface for allocating TLS, switching between TLS instances and
destroying TLS.


I suppose it could be done.

But keep in mind the side-effects of an approach like this: Some 
thread-local variables (for instance, think 'chunk' inside emplace) 
would break (or at least behave very weirdly) if you switch the *entire* 
TLS context when entering a task.


Sure, we could use the runtime interface for TLS switching only for 
task-local state, but then we're back to square one with it not being 
seamless.




What about the stack? Allocating a fixed-size stack per task is costly
and Walter opposes dynamic stack growth.


Yeah, I never understood why. It's essential for functional-style code 
running in constrained tasks. It's not just about conserving memory; 
it's to make recursion feasible.


In any case, fibers currently allocate PAGE_SIZE * 4 bytes for stacks.

--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Will the D GC be awesome?

2012-10-04 Thread Jacob Carlborg

On 2012-10-04 14:36, Alex Rønne Petersen wrote:


Presumably speed; returning small structs in registers will be faster
than doing so on the stack.


Are structs currently always returned on the stack?


But I don't agree that the vast complexity of altering well-established
ABIs for multiple architectures is worth that speed gain.


I agree.

--
/Jacob Carlborg


Re: Will the D GC be awesome?

2012-10-04 Thread Alex Rønne Petersen

On 04-10-2012 15:21, Piotr Szturmaj wrote:

Jacob Carlborg wrote:

On 2012-10-04 14:36, Alex Rønne Petersen wrote:


Presumably speed; returning small structs in registers will be faster
than doing so on the stack.


Are structs currently always returned on the stack?


From: http://dlang.org/abi.html, for Windows x86 extern(D):

* 1, 2 and 4 byte structs are returned in EAX.
* 8 byte structs are returned in EDX,EAX, where EDX gets the most
significant half.
* For other struct sizes, the return value is stored through a hidden
pointer passed as an argument to the function.


I strongly advise ignoring the D calling convention. Only DMD implements 
it and nowhere else than on Windows for 32-bit x86.


Instead, refer to the Windows and System V x86 ABIs.

--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Will the D GC be awesome?

2012-10-04 Thread Piotr Szturmaj

Jacob Carlborg wrote:

On 2012-10-04 14:36, Alex Rønne Petersen wrote:


Presumably speed; returning small structs in registers will be faster
than doing so on the stack.


Are structs currently always returned on the stack?


From: http://dlang.org/abi.html, for Windows x86 extern(D):

* 1, 2 and 4 byte structs are returned in EAX.
* 8 byte structs are returned in EDX,EAX, where EDX gets the most 
significant half.
* For other struct sizes, the return value is stored through a hidden 
pointer passed as an argument to the function.


Re: Will the D GC be awesome?

2012-10-04 Thread Alex Rønne Petersen

On 04-10-2012 15:06, Jacob Carlborg wrote:

On 2012-10-04 14:36, Alex Rønne Petersen wrote:


Presumably speed; returning small structs in registers will be faster
than doing so on the stack.


Are structs currently always returned on the stack?


As always, it depends on the arch, but on 32-bit x86: Yes. On 64-bit 
x86: Yes, if the struct size is larger than 8 bytes (otherwise it's 
returned in RAX).
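
To illustrate the size threshold (a sketch only; the exact registers depend on the platform ABI and calling convention, as discussed above):

```d
// 8 bytes: small enough for register return, e.g. EDX:EAX on 32-bit x86
// extern(D), or RAX on 64-bit System V.
struct Small { int a, b; }

// 24 bytes: too large for registers, so the caller passes a hidden
// pointer and the callee stores the result through it.
struct Large { long x, y, z; }

Small makeSmall() { return Small(1, 2); }    // register return
Large makeLarge() { return Large(1, 2, 3); } // hidden-pointer return
```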





But I don't agree that the vast complexity of altering well-established
ABIs for multiple architectures is worth that speed gain.


I agree.



--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Will the D GC be awesome?

2012-10-04 Thread Tommi
I wonder if tuples should automatically expand (flatten) into a 
list of arguments not only when used with template parameters, 
but with regular functions as well. That would enable composing 
tuple-returning functions with functions that take individual 
arguments instead of a tuple, like this:


Tuple!(int, int) getTuple()
{
//...
}

void fun(int arg1, int arg2)
{
//...
}

void main()
{
fun( getTuple() );
}


Re: openMP

2012-10-04 Thread David Nadlinger

On Wednesday, 3 October 2012 at 23:02:25 UTC, dsimcha wrote:
So the "process which creates the future" is a Task that 
executes in a different thread than the caller?  And an 
alternative way that a value might become available in the 
future is e.g. if it's being retrieved from some slow I/O 
process like a database or network?


Yes.


For example, let's say you are writing a function which 
computes a complex database query from its parameters and then 
submits it to your query manager/connection pool/… for 
asynchronous execution. You cannot use std.parallelism.Task in 
this case, because there is no way of expressing the process 
which retrieves the result as a delegate running inside a 
TaskPool.


Ok, I'm confused here.  Why can't the process that retrieves 
the result be expressed as a delegate running in a TaskPool or 
a new thread?


Because you already have a system in place for managing these 
tasks, which is separate from std.parallelism. A reason for this 
could be that you are using a third-party library like libevent. 
Another could be that the type of workload requires additional 
problem knowledge of the scheduler so that different tasks don't 
tread on each others's toes (for example communicating with some 
servers via a pool of sockets, where you can handle several 
concurrent requests to different servers, but can't have two task 
read/write to the same socket at the same time, because you'd 
just send garbage).


Really, this issue is just about extensibility and/or 
flexibility. The design of std.parallelism.Task assumes that all 
values which "becomes available at some point in the future" are 
the product of a process for which a TaskPool is a suitable 
scheduler. C++ has std::future separate from std::promise, C# has 
Task vs. TaskCompletionSource, etc.



The second problem with std.parallelism.Task is that your only 
choice is polling (or blocking, for that matter). Yes, 
callbacks are a hairy thing to do if you can't be sure what 
thread they are executed on, but not having them severely 
limits the power of your abstraction, especially if you are 
dealing with non-CPU-bound tasks (as many of today's "modern" 
use cases are).


I'm a little confused about how the callbacks would be used 
here.
 Is the idea that some callback would be called when the task 
is finished?  Would it be called in the worker thread or the 
thread that submitted the task to the pool?  Can you provide a 
use case?


Maybe using the word "callback" was a bit misleading, but it 
callback would be invoked on the worker thread (or by whoever 
invokes the hypothetical Future.complete() method).


Probably most trivial use case would be to set a condition 
variable in it in order to implement a waitAny(Task[]) method, 
which waits until the first of a set of tasks is completed. Ever 
wanted to wait on multiple condition variables? Or used select() 
with multiple sockets? This is what I mean.
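
The waitAny idea can be sketched with plain threads and a shared condition variable (purely an illustration; none of these names are std.parallelism API, and the worker bodies are left empty):

```d
import core.sync.condition : Condition;
import core.sync.mutex : Mutex;
import core.thread : Thread;

void main()
{
    auto m = new Mutex;
    auto c = new Condition(m);
    int firstDone = -1; // index of the first finished worker, guarded by m

    // Helper so each thread's closure captures its own copy of idx.
    void spawn(int idx)
    {
        new Thread({
            // ... do the actual work for task `idx` here ...
            synchronized (m)
            {
                if (firstDone == -1)
                    firstDone = idx;
                c.notify(); // completion "callback": signal the waiter
            }
        }).start();
    }

    foreach (i; 0 .. 3)
        spawn(i);

    synchronized (m)
        while (firstDone == -1)
            c.wait();
    // firstDone now identifies the first worker that completed
}
```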


For more advanced/application-level use cases, just look at any 
use of ContinueWith in C#. std::future::then() is also proposed 
for C++, see e.g. 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3327.pdf.


I didn't really read the N3327 paper in detail, but from a 
brief look it seems to be a nice summary of what you might want 
to do with tasks/asynchronous results – I think you could find 
it an interesting read.


David


Re: T.init and @disable this

2012-10-04 Thread Andrej Mitrovic
On 10/4/12, monarch_dodra  wrote:
> I'm trying to find out the exact semantics of
>
> @disable this();

Also this still works:

struct Foo
{
@disable this();
}

void main()
{
Foo foo = Foo();
}

I really don't know why though.. Isn't this a bug? A workaround in
user-code is defining a disabled static opCall:

struct Foo
{
@disable this();
@disable static void opCall();
}


Re: "instanceOf" trait for conditional implementations

2012-10-04 Thread kenji hara
2012/10/4 monarch_dodra :
> One of the issues I've been running into more or less frequently lately is
> the inability to extract an instance of a type when trying to do conditional
> implementation specification.
>
> Let me explain myself. Using T.init is a no go, because:
> a) T might be "@disabled this()", making T.init illegal syntax (in theory,
> currently "buggy")

I think T.init should always be legal.

> b) T.init is NOT an LValue, making code such is "is(typeof(T.init = 5))"
> invalid

> c) You can try to use "T t = void", but you may also run into problems:
> c)1) If T is immutable, that's illegal.
> c)2) The compiler may complain if you use t, due to access to uninitialized.

IMO, this is just a compiler bug. If a variable has VoidInitializer,
it should always become a runtime value.

Kenji Hara


Re: T.init and @disable this

2012-10-04 Thread Maxim Fomin
On Thursday, 4 October 2012 at 17:37:58 UTC, Andrej Mitrovic 
wrote:

On 10/4/12, monarch_dodra  wrote:

I'm trying to find out the exact semantics of

@disable this();


Also this still works:

struct Foo
{
@disable this();
}

void main()
{
Foo foo = Foo();
}

I really don't know why though.. Isn't this a bug? A workaround 
in

user-code is defining a disabled static opCall:

struct Foo
{
@disable this();
@disable static void opCall();
}


http://d.puremagic.com/issues/show_bug.cgi?id=8703



Re: "instanceOf" trait for conditional implementations

2012-10-04 Thread David Nadlinger

On Thursday, 4 October 2012 at 17:44:48 UTC, kenji hara wrote:
a) T might be "@disabled this()", making T.init illegal syntax 
(in theory,

currently "buggy")


I think T.init should always be legal.


What would be the semantics of T.init for a type with @disable 
this()? It isn't a valid value, for sure...


David


Re: References in D

2012-10-04 Thread Jonathan M Davis
On Thursday, October 04, 2012 13:14:00 Alex Burton, @gmail.com wrote:
> On Saturday, 15 September 2012 at 21:30:03 UTC, Walter Bright
> 
> wrote:
> > On 9/15/2012 5:39 AM, Henning Pohl wrote:
> >> The way D is dealing with classes reminds me of pointers
> >> because you can null
> >> them. C++'s references cannot (of course you can do some nasty
> >> casting).
> > 
> > Doing null references in C++ is simple:
> > 
> > int *p = NULL;
> > int& r = *p;
> > 
> > r = 3; // crash
> 
> IMHO int * p = NULL is a violation of the type system and should
> not compile.
> NULL can in no way be considered a pointer to an int.

Um. What? It's perfectly legal for pointers to be null. The fact that *p 
doesn't blow up is a bit annoying, but it makes sense from an implementation 
standpoint and doesn't really cost you anything other than a bit of locality 
between the bug and the crash.

> In the same way this should fail:
> Class A
> {
> 
> }
> A a;

And why would this fail? It's also perfectly legal.

- Jonathan M Davis


Re: "instanceOf" trait for conditional implementations

2012-10-04 Thread Simen Kjaeraas

On 2012-37-04 11:10, monarch_dodra  wrote:

One of the issues I've been running into more or less frequently lately  
is the inability to extract an instance of a type when trying to do  
conditional implementation specification.


Let me explain myself. Using T.init is a no go, because:
a) T might be "@disabled this()", making T.init illegal syntax (in  
theory, currently "buggy")
b) T.init is NOT an LValue, making code such is "is(typeof(T.init = 5))"  
invalid

c) You can try to use "T t = void", but you may also run into problems:
c)1) If T is immutable, that's illegal.
c)2) The compiler may complain if you use t, due to access to  
uninitialized.

[snip: Good stuff]

I like this, and have had the exact same thought.

There is one thing I don't like, and that is your instanceOf could be
used to create an uninitialized T. In the interest of that (and brevity)
let me present my solution:

@property T instanceOf( T )( );

It works great in static if, and fails at link-time. The error message is
however far from perfect:
 Error 42: Symbol Undefined  
_D3foo24__T10instanceOfTS3foo1SZ10instanceOfFNdZS3foo1S


But I think that is better than giving developers a tool for instantiating
types that are not meant to be instantiated.

--
Simen


Re: Will the D GC be awesome?

2012-10-04 Thread Timon Gehr

On 10/04/2012 05:16 PM, Tommi wrote:

I wonder if tuples should automatically expand (flatten) into a list of
arguments not only when used with template parameters, but with regular
functions as well. That would enable composing tuple-returning functions
with functions that take individual arguments instead of a tuple, like
this:

Tuple!(int, int) getTuple()
{
 //...
}

void fun(int arg1, int arg2)
{
 //...
}

void main()
{
 fun( getTuple() );
}


No, it should not.

void main(){
fun(getTuple().expand);
}
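
Spelled out as a complete program (a sketch using std.typecons; the literal values are illustrative):

```d
import std.typecons : Tuple, tuple;

Tuple!(int, int) getTuple()
{
    return tuple(1, 2);
}

void fun(int arg1, int arg2)
{
    assert(arg1 == 1 && arg2 == 2);
}

void main()
{
    // .expand flattens the Tuple into an argument list explicitly,
    // so there is no ambiguity about how many arguments are passed.
    fun(getTuple().expand);
}
```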


Re: Will the D GC be awesome?

2012-10-04 Thread Simen Kjaeraas

On 2012-40-04 00:10, Tommi  wrote:


(tuples automatically expand if needed)


False. Typetuples do, but those cannot be returned from functions.

--
Simen


Re: Will the D GC be awesome?

2012-10-04 Thread Simen Kjaeraas

On 2012-27-04 07:10, Walter Bright  wrote:


* OpCmp returning an int is fugly I r sad


How else would you return a 3 state value?


enum Comparison {
Before = -1,
Same = 0,
After = 1,
Unordered = NaN,
}

I'm not saying it should be done, but it would be more readable
(and more complex to write).
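
For contrast, the conventional int-returning opCmp is short to write (a minimal sketch with made-up field names):

```d
struct Version
{
    int major, minor;

    // Negative if this sorts before rhs, zero if equal, positive if after.
    int opCmp(const Version rhs) const
    {
        if (major != rhs.major)
            return major < rhs.major ? -1 : 1;
        if (minor != rhs.minor)
            return minor < rhs.minor ? -1 : 1;
        return 0;
    }
}

static assert(Version(1, 0) < Version(2, 0)); // `<` is lowered to opCmp(...) < 0
```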

--
Simen


Re: openMP

2012-10-04 Thread dsimcha
Ok, I think I see where you're coming from here.  I've replied to 
some points below just to make sure and discuss possible 
solutions.


On Thursday, 4 October 2012 at 16:07:35 UTC, David Nadlinger 
wrote:

On Wednesday, 3 October 2012 at 23:02:25 UTC, dsimcha wrote:


Because you already have a system in place for managing these 
tasks, which is separate from std.parallelism. A reason for 
this could be that you are using a third-party library like 
libevent. Another could be that the type of workload requires 
additional problem knowledge of the scheduler so that different 
tasks don't tread on each others's toes (for example 
communicating with some servers via a pool of sockets, where 
you can handle several concurrent requests to different 
servers, but can't have two task read/write to the same socket 
at the same time, because you'd just send garbage).


Really, this issue is just about extensibility and/or 
flexibility. The design of std.parallelism.Task assumes that 
all values which "becomes available at some point in the 
future" are the product of a process for which a TaskPool is a 
suitable scheduler. C++ has std::future separate from 
std::promise, C# has Task vs. TaskCompletionSource, etc.


I'll look into these when I have more time, but I guess what it 
boils down to is the need to separate the **abstraction** of 
something that returns a value later (I'll call that 
**abstraction** futures) from the **implementation** provided by 
std.parallelism (I'll call this **implementation** tasks), which 
was designed only with CPU-bound tasks and multicore in mind.


On the other hand, I like std.parallelism's simplicity for 
handling its charter of CPU-bound problems and multicore 
parallelism.  Perhaps the solution is to define another Phobos 
module that models the **abstraction** of futures and provide an 
adapter of some kind to make std.parallelism tasks, which are a 
much lower-level concept, fit this model.  I don't think the 
**general abstraction** of a future should be defined in 
std.parallelism, though.  std.parallelism includes 
parallelism-oriented things besides tasks, e.g. parallel map, 
reduce, foreach.  Including a more abstract model of values that 
become available later would make its charter too unfocused.




Maybe using the word "callback" was a bit misleading, but it 
callback would be invoked on the worker thread (or by whoever 
invokes the hypothetical Future.complete() method).


Probably most trivial use case would be to set a condition 
variable in it in order to implement a waitAny(Task[]) method, 
which waits until the first of a set of tasks is completed. 
Ever wanted to wait on multiple condition variables? Or used 
select() with multiple sockets? This is what I mean.


Well, implementing something like ContinueWith or Future.complete 
for std.parallelism tasks would be trivial, and I see how waitAny 
could easily be implemented in terms of this.  I'm not sure I 
want to define an API for this in std.parallelism, though, until 
we have something like a std.future and the **abstraction** of a 
future is better-defined.




For more advanced/application-level use cases, just look at any 
use of ContinueWith in C#. std::future::then() is also proposed 
for C++, see e.g. 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3327.pdf.


I didn't really read the N3327 paper in detail, but from a 
brief look it seems to be a nice summary of what you might want 
to do with tasks/asynchronous results – I think you could 
find it an interesting read.


I don't have time to look at these right now, but I'll definitely 
look at them sometime soon.  Thanks for the info.




Re: Will the D GC be awesome?

2012-10-04 Thread Tommi

On Thursday, 4 October 2012 at 18:12:09 UTC, Timon Gehr wrote:


void main(){
fun(getTuple().expand);
}


Great, that works for me. It would probably be confusing if 
tuples expanded automatically; it would be non-obvious whether 
you'd be passing one argument or multiple.


Re: T.init and @disable this

2012-10-04 Thread kenji hara
2012/10/4 monarch_dodra :
> I'm trying to find out the exact semantics of
>
> @disable this();
>
> It is not well documented, and the fact that it is (supposedly) buggy makes
> it really confusing.
>
> My understanding is that it "merely" makes it illegal to default
> initialization your type: You, the developer, have to specify the initial
> value.
>
> //
> T t; //initializer required for type
> //
> Which means, you, the developer, must explicitly choose an initial value.
>
> However, DOES or DOES NOT this remain legal?
> //
> T t = T.init; //Fine: You chose the initializer T.init
> //
>
> Keep in mind it is not possible to make "T.init" itself disappear, because
> nothing can be constructed if T.init is not first memcopied onto the object,
> before calling any constructor proper.
>
> I think this should be legal, because you, the developer, are asking for it,
> just the same way one can write "T t = void".

I think that T.init is legal even if T has only a @disable this()
constructor. If not,
> Making it illegal would pretty much make T unmoveable, un-emplaceable,
> un-initializeable on un-initialized memmory, and would probably break more
> than one function/trait which uses "T.init"

But, I also agree that T.init is _sometimes_ *unsafe*.
1) If T has @disable this(), T.init will return an object which is just
initialized (== the value itself is never undefined), but not
constructed (it might be a logically invalid object).
2) If T is a nested struct, its frame pointer is always null. Calling a
member function on it might cause an access violation.
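
The nested-struct point can be demonstrated with a function-local struct (a sketch; the commented-out call is the one that would blow up):

```d
void main()
{
    int local = 42;

    struct Nested
    {
        int get() { return local; } // reads `local` through the frame pointer
    }

    Nested a = Nested(); // constructed in scope: frame pointer is set
    assert(a.get() == 42);

    Nested b = Nested.init; // frame pointer is null
    // b.get(); // would dereference the null frame pointer
}
```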

I came up with just now: The use of such unsafe T.init should be
allowed only inside @system/@trusted functions.
I think it is acceptable limitation.

Thoughts?

Kenji Hara


Re: Will the D GC be awesome?

2012-10-04 Thread Simen Kjaeraas

On 2012-10-04, 20:56, Tommi wrote:


On Thursday, 4 October 2012 at 18:12:09 UTC, Timon Gehr wrote:


void main(){
fun(getTuple().expand);
}


Great, that works for me. It would probably be confusing if tuples  
expanded automatically; it would be non-obvious whether you'd be  
passing one argument or multiple.


There's another reason:

void foo(T)(T, int, float);
void foo(T)(string, T);

Tuple!(int, float) getTuple();

foo("Hello", getTuple()); // What to call?

And:

void baz(T...)(T t);

baz(getTuple()) // Expand or not?


And while this is a constructed example, there is also the matter of
exponential possibilities for the overload system (Oh, so that didn't
work, maybe if I expand *this* tuple? No. *That* tuple? ...)

--
Simen


Re: Will the D GC be awesome?

2012-10-04 Thread renoX
On Wednesday, 3 October 2012 at 21:31:52 UTC, 
DypthroposTheImposter wrote:
  D is pretty cool, perhaps someday I can use it instead of C++ 
and have cool shit like fast build times, modules, no more 
bloody headers, sane templates, CTFE, UFCS etc


 But can the D GC ever be made:

1. precise
2. able to scale to large-ish data set(2gig+)
3. No long stalls(anything over a couple millisecond(<3))



This figure is quite meaningless: if I split a collection phase 
into several 2 ms portions, it would be conformant yet the user would 
still see long stalls. You need to define both a period and the 
maximum amount of time usable by the GC within that period.




Q. Curious, would it be compacting?



Add VM-aware GC (http://lambda-the-ultimate.org/node/2391) and 
you'll also have my ideal but nonexistent GC.


That said, I know two free languages with a "real time" GC: 
SuperCollider and Nimrod.




 If not then I'm stuck not using it much--

Which leaves me with structs, and lets just say D struct are 
not impressive--



* Oh and on a totally unrelated note, D needs Multiple return 
values. Lua has it, it's awesome.



Agreed here.

Regards,
renoX


D doesn't want to be left out does it?

* OpCmp returning an int is fugly I r sad

* why is haskell so much shorter syntax, can D get that nice 
syntax plss


STAB!




Re: std.concurrency and fibers

2012-10-04 Thread Dmitry Olshansky

On 04-Oct-12 15:32, Alex Rønne Petersen wrote:

Hi,

We currently have std.concurrency as a message-passing mechanism. We
encourage people to use it instead of OS threads, which is great.
However, what is *not* great is that spawned tasks correspond 1:1 to OS
threads. This is not even remotely scalable for Erlang-style
concurrency. There's a fairly simple way to fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that
the interface is just not flexible enough. The current interface is
completely and entirely tied to the notion of threads (contrary to what
its module description says).

Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply
speak of 'tasks'. This trivially allows us to use threads, fibers,
whatever to back the module. I personally think this is the best way to
build a message-passing abstraction because it gives enough transparency
to *actually* distribute tasks across machines without things breaking.


Cool, but currently it's a leaky abstraction. For instance, if tasks are 
implemented with fibers, thread-local static variables will be shared 
among all tasks running on the same thread.
Essentially I think fibers need TLS (or rather FLS) synced with the 
language's 'static' keyword. Otherwise the whole TLS-by-default 
mechanism is a useless chunk of machinery.



B) We make the module capable of backing tasks with both  threads and
fibers, and expose an interface that allows the user to choose what kind
of task is spawned. I'm *not* convinced this is a good approach because
it's extremely error-prone (imagine doing a thread-based receive inside
a fiber-based task!).

Bleh.


C) We just swap out threads with fibers and document that the module
uses fibers. See my comments in A for why I'm not sure this is a good idea.

Seems a lot like A, but with a task defined to be a fiber. I'd prefer this. 
However, it then needs a user-defined policy for distributing fibers 
across real threads (pools). Btw, A is full of this too.



All of these are going to break code in one way or another - that's
unavoidable. But we really need to make std.concurrency grow up; other
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form)
for years, and if we want D to be seriously usable for large-scale
concurrency, we need to have them too.

Thoughts? Other ideas?


+1

--
Dmitry Olshansky


Re: std.concurrency and fibers

2012-10-04 Thread Dmitry Olshansky

On 04-Oct-12 16:48, Timon Gehr wrote:

On 10/04/2012 02:22 PM, Alex Rønne Petersen wrote:

On 04-10-2012 14:11, Timon Gehr wrote:

[snip]


I think that no matter what we do, we have to simply say "don't do that"
to thread-local state (it would break in distributed scenarios too, for
instance).

Instead, I think we should do what the Rust folks did: Use *task*-local
state and leave it up to std.concurrency to figure out how to deal with
it. It won't be as 'seamless' as TLS variables in D of course, but I
think it's good enough in practice.



If it is not seamless, we have failed. IMO the runtime should expose an
interface for allocating TLS, switching between TLS instances and
destroying TLS.


Agreed.


What about the stack? Allocating a fixed-size stack per task is costly
and Walter opposes dynamic stack growth.


Allocating a fixed-size stack is costly only in terms of virtual address 
space, and running out of address space is a concern on 32 bits only. 
On 64 bits you may as well allocate 1 GB per task; it will only get 
backed by physical memory if it's actually used.
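
For reference, core.thread.Fiber already lets the caller pick the stack size at construction (the default at the time was PAGE_SIZE * 4, as noted earlier in the thread; the 64 KB below is purely illustrative):

```d
import core.thread : Fiber;

void worker()
{
    // ... task body; may suspend itself with Fiber.yield() ...
}

void main()
{
    // The second argument reserves a per-fiber stack; on 64-bit systems
    // the cost is mostly virtual address space until pages are touched.
    auto f = new Fiber(&worker, 64 * 1024);
    f.call(); // runs worker until it yields or returns
}
```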


--
Dmitry Olshansky


Re: std.concurrency and fibers

2012-10-04 Thread Jonathan M Davis
On Thursday, October 04, 2012 13:32:01 Alex Rønne Petersen wrote:
> Thoughts? Other ideas?

std.concurrency is supposed to be designed such that it can be used for more 
than just threads (e.g. sending messages across the network), so if it needs 
to be adjusted to accommodate that, then we should do so, but we need to be 
careful to do it in a way that minimizes code breakage as much as reasonably 
possible.

- Jonathan M Davis


Re: "instanceOf" trait for conditional implementations

2012-10-04 Thread monarch_dodra

On Thursday, 4 October 2012 at 17:57:51 UTC, Simen Kjaeraas wrote:

[SNIP]

@property T instanceOf( T )( );

[SNIP]


Awesome! It is much more robust too! I still don't understand how
*my* instanceOf!(immutable(S)) works :/

I'd just change it to:
@property ref T instanceOf( T )( );
So that instanceOf acts as an LValue.

Having a link error is great too. A compile error would be best, but 
that isn't actually achievable, so a link error is still pretty good.
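
For readers following along, the trick under discussion can be sketched like this. This is my reconstruction of the idea, not Simen's exact code, and `isAssignableFromInt` is a made-up example constraint:

```d
// Declared but never defined: usable only in unevaluated contexts such as
// typeof(...) and __traits(compiles, ...). Actually calling it at run time
// is what produces the link error mentioned above.
@property ref T instanceOf(T)();

// Hypothetical use in a constraint: does assigning an int to a T compile?
template isAssignableFromInt(T)
{
    enum isAssignableFromInt = __traits(compiles, instanceOf!T = 1);
}

static assert(isAssignableFromInt!int);
static assert(!isAssignableFromInt!string);
```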


Feature request: extending comma operator's functionality

2012-10-04 Thread Tommi
Could you change it so that expressions that are separated by 
commas inside an if clause have visibility of the variable 
defined in the first expression? Easier to show than to 
explain:


int getInt()
{
    return 11;
}

void main()
{
    if (int n = getInt(), n > 10) // undefined identifier n
    {
        //...
    }

    if (int n = getInt(), ++n, n > 10) // undefined identifier n
    {
        //...
    }

    if (int n = getInt(), getInt() > 10) // OK
    {
        //...
    }
}

That would make it possible to define variables in the smallest 
possible scope, and not pollute the namespace of the enclosing 
scope.


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread monarch_dodra

On Thursday, 4 October 2012 at 21:17:51 UTC, Tommi wrote:
Could you change it so that expressions, that are separated by 
commas and inside an if-clause, would have visibility to the 
variable defined in the first expression? Easier to show than 
to explain:

[SNIP]


A language change sounds excessive for something that simple 
blocks could fix:


int getInt()
{
    return 11;
}

void main()
{
    {
        int n = getInt();
        if (n > 10) // OK
        {
            //...
        }
    }
    {
        int n = getInt(); ++n;
        if (n > 10) // OK
        {
            //...
        }
    }
    {
        int n = getInt();
        if (getInt() > 10) // OK
        {
            //...
        }
    }
}

Been doing this in C++ for a while actually.


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Jonathan M Davis
On Thursday, October 04, 2012 23:11:58 Tommi wrote:
> Could you change it so that expressions, that are separated by
> commas and inside an if-clause, would have visibility to the
> variable defined in the first expression? Easier to show than to
> explain:
> 
> int getInt()
> {
> return 11;
> }
> 
> void main()
> {
> if (int n = getInt(), n > 10) // undefined identifier n
> {
> //...
> }
> 
> if (int n = getInt(), ++n, n > 10) // undefined identifier n
> {
> //...
> }
> 
> if (int n = getInt(), getInt() > 10) // OK
> {
> //...
> }
> }
> 
> That would make it possible to define variables in the smallest
> possible scope, and not pollute the namespace of the enclosing
> scope.

If you want to restrict the scope of a variable, you can simply use another 
set of braces to create a new scope. It might be more verbose than desirable, 
but it works just fine. e.g.

{
 int n = getInt();
 if(n > 10)
 {
 ...
 }
}

As it stands, there's a good chance that the comma operator is actually going 
to be _removed_ from the language (aside from specific use cases such as inside 
for loops). So, I don't think that there's much chance of it being expanded at 
all.

- Jonathan M Davis


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Tommi
On Thursday, 4 October 2012 at 21:32:34 UTC, Jonathan M Davis 
wrote:


If you want to restrict the scope of a variable, you can simply 
use another set of braces to create a new scope. It might be 
more verbose than desirable, but it works just fine. e.g.


{
 int n = getInt();
 if(n > 10)
 {
 ...
 }
}


But if there are else-if clauses, then you end up polluting your 
namespace, and notice how the syntax of your workaround 
deteriorates exponentially:


The extended if-clause syntax:
--

if (byte n = fun1(), n > 10)
{
    //...
}
else if (int n = fun2(), n > 100)
{
    //...
}
else if (ulong n = fun3(), n > 1000)
{
    //...
}


The workaround syntax:
--

{
    byte n1 = fun1();
    if (n1 > 10)
    {
        //...
    }
    else
    {
        int n2 = fun2();
        if (n2 > 100)
        {
            //...
        }
        else
        {
            ulong n3 = fun3();
            if (n3 > 1000)
            {
                //...
            }
        }
    }
}


As it stands, there's a good chance that the comma operator is 
actually going to be _removed_ from the language (aside from 
specific use cases such as inside for loops). So, I don't think

that there's much chance of it being expanded at all.


I don't see a problem there. I mean, if the comma operator is 
kept in specific cases like inside for loops, why not keep it 
(and expand its use) in this specific case of an if clause.


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread David Nadlinger

On Thursday, 4 October 2012 at 21:17:51 UTC, Tommi wrote:
Could you change it so that expressions, that are separated by 
commas and inside an if-clause, would have visibility to the 
variable defined in the first expression?


Yes, a language designer could make that choice. But no, it 
certainly won't be considered for D unless it can be shown that 
the change solves a real problem with the current syntax. And how 
often have you really encountered big syntactic headaches because 
of not having something like this available?


David


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Tommi
On Thursday, 4 October 2012 at 22:28:24 UTC, David Nadlinger 
wrote:


Yes, a language designer could make that choice. But no, it 
certainly won't be considered for D unless it can be shown that 
the change solves a real problem with the current syntax.


But I'm not suggesting any kind of change in syntax. This syntax 
in D currently works (as long as expr2 is convertible to bool):


if (Type var = expr1, expr2)
{
    //...
}

What I'm suggesting is, I think, quite reasonable: make it so 
that 'var' is visible to 'expr2'.


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Adam D. Ruppe
On Thursday, 4 October 2012 at 22:28:24 UTC, David Nadlinger 
wrote:
how often have you really encountered big syntactic headaches 
because of not having something like this available?


I do somewhat regularly. The if(auto x = y()) { use x } is pretty 
convenient but being limited only to the bool check is kinda weak.
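
Concretely, the pattern being referred to looks like this (`find` is a made-up stub for illustration):

```d
int* find(int needle) { return null; } // illustrative stub

void demo()
{
    // Works today: x is scoped to the if, but the condition can only be
    // the implicit truth test of x itself.
    if (auto x = find(42))
    {
        *x += 1;
    }
    // Not expressible today: a declaration plus a separate test, e.g.
    // if (auto x = find(42), *x > 10) { ... }
}
```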


Re: openMP

2012-10-04 Thread dsimcha
On Thursday, 4 October 2012 at 16:07:35 UTC, David Nadlinger 
wrote:
For more advanced/application-level use cases, just look at any 
use of ContinueWith in C#. std::future::then() is also proposed 
for C++, see e.g. 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3327.pdf.


I didn't really read the N3327 paper in detail, but from a 
brief look it seems to be a nice summary of what you might want 
to do with tasks/asynchronous results – I think you could 
find it an interesting read.


David


Thanks for posting this.  It was an incredibly useful read for 
me!  Given that the code I write is generally compute-intensive, 
not I/O intensive, I'd never given much thought to the value of 
futures in I/O intensive code before this discussion.  I stand by 
what I said before:  Someone (not me because I'm not intimately 
familiar with the use cases; you might be qualified) should write 
a std.future module for Phobos that properly models the 
**abstraction** of a future.  It's only tangentially relevant to 
std.parallelism's charter, which includes both a special case of 
futures that's useful to SMP parallelism and other parallel 
computing constructs.  Then, we should define an adapter that 
allows std.parallelism Tasks to be modeled more abstractly as 
futures when necessary, once we've nailed down what the future 
**abstraction** should look like.
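
To make the distinction concrete, a std.future-style abstraction might expose something like the following interface. This is purely a sketch: no such module exists, and all names here (`Future`, `asFuture`) are hypothetical.

```d
// A future models "a T that will be available later", independent of
// whether it is produced by an SMP task, an I/O completion, or the network.
interface Future(T)
{
    bool isDone();                        // has the value been produced?
    T await();                            // block until the value is ready
    void then(void delegate(T) cont);     // continuation, à la ContinueWith
}

// The adapter mentioned above would then wrap a std.parallelism Task:
//   Future!int f = asFuture(task(&compute));
//   f.then((int r) { /* use r */ });
```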


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Tommi

On Thursday, 4 October 2012 at 22:36:47 UTC, Tommi wrote:
But I'm not suggesting any kind of change in syntax. This 
syntax in D currently works (as long as expr2 is convertible to 
bool):


if (Type var = expr1, expr2)
{
    //...
}

What I'm suggesting is, I think, quite reasonable: make it so 
that 'var' is visible to 'expr2'.


Uh... I was actually wrong. What that syntax really does is this:

if (Type var = cast(Type)expr2)
{
    //...
}

Didn't see that coming. But I think it might be a bug, because 
the assignment expression should take precedence over the 
sequencing expression, that is, expressions separated by commas.


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Tommi

On Thursday, 4 October 2012 at 23:16:53 UTC, Tommi wrote:


Didn't see that coming. But I think it might be a bug, because 
assignment expression has precedence over sequencing 
expression, that is, expressions separated by commas.


Although that's not an assignment expression but a variable 
definition. In that case, I think the following should currently 
be considered a bug:


if (int val = 123, true)
{
    //...
}

Because the following is a bug:

int val = 123, true; // Error: no identifier for declarator int
                     // Error: semicolon expected, not 'true'


Re: std.concurrency and fibers

2012-10-04 Thread Sean Kelly
On Oct 4, 2012, at 4:32 AM, Alex Rønne Petersen  wrote:

> Hi,
> 
> We currently have std.concurrency as a message-passing mechanism. We 
> encourage people to use it instead of OS threads, which is great. However, 
> what is *not* great is that spawned tasks correspond 1:1 to OS threads. This 
> is not even remotely scalable for Erlang-style concurrency. There's a fairly 
> simple way to fix that: Fibers.
> 
> The only problem with adding fiber support to std.concurrency is that the 
> interface is just not flexible enough. The current interface is completely 
> and entirely tied to the notion of threads (contrary to what its module 
> description says).

How is the interface tied to the notion of threads?  I had hoped to design it 
with the underlying concurrency mechanism completely abstracted.  The most 
significant reason that fibers aren't used behind the scenes today is because 
the default storage class of static data is thread-local, and this would really 
have to be made fiber-local.  I'm reasonably certain this could be done and 
have considered going so far as to make the main thread in D a fiber, but the 
implementation is definitely non-trivial and will probably be slower than the 
built-in TLS mechanism as well.  So consider the current std.concurrency 
implementation to be a prototype.  I'd also like to add interprocess messaging, 
but that will be another big task.

Re: std.concurrency and fibers

2012-10-04 Thread Sean Kelly
On Oct 4, 2012, at 5:48 AM, Timon Gehr  wrote:
> 
> What about the stack? Allocating a fixed-size stack per task is costly
> and Walter opposes dynamic stack growth.

This is another reason I've been delaying using fibers.  The correct approach 
is probably to go the distance by reserving a large block, committing only a 
portion, and commit the rest dynamically as needed.  The current fiber 
implementation does have a guard page in some cases, but doesn't go so far as 
to reserve/commit portions of a larger stack space.

Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Jonathan M Davis
On Thursday, October 04, 2012 23:56:15 Tommi wrote:
> > As it stands, there's a good chance that the comma operator is
> > actually going to be _removed_ from the language (aside from
> > specific use cases such as inside for loops). So, I don't think
> > that there's much chance of it being expanded at all.
> 
> I don't see a problem there. I mean, if the comma operator is
> kept in specific cases like inside for loop, why not keep (and
> expand it's use) it in this specific case of if-clause.

You will have a hard sell with _anything_ involving commas other than tuples. 
Most people consider anything like the comma operator to be evil (or at least 
very undesirable).

- Jonathan M Davis


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Tommi
Maybe we forget about commas then, and extend if-clauses so that 
you can properly define variables at the beginning of it. 
Separated by semicolons.


string name;

if (string street = nextStreet();
    int number = nextNumber();
    auto person = new Person(name);

    person.livesAt(number, street))
{
    // use street, number, and person
}


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Jonathan M Davis
On Friday, October 05, 2012 00:36:22 Adam D. Ruppe wrote:
> On Thursday, 4 October 2012 at 22:28:24 UTC, David Nadlinger
> 
> wrote:
> > how often have you really encountered big syntactic headaches
> > because of not having something like this available?
> 
> I do somewhat regularly. The if(auto x = y()) { use x } is pretty
> convenient but being limited only to the bool check is kinda weak.

Yeah. It would definitely be useful to be able to do with an if what you 
can do with a for loop, but in that case, I'd probably suggest just making 
it look the way it does with for.

if(auto x = y(); x != 42)
{}

That would be really cool, but I expect that it would be hard to talk Walter 
into it.

- Jonathan M Davis


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread bearophile

Tommi:

Maybe we forget about commas then, and extend if-clauses so 
that you can properly define variables at the beginning of it. 
Separated by semicolons.


Regarding definition of variables in D language constructs, there 
is one situation where sometimes I find D not handy. This code 
can't work:


do {
  const x = ...;
} while (predicate(x));


You need to use:

T x;
do {
  x = ...;
} while (predicate(x));
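
One workaround that keeps x const (unlike the mutable-x rewrite) is to move the test into the loop body. A sketch, with `compute` and `predicate` as stand-ins:

```d
while (true)
{
    const x = compute();  // x stays const and scoped to one iteration
    // ... rest of the loop body ...
    if (!predicate(x))
        break;            // same semantics as do { ... } while (predicate(x))
}
```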

Bye,
bearophile


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Jonathan M Davis
On Friday, October 05, 2012 02:08:14 bearophile wrote:
> Tommi:
> > Maybe we forget about commas then, and extend if-clauses so
> > that you can properly define variables at the beginning of it.
> > Separated by semicolons.
> 
> Regarding definition of variables in D language constructs, there
> is one situation where sometimes I find D not handy. This code
> can't work:
> 
> do {
> const x = ...;
> } while (predicate(x));
> 
> 
> You need to use:
> 
> T x;
> do {
> x = ...;
> } while (predicate(x));

Yeah. That comes from C/C++ (and is the same in Java and C#, I believe). I 
don't know why it works that way. It's definitely annoying.

Of course, changing it at this point would change the semantics in a 
potentially code-breaking manner in that if the condition relies on any 
variables local to the loop having been destroyed, then its behavior will 
change. That's probably an insanely uncommon situation though - enough so that 
I'd be all for changing the semantics to have the scope exited _after_ the 
test is done (assuming that there's not a solid technical reason to keep it 
as-is). But I have no idea how possible it is to talk Walter into that sort of 
change.

- Jonathan M Davis


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread timotheecour

Is the plan to deprecate comma operator for chaining expressions?
I would love to see more syntactic sugar to support tuples, and 
comma operator would be the best fit for that purpose.


eg:

import std.typecons;
auto fun(){
    return tuple(1,"abc");
    //1) ideally, we should be able to write:
    //return (1,"abc");
    //with same semantics (and no need to import std.typecons)
}

//at the call site, currently:
auto t = fun();
auto a = t[0];
auto b = t[1];

//2) ideally, we should be able to write:
auto (a,b,c) = fun();

//3) or even:
(a,b,c) = fun();


Will it be difficult to implement 2)? (by far the most important 
of 1,2,3)

Are 1) and 3) a good idea?
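
For comparison, here is what already works today at the call site with std.typecons: Tuple's .expand property splats the elements into an argument list, though there is still no destructuring declaration.

```d
import std.stdio : writeln;
import std.typecons : tuple;

auto fun() { return tuple(1, "abc"); }

void take(int a, string b) { writeln(a, " ", b); }

void main()
{
    auto t = fun();
    take(t.expand);   // expands to take(t[0], t[1])
    auto a = t[0];    // element access, as above
    auto b = t[1];
}
```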


Re: Feature request: extending comma operator's functionality

2012-10-04 Thread Jonathan M Davis
On Friday, October 05, 2012 02:33:45 timotheecour wrote:
> Is the plan to deprecate comma operator for chaining expressions?

That's all the comma operator does. If it's not chaining expressions, it's not 
the comma operator (e.g. variable declarations do _not_ use the comma 
operator even though they can use commas).

There is a proposal to remove the comma operator, altering for loops so that 
they explicitly support using commas like they currently do but otherwise 
completely removing the comma operator:

http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP19

> I would love to see more syntactic sugar to support tuples, and
> comma operator would be the best fit for that purpose.

That's one of the main reasons for the proposal, but nothing has been decided 
yet.

The discussion is here:

http://forum.dlang.org/thread/k3ns2a$1ndc$1...@digitalmars.com

- Jonathan M Davis


Re: Will the D GC be awesome?

2012-10-04 Thread Walter Bright

On 10/3/2012 11:50 PM, Jacob Carlborg wrote:

On 2012-10-04 01:33, Alex Rønne Petersen wrote:

> Use tuples. Multiple return values (as far as ABI goes) are impractical
> because every major compiler back end (GCC, LLVM, ...) would have to be
> adjusted for every architecture.

Why can't it just be syntax sugar for returning a struct?



That's really the only credible way. A tuple should be an anonymous 
struct with the fields being the elements of the tuple.


The main issue for me for having perfect tuples is that the layout of 
fields in a struct is different from the layout of parameters being 
passed on the stack to a function. Ideally,


   struct S { int a; int b; }
   void foo(int p, int q);
   S s;
   foo(s);

should work (setting aside for the moment that they are different 
types). Unfortunately, the variety of function calling ABIs makes this 
impractical.


So the upshot is that in a language like D, which must conform to 
external ABIs, tuples will always have some rough edges.


Re: T.init and @disable this

2012-10-04 Thread deadalnix

On 04/10/2012 10:18, monarch_dodra wrote:

I'm trying to find out the exact semantics of

@disable this();

It is not well documented, and the fact that it is (supposedly) buggy
makes it really confusing.

My understanding is that it "merely" makes it illegal to default
initialization your type: You, the developer, have to specify the
initial value.

//
T t; //initializer required for type
//
Which means you, the developer, must explicitly choose an initial value.

However, DOES or DOES NOT this remain legal?
//
T t = T.init; //Fine: You chose the initializer T.init
//

Keep in mind it is not possible to make "T.init" itself disappear,
because nothing can be constructed if T.init is not first memcopied onto
the object, before calling any constructor proper.

I think this should be legal, because you, the developer, are asking for
it, in just the same way one can write "T t = void".

Making it illegal would pretty much make T unmovable, un-emplaceable, and
un-initializable onto uninitialized memory, and would probably break
more than one function/trait that uses "T.init".

Feedback?


Making T.init unsafe in this case should be enough.
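
A small example of the semantics being debated (behavior as described in the post; whether the T.init line stays legal is exactly the open question):

```d
struct T
{
    int x;
    @disable this();          // no default construction
    this(int v) { x = v; }
}

void demo()
{
    // T a;                   // error: initializer required for type T
    T b = T(1);               // fine: the developer chose an initial value
    T c = T.init;             // the contested case: explicit use of T.init
    T d = void;               // uninitialized, explicitly requested
}
```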


Re: std.concurrency and fibers

2012-10-04 Thread deadalnix

On 04/10/2012 13:32, Alex Rønne Petersen wrote:

Hi,

We currently have std.concurrency as a message-passing mechanism. We
encourage people to use it instead of OS threads, which is great.
However, what is *not* great is that spawned tasks correspond 1:1 to OS
threads. This is not even remotely scalable for Erlang-style
concurrency. There's a fairly simple way to fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that
the interface is just not flexible enough. The current interface is
completely and entirely tied to the notion of threads (contrary to what
its module description says).

Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply
speak of 'tasks'. This trivially allows us to use threads, fibers,
whatever to back the module. I personally think this is the best way to
build a message-passing abstraction because it gives enough transparency
to *actually* distribute tasks across machines without things breaking.
B) We make the module capable of backing tasks with both threads and
fibers, and expose an interface that allows the user to choose what kind
of task is spawned. I'm *not* convinced this is a good approach because
it's extremely error-prone (imagine doing a thread-based receive inside
a fiber-based task!).
C) We just swap out threads with fibers and document that the module
uses fibers. See my comments in A for why I'm not sure this is a good idea.

All of these are going to break code in one way or another - that's
unavoidable. But we really need to make std.concurrency grow up; other
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form)
for years, and if we want D to be seriously usable for large-scale
concurrency, we need to have them too.

Thoughts? Other ideas?



Something I've wondered about for a while: why not run everything in fibers?


Re: Will the D GC be awesome?

2012-10-04 Thread timotheecour

Ideally,
   struct S { int a; int b; }
   void foo(int p, int q);
   S s;
   foo(s);

should work (setting aside for the moment that they are 
different types). Unfortunately, the variety of function 
calling ABIs makes this impractical.
So in a language like D that must conform to external ABIs, 
tuples will always have some rough edges.


Why not simply introduce an "expand" property for structs?

foo(s.expand) //equivalent to foo(s.a,s.b)

That would be the exact analogue of expand for tuples and would 
maintain a sane type system.
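
Worth noting: for the argument-passing case, D's built-in .tupleof property already behaves much like the proposed expand.

```d
struct S { int a; int b; }

void foo(int p, int q) {}

void demo()
{
    S s = S(1, 2);
    foo(s.tupleof);   // expands the fields: equivalent to foo(s.a, s.b)
}
```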


Re: References in D

2012-10-04 Thread Alex Burton
On Thursday, 4 October 2012 at 17:55:45 UTC, Jonathan M Davis 
wrote:
On Thursday, October 04, 2012 13:14:00 Alex Burton, @gmail.com 
wrote:

On Saturday, 15 September 2012 at 21:30:03 UTC, Walter Bright

wrote:
> On 9/15/2012 5:39 AM, Henning Pohl wrote:
>> The way D is dealing with classes reminds me of pointers
>> because you can null
>> them. C++'s references cannot (of course you can do some 
>> nasty

>> casting).
> 
> Doing null references in C++ is simple:
> 
> int *p = NULL;

> int& r = *p;
> 
> r = 3; // crash


IMHO int * p = NULL is a violation of the type system and 
should

not compile.
NULL can in no way be considered a pointer to an int.


Um. What? It's perfectly legal for pointers to be null. The 
fact that *p
doesn't blow up is a bit annoying, but it makes sense from an 
implementation
standpoint and doesn't really cost you anything other than a 
bit of locality

between the bug and the crash.


I realise that this is currently the case; I am making an argument 
as to why it should be changed (at least for class references in 
D).



In the same way this should fail:

class A
{
}
A a;

And why would this fail? It's also perfectly legal.


I realise that this is currently legal; I am proposing that it 
shouldn't be.


If I call a method on reference a (perhaps after it has been 
passed around to a different part of the code) I get an access 
violation / segfault / whatever - undefined behaviour.

On Windows you might get stack unwinding, but otherwise not.
Failing with a memory violation is a bad thing - much worse than 
failing with an exception.
If I press a button in an app and it has a memory violation I can 
lose all my work, and potentially leave parts of a system in 
undefined states, locks on things etc.
If I get an exception, and the code is exception safe, the GUI 
can indicate that the button doesn't work right now - maybe 
saying why - and the user can continue without losing all their 
stuff (hopefully not pressing the same button and finding the 
same bug).


Alex



Re: References in D

2012-10-04 Thread Alex Burton
On Wednesday, 3 October 2012 at 17:37:14 UTC, Franciszek Czekała 
wrote:
On Wednesday, 3 October 2012 at 16:33:15 UTC, Simen Kjaeraas 
wrote:

On 2012-10-03, 18:12,  wrote:



They make sure you never pass null to a function that doesn't 
expect null - I'd say that's a nice advantage.




However with D, dereferencing an uninitialized reference is 
well defined - null is not random data: you get a well-defined 
exception and you know you are dealing with unitialized data.


The above statement is incorrect AFAIK:

class A
{
    int x;
    void foo()
    {
        x = 10;
    }
}

void main()
{
    A a;
    a.foo();
}

Results in :
Segmentation fault (core dumped)


Re: References in D

2012-10-04 Thread Alex Burton
On Saturday, 15 September 2012 at 17:51:39 UTC, Jonathan M Davis 
wrote:
On Saturday, September 15, 2012 19:35:44 Alex Rønne Petersen 
wrote:
Out of curiosity: Why? How often does your code actually 
accept null as

a valid state of a class reference?


I have no idea. I know that it's a non-negligible amount of the 
time, though
it's certainly true that they normally have values. But null is 
how you
indicate that a reference has no value. The same goes for 
arrays and pointers.
Sometimes it's useful to have null and sometimes it's useful to 
know that a
value can't be null. I confess though that I find it very 
surprising how much
some people push for non-nullable references, since I've never 
really found
null to be a problem. Sure, once in a while, you get a null 
pointer/reference
and something blows up, but that's very rare in my experience, 
so I can't help
but think that people who hit issues with null pointers on a 
regular basis are

doing something wrong.

- Jonathan M Davis


In my experience this sort of attitude is not workable in 
projects with more than one developer.
It all works OK if everyone knows the 'rules' about when to check 
for null and when not to.
Telling team members that find bugs caused by your null 
references that they are doing it wrong and next time should 
check for null is a poor substitute for having the language 
define the rules.


A defensive attitude of checking for null everywhere like I have 
seen in many C++ projects makes the code ugly.


Re: std.concurrency and fibers

2012-10-04 Thread Alex Rønne Petersen

On 05-10-2012 04:14, deadalnix wrote:

On 04/10/2012 13:32, Alex Rønne Petersen wrote:

Hi,

We currently have std.concurrency as a message-passing mechanism. We
encourage people to use it instead of OS threads, which is great.
However, what is *not* great is that spawned tasks correspond 1:1 to OS
threads. This is not even remotely scalable for Erlang-style
concurrency. There's a fairly simple way to fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that
the interface is just not flexible enough. The current interface is
completely and entirely tied to the notion of threads (contrary to what
its module description says).

Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply
speak of 'tasks'. This trivially allows us to use threads, fibers,
whatever to back the module. I personally think this is the best way to
build a message-passing abstraction because it gives enough transparency
to *actually* distribute tasks across machines without things breaking.
B) We make the module capable of backing tasks with both threads and
fibers, and expose an interface that allows the user to choose what kind
of task is spawned. I'm *not* convinced this is a good approach because
it's extremely error-prone (imagine doing a thread-based receive inside
a fiber-based task!).
C) We just swap out threads with fibers and document that the module
uses fibers. See my comments in A for why I'm not sure this is a good
idea.

All of these are going to break code in one way or another - that's
unavoidable. But we really need to make std.concurrency grow up; other
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form)
for years, and if we want D to be seriously usable for large-scale
concurrency, we need to have them too.

Thoughts? Other ideas?



Something I've wondered about for a while: why not run everything in fibers?


Because then we definitely need dynamic stack growth wired into both the 
compiler and the runtime.


Not impossible, but there's a *lot* of effort required (and convincing, 
in Walter's case).


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: std.concurrency and fibers

2012-10-04 Thread Alex Rønne Petersen

On 05-10-2012 01:30, Sean Kelly wrote:

On Oct 4, 2012, at 4:32 AM, Alex Rønne Petersen  wrote:


Hi,

We currently have std.concurrency as a message-passing mechanism. We encourage 
people to use it instead of OS threads, which is great. However, what is *not* 
great is that spawned tasks correspond 1:1 to OS threads. This is not even 
remotely scalable for Erlang-style concurrency. There's a fairly simple way to 
fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that the 
interface is just not flexible enough. The current interface is completely and 
entirely tied to the notion of threads (contrary to what its module description 
says).


How is the interface tied to the notion of threads?  I had hoped to design it 
with the underlying concurrency mechanism completely abstracted.  The most 
significant reason that fibers aren't used behind the scenes today is because 
the default storage class of static data is thread-local, and this would really 
have to be made fiber-local.  I'm reasonably certain this could be done and 
have considered going so far as to make the main thread in D a fiber, but the 
implementation is definitely non-trivial and will probably be slower than the 
built-in TLS mechanism as well.  So consider the current std.concurrency 
implementation to be a prototype.  I'd also like to add interprocess messaging, 
but that will be another big task.



Mostly in that everything operates on Tids (as opposed to some opaque 
Cid type) and, as you mentioned, TLS. The problem is basically that 
people have gotten used to std.concurrency always using OS threads due 
to subtle things like that from day one.


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: std.concurrency and fibers

2012-10-04 Thread Alex Rønne Petersen

On 05-10-2012 01:34, Sean Kelly wrote:

On Oct 4, 2012, at 5:48 AM, Timon Gehr  wrote:


What about the stack? Allocating a fixed-size stack per task is costly
and Walter opposes dynamic stack growth.


This is another reason I've been delaying using fibers.  The correct approach 
is probably to go the distance by reserving a large block, committing only a 
portion, and commit the rest dynamically as needed.  The current fiber 
implementation does have a guard page in some cases, but doesn't go so far as 
to reserve/commit portions of a larger stack space.



I think we'd need compiler support to be able to do it in a reasonable 
way at all. Doing it via OS virtual memory hacks seems like a bad idea 
to me.


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: std.concurrency and fibers

2012-10-04 Thread Alex Rønne Petersen

On 04-10-2012 22:04, Dmitry Olshansky wrote:

On 04-Oct-12 15:32, Alex Rønne Petersen wrote:

Hi,

We currently have std.concurrency as a message-passing mechanism. We
encourage people to use it instead of OS threads, which is great.
However, what is *not* great is that spawned tasks correspond 1:1 to OS
threads. This is not even remotely scalable for Erlang-style
concurrency. There's a fairly simple way to fix that: Fibers.

The only problem with adding fiber support to std.concurrency is that
the interface is just not flexible enough. The current interface is
completely and entirely tied to the notion of threads (contrary to what
its module description says).

Now, I see a number of ways we can fix this:

A) We completely get rid of the notion of threads and instead simply
speak of 'tasks'. This trivially allows us to use threads, fibers,
whatever to back the module. I personally think this is the best way to
build a message-passing abstraction because it gives enough transparency
to *actually* distribute tasks across machines without things breaking.


Cool, but currently it's a leaky abstraction. For instance, if tasks are
implemented with fibers, all fibers on a thread will share that thread's
static variables. Essentially I think fibers need TLS (or rather FLS)
synced with the language's 'static' keyword. Otherwise the whole
TLS-by-default is a useless chunk of machinery.


Yeah, it's a problem all right. But we'll need compiler support for this 
stuff in any case.


Can't help but wonder if it's really worth it. It seems to me like a 
simple AA-like API based on the typeid of data would be better -- as in, 
much more generic -- than trying to teach the compiler and runtime how 
to deal with this stuff.


Think something like this:

struct Data
{
    int foo;
    float bar;
}

void myTask()
{
    auto data = Data(42, 42.42f);

    TaskStore.save(data);

    // work ...

    foo();

    // work ...
}

void foo()
{
    auto data = TaskStore.load!Data();

    // work ...
}

I admit, not as seamless as static variables, but a hell of a lot less 
magical.
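For what it's worth, a minimal sketch of such a typeid-keyed store might look like the following. The name TaskStore comes from the sketch above; everything else is purely illustrative, and for brevity it keys one store per thread (via plain TLS statics) rather than per task/fiber as a real implementation would:

```d
// Illustrative sketch only: a store keyed by the data's TypeInfo.
// A real implementation would key entries per task/fiber; this one
// keeps one store per thread ('static' variables are TLS in D).
struct TaskStore
{
    private static void*[TypeInfo] store;

    static void save(T)(T data) if (is(T == struct))
    {
        auto p = new T;    // copy onto the GC heap
        *p = data;
        store[typeid(T)] = cast(void*) p;
    }

    static T load(T)() if (is(T == struct))
    {
        return *cast(T*) store[typeid(T)];
    }
}
```

The typeid acts as the key, so any number of distinct payload types can coexist in one task's store without the compiler or runtime knowing anything about them.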





B) We make the module capable of backing tasks with both  threads and
fibers, and expose an interface that allows the user to choose what kind
of task is spawned. I'm *not* convinced this is a good approach because
it's extremely error-prone (imagine doing a thread-based receive inside
a fiber-based task!).

Bleh.


C) We just swap out threads with fibers and document that the module
uses fibers. See my comments in A for why I'm not sure this is a good
idea.

Seems a lot like A, but with a task defined to be a fiber. I'd prefer this.
However, it then needs a user-defined policy for distributing fibers
across real threads (pools). Btw, A needs this too.


By choosing C we effectively give up any hope of distributed tasks, 
especially if we have a scheduler API. Is that really a good idea in 
this day and age?





All of these are going to break code in one way or another - that's
unavoidable. But we really need to make std.concurrency grow up; other
languages (Erlang, Rust, Go, ...) have had micro-threads (in some form)
for years, and if we want D to be seriously usable for large-scale
concurrency, we need to have them too.

Thoughts? Other ideas?


+1




--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: References in D

2012-10-04 Thread Jonathan M Davis
On Friday, October 05, 2012 05:42:03 Alex Burton wrote:
> I realise what is currently the case; I am making an argument as
> to why I think this should be changed (at least for class references
> in D).

This was talking about C++ references, not D, giving an example of how they 
can be null even though most people think that they can't be. int& isn't even 
legal syntax in D. So, you're responding to the wrong post if you want to talk 
about D references. There are plenty of other places in this thread where such 
a response would make sense, but not here.

Regardless, references in D will _never_ be non-nullable. It would break too 
much code to change it now regardless of whether nullable or non-nullable is 
better. At most, you'll get non-nullable references in addition to nullable 
ones at some point in the future, but that's not going to happen anytime soon. 
The solution that has been decided on is to add a wrapper struct to Phobos 
which allows you to treat a reference as non-nullable. It's far too late to 
change how D works with something so core to the language.
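A rough sketch of what such a wrapper struct could look like follows. The name NotNull and all of its details are hypothetical illustrations of the idea, not the actual design decided on for Phobos:

```d
// Hypothetical sketch of a non-nullable reference wrapper; the null
// check is paid once at construction, after which the wrapped
// reference can be used like a plain one.
struct NotNull(T) if (is(T == class))
{
    private T _ref;

    @disable this(); // no default construction (it would be null)

    this(T r)
    {
        assert(r !is null, "NotNull constructed from null");
        _ref = r;
    }

    // Expose the wrapped reference transparently.
    @property inout(T) get() inout { return _ref; }
    alias get this;
}
```

Thanks to the alias this, a NotNull!Foo can be passed anywhere a Foo is expected, while the @disable'd default constructor rules out the uninitialized (null) case at compile time.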

- Jonathan M Davis


Unsafe variadic arguments -> array assignment

2012-10-04 Thread H. S. Teoh
This code (rightfully) generates an error:

int[] f(int[] args...) {
    return args;
}

However, this code doesn't generate any warning or error:

import std.conv;
import std.stdio;

class C {
    real[] val;

    this(real[] v...) {
        val = v;
    }

    override string toString() {
        return to!string(val);
    }
}

C f() {
    return new C(1.0);
}

void main() {
    auto e = f();
    writeln(e);
}

This code may _appear_ to work on some machines, but actually there is a
nasty bug lurking in it: the ctor's arguments are on the call stack, and
'val' is left referencing an array on the stack which has gone out of
scope. When dereferenced later, it will quite likely read garbage
values, because that part of the stack has been overwritten with other
stuff in the interim! On my machine, the output is:

[1.93185]

(It should be [1.0].)

Rewriting the ctor to read as follows fixes the problem:

this(real[] v...) {
    val = v.dup;
}

The compiler should not allow a variadic argument list to escape into an
object member like this. Is this a known issue? I'll file a new bug if
not.
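Putting the fix together, a self-contained version of the corrected example follows. The clobber helper is an addition of mine, only there to deliberately overwrite the stale stack area, which makes the bug reproduce reliably if the .dup is removed:

```d
import std.conv;
import std.stdio;

class C {
    real[] val;

    this(real[] v...) {
        // v is a slice of the caller's stack frame; .dup copies the
        // data to the GC heap so it survives the constructor's return.
        val = v.dup;
    }

    override string toString() {
        return to!string(val);
    }
}

C f() {
    return new C(1.0);
}

void clobber() {
    // Overwrite the stack region that held the variadic arguments.
    real[32] junk = 42.0;
}

void main() {
    auto e = f();
    clobber();
    writeln(e); // [1], thanks to the .dup in the constructor
}
```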


T

-- 
ASCII stupid question, getty stupid ANSI.


Re: Will the D GC be awesome?

2012-10-04 Thread Jacob Carlborg

On 2012-10-05 04:57, timotheecour wrote:


Why not simply introduce an "expand" property for structs?

foo(s.expand) //equivalent to foo(s.a,s.b)

That would be the exact analogous of expand for tuples and would
maintain a sane type system .


We already have the .tupleof property:

struct Foo
{
    int x;
    int y;
}

void foo (int a, int b)
{
    writeln(a, " ", b);
}

void main ()
{
    auto f = Foo(1, 2);
    foo(f.tupleof);
}

This works today and I'm pretty sure it has for a long time.

--
/Jacob Carlborg


Re: References in D

2012-10-04 Thread Alex Burton

On Friday, 5 October 2012 at 04:50:18 UTC, Jonathan M Davis wrote:

On Friday, October 05, 2012 05:42:03 Alex Burton wrote:

I realise what is currently the case; I am making an argument as 
to why I think this should be changed (at least for class 
references in D).


This was talking about C++ references, not D, giving an example 
of how they can be null even though most people think that they 
can't be. int& isn't even legal syntax in D.


I was talking about both. Regardless of whether the int& 
reference or the int* pointer was used to assign to memory 
address 0 in Walter's example, the result is still a crash, and 
it will be in D too with equivalent code using explicit pointers 
or instances of classes (which are always pointers in D).


The crash is the result of two mistakes:

The first is the language designer allowing null to be an 
acceptable value for a pointer to an int. It should be blatantly 
obvious that null is not a pointer to an int, but for historical 
reasons inherited from C (when people were just happy to get out 
of assembly language) it has been allowed.


The second mistake is that someone chose to use this language 
feature, which clearly makes no sense.

This is bad programming for two reasons:
1) It is logically incorrect to state that 0 is a pointer to 
something.
2) It is a case of using magic numbers in code, an antipattern. 
It tries to create some universal consensus that the magic 
number 0 means something special, but what I am supposed to do 
with a null pointer is not so universal.
Do I construct it? Do I throw an exception? Why on earth has 
someone sent me this 0 when my type system specifies I want a 
pointer to an int?


Regardless, references in D will _never_ be non-nullable. It 
would break too much code to change it now, regardless of 
whether nullable or non-nullable is better.


I don't think this argument is any more powerful than any of the 
other 'let's remain compatible with C to avoid breakage' ones.


If it were changed, there could be compiler errors for 
uninitialised references and for missing null tests. These sorts 
of compile-time errors are much preferable, IMHO, to undefined 
behaviour in released code that crashes.


Alex