Re: D wiki

2012-11-15 Thread Andrej Mitrovic
On 11/15/12, Vladimir Panteleev  wrote:
> I'd love to look into writing a wiki syntax converter to move all
> the content from ProWiki to a new MediaWiki instance

Well even if we don't have that we will have an opportunity to review
and update outdated articles.


Re: Pyd thread

2012-11-15 Thread Maxim Fomin
On Thursday, 15 November 2012 at 02:51:08 UTC, Ellery Newcomer 
wrote:
Just tried building a shared library on linux with dmd (and 
calling it from C).


It works! Holy crap, it even runs my static constructors and 
unittests! I only had to screw with the linking process a 
little bit!


It doesn't work for x64, though. Gives me

/usr/bin/ld: /usr/lib64/dmd/libphobos2.a(object__c_58c.o): 
relocation R_X86_64_32 against `_D10TypeInfo_m6__initZ' can not 
be used when making a shared object; recompile with -fPIC

/usr/lib64/dmd/libphobos2.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
--- errorlevel 1

Though why it doesn't do this for x32 is beyond me. Those 
object files don't appear to be -fPIC either.


You can dynamically link to D shared libraries on linux 
(http://forum.dlang.org/thread/k3vfm9$1tq$1...@digitalmars.com?page=2). 
The message you receive actually means that you cannot build a 
shared library from the current version of Phobos and Druntime, due to 
how they are compiled. However, I have seen several people working on 
making druntime shared. The solution is not to put druntime in the 
.so file.


I tried to investigate which features do not work with dynamic 
linking (not loading) and found at least one for certain: scope(XXX) 
statements are not invoked under some circumstances.





Re: D wiki

2012-11-15 Thread Tobias Pankrath
On Thursday, 15 November 2012 at 08:28:09 UTC, Andrej Mitrovic 
wrote:
On 11/15/12, Vladimir Panteleev  wrote:
I'd love to look into writing a wiki syntax converter to move all
the content from ProWiki to a new MediaWiki instance


Well even if we don't have that we will have an opportunity to review
and update outdated articles.


We really should do that and remove everything not proven to be 
up to date.


Re: Compiler bug? (alias sth this; and std.signals)

2012-11-15 Thread Joe

On Wednesday, 14 November 2012 at 09:31:47 UTC, eskimo wrote:
But first it is copied to every generic function that might be 
called on the way.


Ok, I guess it just doesn't do what I understood it to do (which 
is too bad, but to be expected with a new language). In any case 
you would appear to be correct, as


void main()
{
Foo f;
Observer o = new Observer;
f.prop.connect(&o.watch);
f.prop = 7;
writeln(f.prop.get);
}

works. It just doesn't look like it was intended.


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Manu
On 14 November 2012 19:54, Andrei Alexandrescu <
seewebsiteforem...@erdani.org> wrote:

> On 11/14/12 9:31 AM, David Nadlinger wrote:
>
>> On Wednesday, 14 November 2012 at 15:08:35 UTC, Andrei Alexandrescu wrote:
>>
>>> Sorry, I was imprecise. We need to (a) define intrinsics for loading
>>> and storing data with high-level semantics (a short list: acquire,
>>> release, acquire+release, and sequentially-consistent) and THEN (b)
>>> implement the needed code generation appropriately for each
>>> architecture. Indeed on x86 there is little need to insert fence
>>> instructions, BUT there is a definite need for the compiler to prevent
>>> certain reorderings. That's why implementing shared data operations
>>> (whether implicit or explicit) as sheer library code is NOT possible.
>>>
>>
>> Sorry, I didn't see this message of yours before replying (the perils of
>> threaded news readers…).
>>
>> You are right about the fact that we need some degree of compiler
>> support for atomic instructions. My point was that is it already
>> available, otherwise it would have been impossible to implement
>> core.atomic.{atomicLoad, atomicStore} (for DMD inline asm is used, which
>> prohibits compiler code motion).
>>
>
> Yah, the whole point here is that we need something IN THE LANGUAGE
> DEFINITION about atomicLoad and atomicStore. NOT IN THE IMPLEMENTATION.
>
> THIS IS VERY IMPORTANT.


I won't outright disagree, but this seems VERY dangerous to me.

You need to carefully study all popular architectures, and consider that if
the language is made to depend on these primitives, and an architecture
doesn't support them, or doesn't support that particular style of implementation
(fairly likely), then D will become incompatible with a huge number of
architectures on that day.

This is a very big deal. I would be scared to see the compiler generate
intrinsic calls to atomic synchronisation primitives. It's almost like
banning architectures from the language.

The Nintendo Wii, for instance (not an unpopular machine; it only sold 130
million units!), does not have synchronisation instructions in its
architecture (insane, I know, but there it is; I've had to spend time
working around this in the past).
I'm sure it's not unique in this way.

People getting fancy with lock-free/atomic operations will probably wrap it
up in libraries. And they're not globally applicable: atomic memory
operations don't magically solve problems; they require very specific
structures and access patterns around them. I'm just not convinced they
should be intrinsics issued by the language. They're just not as well
standardised as 'int' or 'float'.

Side note: I still think a convenient and fairly practical solution is to
make 'shared' things 'lockable': you can lock()/unlock() them, and
assignment to/from shared things is valid (no casting), but a runtime
assert insists that the entity is locked whenever it is accessed. It's
simplistic, but it's safe, and it works with the same primitives that
already exist and are proven. Let the programmer mark the lock/unlock
moments, worry about sequencing, etc., at least for the time being. Don't
try and do it automatically (yet).
The broad use cases in D aren't yet known, but making 'shared' useful today
would be valuable.
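
A minimal library-level sketch of that idea (hypothetical Locked wrapper
using core.sync.mutex; illustrative only, not an actual language feature):

import core.sync.mutex;

// Hypothetical wrapper approximating the proposal: the value carries a
// mutex, and every access asserts that the caller currently holds the lock.
// (Simplified: a real version would track the owning thread.)
struct Locked(T)
{
    private T payload;
    private Mutex mtx;
    private bool held;

    this(T value)
    {
        payload = value;
        mtx = new Mutex;
    }

    void lock()   { mtx.lock();  held = true; }
    void unlock() { held = false; mtx.unlock(); }

    // Access is only legal while the lock is held.
    ref T get()
    {
        assert(held, "Locked value accessed without holding its lock");
        return payload;
    }
}

void example()
{
    auto counter = Locked!int(0);
    counter.lock();
    scope (exit) counter.unlock();
    counter.get() += 1;   // fine: the lock is held
}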

 Thus, »we«, meaning on a language level, don't need to change anything
>> about the current situations, with the possible exception of adding
>> finer-grained control to core.atomic.MemoryOrder/msync [1]. It is the
>> duty of the compiler writers to provide the appropriate means to
>> implement druntime on their code generation infrastructure – and indeed,
>> the situation in DMD could be improved, using inline asm is hitting a
>> fly with a sledgehammer.
>>
>
> That is correct. My point is that compiler implementers would follow some
> specification. That specification would contain information that atomicLoad
> and atomicStore must have special properties that put them apart from any
> other functions.
>
>
>  David
>>
>>
>> [1] I am not sure where the point of diminishing returns is here,
>> although it might make sense to provide the same options as C++11. If I
>> remember correctly, D1/Tango supported a lot more levels of
>> synchronization.
>>
>
> We could start with sequential consistency and then explore riskier/looser
> policies.
>
>
> Andrei
>


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Manu
On 15 November 2012 04:30, Andrei Alexandrescu <
seewebsiteforem...@erdani.org> wrote:

> On 11/11/12 6:30 PM, Walter Bright wrote:
>
>> 1. ensure single-threaded access by acquiring a mutex
>> 2. cast away shared
>> 3. operate on the data
>> 4. cast back to shared
>> 5. release the mutex
>>
>
> This is very different from how I view we should do things (and how we
> actually agreed to do things and how I wrote in TDPL).
>
> I can't believe I need to restart this on a cold cache.


The pattern Walter describes is primitive and useful; I'd like to see
shared assist to that end (see my previous post).
You can endeavour to do any other fancy stuff you like, but until some
distant future when it's actually done, then proven and well supported,
I'll keep doing this.
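
For reference, a minimal sketch of that five-step pattern as it is written
today (hypothetical counter example, assuming core.sync.mutex):

import core.sync.mutex;

shared int counter;
__gshared Mutex counterLock;

shared static this()
{
    counterLock = new Mutex;
}

void bump()
{
    counterLock.lock();                  // 1. ensure single-threaded access
    scope (exit) counterLock.unlock();   // 5. release the mutex on scope exit
    int* p = cast(int*) &counter;        // 2. cast away shared
    *p += 1;                             // 3. operate on the data
    // 4. nothing to cast back explicitly: the unshared view goes out of
    //    scope and the data is only reachable as shared again
}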

Not to repeat my prev post... but in reply to Walter's take on it, it would
be interesting if 'shared' just added implicit lock()/unlock() methods to
do the mutex acquisition and then removed the cast requirement, but had the
language runtime assert that the object is locked whenever it is accessed
(this guarantees the safety in a more useful way; the casts are really
annoying). I can't imagine a simpler and more immediately useful solution.

In fact, it's a reasonably small step to this being possible with
user-defined attributes. Admittedly, attributes currently have no mechanism to
add a mutex and lock/unlock methods to the object being attributed (as
is possible in Java/C#), but maybe it's not a huge leap.


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Jacob Carlborg

On 2012-11-14 22:06, Walter Bright wrote:


I hate to repeat myself, but:

Thread 1:
 1. create shared object
 2. pass reference to that object to Thread 2
 3. destroy object

Thread 2:
 1. manipulate that object


Why would the object be destroyed if there's still a reference to it? If 
the object is manually destroyed, I don't see what threads have to do 
with it, since you can do the same thing in a single-threaded application.


--
/Jacob Carlborg


Re: Compiler bug? (alias sth this; and std.signals)

2012-11-15 Thread eskimo
On Thu, 2012-11-15 at 09:53 +0100, Joe wrote:
> On Wednesday, 14 November 2012 at 09:31:47 UTC, eskimo wrote:
> > But first it is copied to every generic function that might be 
> > called on
> > the way.
> 
> Ok, I guess it just doesn't do what I understood it to do (which 
> is too bad, but to be expected with a new language). In any case 
> you would appear to be correct, as
> 
> void main()
> {
>   Foo f;
>   Observer o = new Observer;
>   f.prop.connect(&o.watch);
>   f.prop = 7;
>   writeln(f.prop.get);
> }
> 
> works. It just doesn't look like intended.

Well, if signal had a proper postblit constructor, your original way of
doing it would work. It is just not as efficient, but this is a price
you have to pay. The compiler has no way of knowing that you intended to
pass the contained int from the beginning, when you are actually passing
the containing struct.

But, considering that alias this triggers only when you are issuing
an operation not supported by the struct itself, it is pretty reasonable
behaviour, and anything else would be pretty surprising.
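
A small illustration of that rule, with a hypothetical Property type
standing in for the signal-carrying struct (the names are made up for the
example):

import std.stdio;

struct Property
{
    int value;
    alias value this;

    // An operation the struct supports itself: resolved directly,
    // no alias this involved.
    void connect(void delegate(int) cb) { }
}

void takesInt(int x) { writeln(x); }

void main()
{
    Property p;
    p.connect((int x) { });  // handled by Property itself
    p = 7;                   // no matching opAssign, so forwarded: p.value = 7
    takesInt(p);             // no overload takes Property, converts via alias this
}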

Best regards,

Robert




Re: Binary compatibility on Linux

2012-11-15 Thread Jacob Carlborg

On 2012-11-15 08:51, Thomas Koch wrote:


You're right about make. However, the Makefiles that one needs today for
Debian packages are so trivial that it's not worth worrying about them. The
most basic debian/rules (which is a Makefile) looks like:

#!/usr/bin/make -f
%:
dh $@

You only need to add additional targets if you want to override default
actions. In that case you usually add simple targets with a few lines.

We could switch from Makefiles to something else but it's simply not worth
the effort.


Well, I simply don't think Makefiles are worth the effort.


But after all, you don't need to do the Debian packaging yourself. It's even,
for various reasons, a bit frowned upon if upstream is also the maintainer of
the Debian package. Just be a good upstream[2] and find a Debian maintainer
who cares about your software. The same goes for Fedora.


I wasn't thinking about making the actual Debian package; I was more 
thinking of building the actual software.



[2] wiki.debian.org/UpstreamGuide


I've read that page and from my understanding they prefer to use "make":

"Please don't use SCons"
"Using waf as build system is discouraged"


--
/Jacob Carlborg


Re: Growing a Language (applicable to @attribute design)

2012-11-15 Thread Timon Gehr

On 11/14/2012 11:24 PM, Walter Bright wrote:

On 11/14/2012 3:18 AM, Timon Gehr wrote:

template Foo(alias a){ }
struct S{}

alias S X; // ok
alias int Y;   // ok
mixin Foo!S;   // ok
mixin Foo!int; // not ok

Please fix that. (Everything should be ok.)


Please file a bugzilla for that.



http://d.puremagic.com/issues/show_bug.cgi?id=9029


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Jacob Carlborg

On 2012-11-15 10:22, Manu wrote:


Not to repeat my prev post... but in reply to Walter's take on it, it
would be interesting if 'shared' just added implicit lock()/unlock()
methods to do the mutex acquisition and then remove the cast
requirement, but have the language runtime assert that the object is
locked whenever it is accessed (this guarantees the safety in a more
useful way, the casts are really annying). I can't imagine a simpler and
more immediately useful solution.


How about implementing a library function, something like this:

shared int i;

lock(i, (x) {
// operate on x
});

* "lock" will acquire a lock
* Cast away shared for "i"
* Call the delegate with the now plain "int"
* Release the lock

http://pastebin.com/tfQ12nJB
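
A minimal sketch of a helper along those lines (hypothetical implementation
with a single global mutex; not necessarily what the pastebin contains):

import core.sync.mutex;

__gshared Mutex globalLock;

shared static this() { globalLock = new Mutex; }

// Acquire a lock, strip shared from the variable for the duration of the
// delegate call, and release the lock afterwards.
// (Simplified: one global mutex; a real version might pick the mutex
// based on the variable's address.)
void lock(T)(ref shared T var, void delegate(ref T) dg)
{
    globalLock.lock();
    scope (exit) globalLock.unlock();
    dg(*cast(T*) &var);   // cast away shared while the lock is held
}

void example()
{
    shared int i;
    lock(i, delegate(ref int x) { x += 1; });   // operate on x as a plain int
}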

--
/Jacob Carlborg


Re: Growing a Language (applicable to @attribute design)

2012-11-15 Thread Joseph Rushton Wakeling

On 11/14/2012 12:06 PM, Simen Kjaeraas wrote:

But the syntax for built-in types is better, in that you don't need to
write:

auto x = int(1);


 suppose I want a size_t?



Re: What's the deal with __thread?

2012-11-15 Thread Don Clugston

On 14/11/12 23:16, Walter Bright wrote:

On 11/14/2012 12:06 PM, Sean Kelly wrote:

On Nov 14, 2012, at 6:26 AM, Don Clugston  wrote:


IIRC it was used prior to 2.030. In the spec, it is in the keyword list,
and it's also listed in the "Migrating to shared" article. That's all.
There are a small number of uses of it in the DMD test suite.

Is it still valid? Is it useful? Or has everyone forgotten that it still
exists?


I think __thread was for explicit TLS before TLS became the default.
I don't
see a continued use for it.



Sean's right.


Good, that's what I thought. Let's remove it from the spec and deprecate 
it. There is probably no extant code that uses it, outside of the test 
suite.


However, there is one case in the test suite which is unclear to me:

extern(C) __thread int x;

Is there any other way to do this?




Re: What's the deal with __thread?

2012-11-15 Thread Jacob Carlborg

On 2012-11-15 11:28, Don Clugston wrote:


However, there is one case in the test suite which is unclear to me:

extern(C) __thread int x;

Is there any other way to do this?


extern (C) int x;

"extern(C)" doesn't make it global.

--
/Jacob Carlborg


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Manu
On 15 November 2012 12:14, Jacob Carlborg  wrote:

> On 2012-11-15 10:22, Manu wrote:
>
>  Not to repeat my prev post... but in reply to Walter's take on it, it
>> would be interesting if 'shared' just added implicit lock()/unlock()
>> methods to do the mutex acquisition and then remove the cast
>> requirement, but have the language runtime assert that the object is
>> locked whenever it is accessed (this guarantees the safety in a more
>> useful way, the casts are really annying). I can't imagine a simpler and
>> more immediately useful solution.
>>
>
> How about implementing a library function, something like this:
>
> shared int i;
>
> lock(i, (x) {
> // operate on x
> });
>
> * "lock" will acquire a lock
> * Cast away shared for "i"
> * Call the delegate with the now plain "int"
> * Release the lock
>
> http://pastebin.com/tfQ12nJB


Interesting concept. Nice idea, and it could certainly be useful, but it doesn't
address the problem as directly as my suggestion.
There are still many problem situations, for instance, any time a template
is involved. The template doesn't know to do that internally, but under my
proposal, you lock it prior to the workload, and then the template works as
expected. Templates won't just break and fail whenever shared is involved,
because assignments would be legal. They'll just assert that the thing is
locked at the time, which is the programmer's responsibility to ensure.


Re: Growing a Language (applicable to @attribute design)

2012-11-15 Thread Walter Bright

On 11/15/2012 2:24 AM, Joseph Rushton Wakeling wrote:

On 11/14/2012 12:06 PM, Simen Kjaeraas wrote:

But the syntax for built-in types is better, in that you don't need to
write:

auto x = int(1);


 suppose I want a size_t?



size_t x = 1;


Re: What's the deal with __thread?

2012-11-15 Thread Walter Bright

On 11/15/2012 2:28 AM, Don Clugston wrote:

However, there is one case in the test suite which is unclear to me:

extern(C) __thread int x;

Is there any other way to do this?


extern(C) int x;



D is awesome

2012-11-15 Thread eskimo
Hey guys!

I just wanted to say that D is really really really awesome and I wanted
to thank everyone contributing to it.

I think what D needs the most at the moment is bug fixing so I am very
pleased to read the commit messages:

Fixed bug ...
Fixed bug ...
Fixed bug ...
Fixed bug ...
.

Coming from many different contributors. Also, every bug I have stumbled upon
so far had already been reported before, which means that D has a
very active and not too small user base already. I like that.

I just wanted to post this, because most of the time people post just
about what doesn't work or about possible improvements. But at least from time to
time one should lean back and smile at all the stuff people have
accomplished so far.

So my final words: D is awesome. vibe.d is awesome. Thank you! And of
course let's work together and make it even better.



Re: I'm back

2012-11-15 Thread Daniel Murphy
"Andrei Alexandrescu"  wrote in message 
news:k81k6s$1qm7$1...@digitalmars.com...
> On 11/14/12 5:30 PM, Daniel Murphy wrote:
>> "Andrei Alexandrescu"  wrote in message
>> news:k80l8p$397$1...@digitalmars.com...
>>> On 11/14/12 7:29 AM, H. S. Teoh wrote:
 But since this isn't going to be fixed properly, then the only solution
 left is to arbitrarily declare transient ranges as not ranges (even
 though the concept of ranges itself has no such implication, and many
 algorithms don't even need such assumptions), and move on. We will just
 have to put up with an inferior implementation of std.algorithm and
 duplicate code when one *does* need to work with transient ranges. It is
 not a big loss anyway, since one can simply implement one's own library
 to deal with this issue properly.
>>>
>>> What is your answer to my solution?
>>>
>>> transient elements == input range && not forward range && element type has
>>> mutable indirections.
>>>
>>> This is testable by any interested clients, covers a whole lot of 
>>> ground,
>>> and has a good intuition behind it.
>>>
>>>
>>> Andrei
>>
>> Is it just me, or would this still refuse:
>> array(map!"a.dup"(stdin.byLine())) ?
>
> It would accept mapping to!string.
>
> Andrei
>

Is that really good enough?  Keeping ranges simple is important, but so is 
making the obvious solution 'just work'. 




Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Regan Heath
On Thu, 15 Nov 2012 04:33:20 -, Michel Fortin  
 wrote:


On 2012-11-15 02:51:13 +, "Jonathan M Davis"   
said:


I have no idea what we want to do about this situation though.  
Regardless of
what we do with memory barriers and the like, it has no impact on  
whether

casts are required.


Let me restate and extend that idea to atomic operations. Declare a  
variable using the synchronized storage class and it automatically gets a  
mutex:


synchronized int i; // declaration

i++; // error, variable shared

synchronized (i)
i++; // fine, variable is thread-local inside synchronized block

Synchronized here is some kind of storage class causing two things: a  
mutex is attached to the variable declaration, and the type of the  
variable is made shared. The variable being shared, you can't access it  
directly. But a synchronized statement will make the variable non-shared  
within its bounds.


Now, if you want a custom mutex class, write it like this:

synchronized(SpinLock) int i;

synchronized(i)
{
// implicit: i.mutexof.lock();
// implicit: scope (exit) i.mutexof.unlock();
i++;
}

If you want to declare the mutex separately, you could do it by  
specifying a variable instead of a type in the variable declaration:


Mutex m;
synchronized(m) int i;

synchronized(i)
{
// implicit: m.lock();
// implicit: scope (exit) m.unlock();
i++;
}

Also, if you have a read-write mutex and only need read access, you  
could declare that you only need read access using const:


synchronized(RWMutex) int i;

synchronized(const i)
{
// implicit: i.mutexof.constLock();
// implicit: scope (exit) i.mutexof.constUnlock();
i++; // error, i is const
}

And finally, if you want to use atomic operations, declare it this way:

synchronized(Atomic) int i;

You can't really synchronize on something protected by Atomic:

	synchronized(i) // cannot make a synchronized block, no lock/unlock method in Atomic

{}

But you can call operators on it while synchronized, it works for  
anything implemented by Atomic:


synchronized(i)++; // implicit: Atomic.opUnary!"++"(i);

Because the policy object is associated with the variable declaration,  
when locking the mutex you need direct access to the original variable,  
or an alias to it. Same for performing atomic operations. You can't pass  
a reference to some function and have that function perform the locking.  
If that's a problem it can be avoided by having a way to pass the mutex  
to the function, or by passing an alias to a template.


+1

I suggested something similar as did Sönke:
http://forum.dlang.org/thread/k7orpj$1tt5$1...@digitalmars.com?page=2#post-op.wnnuiio554xghj:40puck.auriga.bhead.co.uk

According to deadalnix the compiler magic I suggested to add the mutex  
isn't possible:

http://forum.dlang.org/thread/k7orpj$1tt5$1...@digitalmars.com?page=3#post-k7qsb5:242gqk:241:40digitalmars.com

Most of our ideas can be implemented with a wrapper template containing  
the sync object (mutex, etc).


So... my feeling is that the best solution for "shared", ignoring the  
memory barrier aspect which I would relegate to a different feature and  
solve a different way, is..


1. Remove the existing mutex from object.
2. Require that all objects passed to synchronized() {} statements  
implement a synchable(*) interface
3. Design a Shared(*) wrapper template/struct that contains a mutex and  
implements synchable(*)
4. Design a Shared(*) base class which contains a mutex and implements  
synchable(*)


Then we design classes which are always shared using the base class and we  
wrap other objects we want to share in Shared() and use them in  
synchronized statements.


This would then relegate any builtin "shared" statement to be solely a  
storage class which makes the object global and not thread local.


(*) names up for debate

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: I'm back

2012-11-15 Thread Jonathan M Davis
On Thursday, November 15, 2012 22:07:22 Daniel Murphy wrote:
> Is that really good enough?  Keeping ranges simple is important, but so is
> making the obvious solution 'just work'.

std.array.array will never work with ranges with a transient front unless it 
somehow knew when it was and wasn't appropriate to dup, which it's not going 
to know purely by looking at the type of front. The creator of the range would 
have to tell it somehow. And even then, it wouldn't work beyond the built-in 
types, because there's no generic way to dup stuff.

So, either std.array.array will not work directly with byLine or byChunk, or 
byLine and byChunk need to not have transient fronts. If their not working with 
std.array.array is too un-user-friendly, then they need to be changed so that 
they don't have transient fronts, and transient fronts should just be 
considered invalid ranges (though there's no way to actually test for them, so 
anyone who wrote them would still be able to try and use them - they just 
wouldn't work).
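
To make the failure mode concrete, a minimal sketch of a range with a
transient front, in the spirit of byLine (hypothetical BufferedLines type,
not the Phobos implementation):

import std.algorithm : map;
import std.array : array;
import std.stdio;

// A range that reuses one buffer for every element, like byLine does.
struct BufferedLines
{
    private char[] buffer;
    private int line;

    @property bool empty() const { return line == 3; }

    @property char[] front()
    {
        if (buffer is null) buffer = new char[](2);
        buffer[0] = 'a';
        buffer[1] = cast(char)('0' + line);
        return buffer;            // the same buffer is handed out every time
    }

    void popFront() { ++line; }
}

void main()
{
    // Every element aliases the same buffer, so this prints ["a2", "a2", "a2"].
    writeln(array(BufferedLines()));

    // Copying each front first gives the expected ["a0", "a1", "a2"].
    writeln(array(map!"a.idup"(BufferedLines())));
}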

- Jonathan M Davis


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Jonathan M Davis
On Wednesday, November 14, 2012 20:32:35 Andrei Alexandrescu wrote:
> TDPL 13.14 explains that inside synchronized classes, top-level shared
> is automatically lifted.

Then it's doing the casting for you. I suppose that that's an argument that 
using synchronized classes when dealing with shared is the way to go (which 
IIRC TDPL does argue), but that only applies to classes, and there are plenty 
of cases (maybe even the majority) where it's built-in types like arrays or 
AAs which people are trying to share, and synchronized classes won't help them 
there unless they create wrapper types. And explicit casting will be required 
for them. And of course, anyone wanting to use mutexes or synchronized blocks 
will have to use explicit casts regardless of what they're protecting, because 
it won't be inside a synchronized class. So, while synchronized classes make 
dealing with classes nicer, they only handle a very specific portion of  what 
might be used with shared.

In any case, I clearly need to reread TDPL's threading stuff (and maybe the 
whole book). It's been a while since I read it, and I'm getting rusty on the 
details.

By the way, speaking of synchronized classes, as I understand it, they're 
still broken with regard to TDPL in that synchronized is still used on 
functions rather than classes as TDPL describes. So, they aren't currently a 
solution regardless of what the language's actual design is supposed to be. 
Obviously, that should be fixed though.

- Jonathan M Davis


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Jonathan M Davis
On Thursday, November 15, 2012 11:22:30 Manu wrote:
> Not to repeat my prev post... but in reply to Walter's take on it, it would
> be interesting if 'shared' just added implicit lock()/unlock() methods to
> do the mutex acquisition and then remove the cast requirement, but have the
> language runtime assert that the object is locked whenever it is accessed
> (this guarantees the safety in a more useful way, the casts are really
> annying). I can't imagine a simpler and more immediately useful solution.
> 
> In fact, it's a reasonably small step to this being possible with
> user-defined attributes. Although attributes have no current mechanism to
> add a mutex, and lock/unlock methods to the object being attributed (like
> is possible in Java/C#), but maybe it's not a huge leap.

1. It wouldn't stop you from needing to cast away shared at all, because 
without casting away shared, you wouldn't be able to pass it to anything, 
because the types would differ. Even if you were arguing for something 
like

void foo(C c) {...}
shared c = new C;
foo(c); //no cast required, lock automatically taken

it wouldn't work, because then foo could squirrel away a reference to c somewhere, 
and the type system would have no way of knowing that it was a shared variable 
that was being squirreled away as opposed to a thread-local one, which means that 
it'll likely generate incorrect code. That can happen with the cast as well, 
but at least in that case, you're forced to be explicit about it, and it's 
automatically @system. If it's done for you, it'll be easy to miss and screw 
up.

2. It's often the case that you need to lock/unlock groups of stuff together, 
such that locking specific variables is often of limited use and would just 
introduce pointless extra locks when dealing with multiple variables. It would 
also increase the risk of deadlocks, because you wouldn't have much - if any - 
control over what order locks were acquired in when dealing with multiple 
shared variables.

- Jonathan M Davis


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Jonathan M Davis
On Thursday, November 15, 2012 10:22:22 Jacob Carlborg wrote:
> On 2012-11-14 22:06, Walter Bright wrote:
> > I hate to repeat myself, but:
> > 
> > Thread 1:
> >  1. create shared object
> >  2. pass reference to that object to Thread 2
> >  3. destroy object
> > 
> > Thread 2:
> >  1. manipulate that object
> 
> Why would the object be destroyed if there's still a reference to it? If
> the object is manually destroyed I don't see what threads have to do
> with it since you can do the same thing in a single thread application.

Yeah. If the reference passed across were shared, then the runtime should see 
it as having multiple references, and if it's _not_ shared, that means that 
you cast shared away (unsafe, since it's a cast) and passed it across threads 
without making sure that it was the only reference on the original thread. In 
that case, you shot yourself in the foot by using an @system construct 
(casting) and not getting it right. I don't see why the runtime would have to 
worry about that.

Unless the problem is that the object is a value type, so when it goes away on 
the first thread, it _has_ to be destroyed? If that's the case, then it's a 
pointer that was passed across rather than a reference, and then you've 
effectively done the same thing as returning a pointer to a local variable, 
which is @system and again only happens if you're getting @system wrong, which 
the compiler generally doesn't protect you from beyond giving you an error in 
the few cases where it can determine for certain that what you're doing is 
wrong (which is a fairly limited portion of the time).

So, as far as I can see - unless I'm just totally missing something here - 
either you're dealing with shared objects on the heap here, in which case, the 
object shouldn't be destroyed on the first thread unless you do it manually (in 
which case, you're doing something stupid in @system code), or you're dealing 
with passing pointers to shared value types across threads, which is 
essentially the equivalent of escaping a pointer to a local variable (in which 
case, you're doing something stupid in @system code). In either case, you're 
doing something stupid in @system code, and I don't see why the runtime 
would have to worry about it. You shot yourself in the foot by incorrectly 
using @system code. If you want protection against that, then don't use @system 
code.

- Jonathan M Davis


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Benjamin Thaut

On 15.11.2012 12:48, Jonathan M Davis wrote:


Yeah. If the reference passed across were shared, then the runtime should see
it as having multiple references, and if it's _not_ shared, that means that
you cast shared away (unsafe, since it's a cast) and passed it across threads
without making sure that it was the only reference on the original thread. In
that case, you shot yourself in the foot by using an @system construct
(casting) and not getting it right. I don't see why the runtime would have to
worry about that.

Unless the problem is that the object is a value type, so when it goes away on
the first thread, it _has_ to be destroyed? If that's the case, then it's a
pointer that was passed across rather than a reference, and then you've
effectively done the same thing as returning a pointer to a local variable,
which is @system and again only happens if you're getting @system wrong, which
the compiler generally doesn't protect you from beyond giving you an error in
the few cases where it can determine for certain that what you're doing is
wrong (which is a fairly limited portion of the time).

So, as far as I can see - unless I'm just totally missing something here -
either you're dealing with shared objects on the heap here, in which case, the
object shouldn't be destroyed on the first thread unless you do it manually (in
which case, you're doing something stupid in @system code), or you're dealing
with passing pointers to shared value types across threads, which is
essentially the equivalent of escaping a pointer to a local variable (in which
case, you're doing something stupid in @system code). In either case, it's
you're doing something stupid in @system code, and I don't see why the runtime
would have to worry about it. You shot yourself in the foot by incorrectly
using @system code. If you want protection agains that, then don't use @system
code.

- Jonathan M Davis



Thank you, that's exactly how I'm thinking too. And because of this it 
makes absolutely no sense to me to disallow the destruction of a shared 
struct if it is allocated on the stack or as a global. If it is 
allocated on the heap you can't destroy it manually anyway, because 
delete is deprecated.


And for exactly this reason I wanted a code example from Walter, because 
just listing a few bullet points does not make a real-world use case.


Kind Regards
Benjamin Thaut



Re: I'm back

2012-11-15 Thread jerro


std.array.array will never work with ranges with a transient 
front unless it
somehow knew when it was and wasn't appropriate to dup, which 
it's not going
to know purely by looking at the type of front. The creator of 
the range would
have to tell them somehow. And even then, it wouldn't work 
beyond the built-in

types, because there's no generic way to dup stuff.


Daniel was actually talking about std.byLine.map!"a.dup", which 
is not a transient range, but would be considered transient if we 
did what Andrei suggests.


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread deadalnix

On 14/11/2012 21:01, Sean Kelly wrote:

On Nov 14, 2012, at 6:32 AM, Andrei Alexandrescu 
 wrote:


This is a simplification of what should be going on. The 
core.atomic.{atomicLoad, atomicStore} functions must be intrinsics so the 
compiler generate sequentially consistent code with them (i.e. not perform 
certain reorderings). Then there are loads and stores with weaker consistency 
semantics (acquire, release, acquire/release, and consume).


No.  These functions all contain volatile asm blocks.  If the compiler respected the 
"volatile" it would be enough.


It is sufficient for single-core, and mostly correct for x86, but it isn't enough in general.

volatile isn't for concurrency, but for memory mapping.


Re: I'm back

2012-11-15 Thread Jonathan M Davis
On Thursday, November 15, 2012 13:17:12 jerro wrote:
> > std.array.array will never work with ranges with a transient
> > front unless it
> > somehow knew when it was and wasn't appropriate to dup, which
> > it's not going
> > to know purely by looking at the type of front. The creator of
> > the range would
> > have to tell them somehow. And even then, it wouldn't work
> > beyond the built-in
> > types, because there's no generic way to dup stuff.
> 
> Daniel was actually talking about std.byLine.map!"a.dup", which
> is not a transient range, but would be considered transient if we
> did what Andrei suggests.

Well, there's no way around that as far as I can see. Even if all ranges had 
to be explicitly marked as transient or not, map would be in a bind here, 
because it knows nothing about what the function it was given is doing, so it 
has no way of knowing how it affects transience. At minimum, it would be forced 
to mark itself as transient if the original range was (even if the function 
used idup), or it would _always_ be forced to mark it as transient (I'm not 
sure which). The only way out would be if there were a way to tell map 
explicitly to mark the resultant range as having a non-transient front.

By using type deduction like Andrei is suggesting, we can at least deduce 
that map!"a.idup" has a non-transient front, but the only way that we'd know 
that map!"a.dup" was non-transient was if map were told somehow, and it defined 
an enum that the hasTransientFront trait could examine (i.e. we're back in the 
boat we'd be in if all ranges had to declare whether they were transient or 
not). So, as long as we can have transient fronts, map!"a.dup" is screwed, 
which may or may not be a problem. It's arguably a lot like how we keep having 
to explain why functions don't work with narrow strings because of how narrow 
strings aren't random-access, don't have length, etc. And that's definitely 
annoying, but we can't really fix it.
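
A sketch of the hasTransientFront trait mentioned above, following Andrei's
definition from earlier in the thread (input range, not a forward range,
element type with mutable indirections); one minimal possible implementation:

import std.range : ElementType, isForwardRange, isInputRange;
import std.traits : hasAliasing;

template hasTransientFront(R)
{
    enum hasTransientFront = isInputRange!R
        && !isForwardRange!R
        && hasAliasing!(ElementType!R);
}

// By this rule, byLine (an input range of char[]) would be flagged as
// transient, while a range of string (immutable(char)[]) would not be,
// since hasAliasing is false for types with only immutable indirections.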

It's looking like this comes down to either banning ranges with transient 
fronts entirely (and changing how ByLine and ByChunk work), or we're going to 
have to deal with quirks like array(map!"a.dup"(file.byLine())) not working 
whereas array(map!"a.idup"(file.byLine())) does work.

- Jonathan M Davis


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread luka8088

On 15.11.2012 11:52, Manu wrote:

On 15 November 2012 12:14, Jacob Carlborg  wrote:

On 2012-11-15 10:22, Manu wrote:

Not to repeat my prev post... but in reply to Walter's take on
it, it
would be interesting if 'shared' just added implicit lock()/unlock()
methods to do the mutex acquisition and then remove the cast
requirement, but have the language runtime assert that the object is
locked whenever it is accessed (this guarantees the safety in a more
useful way, the casts are really annying). I can't imagine a
simpler and
more immediately useful solution.


How about implementing a library function, something like this:

shared int i;

lock(i, (x) {
 // operate on x
});

* "lock" will acquire a lock
* Cast away shared for "i"
* Call the delegate with the now plain "int"
* Release the lock

http://pastebin.com/tfQ12nJB


Interesting concept. Nice idea, could certainly be useful, but it
doesn't address the problem as directly as my suggestion.
There are still many problem situations, for instance, any time a
template is involved. The template doesn't know to do that internally,
but under my proposal, you lock it prior to the workload, and then the
template works as expected. Templates won't just break and fail whenever
shared is involved, because assignments would be legal. They'll just
assert that the thing is locked at the time, which is the programmers
responsibility to ensure.



I managed to make a simple example that works with the current 
implementation:


http://dpaste.dzfl.pl/27b6df62

http://forum.dlang.org/thread/k7orpj$1tt5$1...@digitalmars.com?page=4#post-k7s0gs:241h45:241:40digitalmars.com

It seems to me that solving this shared issue cannot be done purely on a 
compiler basis but will require runtime support. Actually I don't see 
how it can be done properly without being able to say "this lock must be 
locked when accessing this variable".


http://dpaste.dzfl.pl/edbd3e10


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Jonathan M Davis
On Thursday, November 15, 2012 14:32:47 Manu wrote:
> On 15 November 2012 13:38, Jonathan M Davis  wrote:

> I don't really see the difference, other than, as you say, the cast is
> explicit.
> Obviously the possibility for the situation you describe exists, it's
> equally possible with the cast, except this way, the usage pattern is made
> more convenient, the user has a convenient way to control the locks and
> most importantly, it would work with templates.
> That said, this sounds like another perfect application of 'scope'. Perhaps
> only scope parameters can receive a locked, shared thing... that would
> mechanically protect you against escape.

You could make casting away const implicit too, which would make some code 
easier, but it would be a disaster, because the programmer wouldn't have a clue 
that it's happening in many cases, and the code would end up being very, very 
wrong. Implicitly casting away shared would put you in the same boat. _Maybe_ 
you could get away with it in very restricted circumstances where both pure 
and scope are being used, but then it becomes so restrictive that it's nearly 
useless anyway. And again, it would be hidden from the programmer, when this 
is something that _needs_ to be explicit. Having implicit locks happen on you 
could really screw with any code trying to do explicit locks, as would be 
needed anyway in all but the most basic cases.

> 2. It's often the case that you need to lock/unlock groups of stuff together
> > such that locking specific variables is of often of limited use and would
> > just
> > introduce pointless extra locks when dealing with multiple variables. It
> > would
> > also increase the risk of deadlocks, because you wouldn't have much - if
> > any -
> > control over what order locks were acquired in when dealing with multiple
> > shared variables.
> 
> Your fear is precisely the state we're in now, except it puts all the work
> on the user to create and use the synchronisation objects, and also to
> assert that things are locked when they are accessed.
> I'm just suggesting some reasonably simple change that would make the
> situation more usable and safer immediately, short of waiting for all these
> fantastic designs being discussed having time to simmer and manifest.

Except that with your suggestion, you're introducing potential deadlocks which 
are outside of the programmer's control, and you're introducing extra overhead 
with those locks (both in terms of memory and in terms of the runtime costs). 
Not to mention, it would probably cause all kinds of issues for something like 
shared int* to have a mutex with it, because then its size is completely 
different from int*. It also would cause even worse problems when that shared 
int* was cast to int* (aside from the size issues), because all of the locking 
that was happening for the shared int* was invisible. If you want automatic 
locks, then use synchronized classes. That's what they're for.

Honestly, I really don't buy into the idea that it makes sense for shared to 
magically make multi-threaded code work without the programmer worrying about 
locks. Making it so that it's well-defined as to what's atomic is great for 
code that has any chance of being lock-free, but it's still up to the 
programmer to understand when locks are and aren't needed and how to use them 
correctly. I don't think that it can possibly work for it to be automatic. 
It's far too easy to introduce deadlocks, and it would only work in the 
simplest of cases anyway, meaning that the programmer needs to understand and 
properly solve the issues anyway. And if the programmer has to understand it 
all to get it right, why bother adding the extra overhead and deadlock 
potential caused by automatically locking anything? D provides some great 
synchronization primitives. People should use them.

I think that the only things that shared really needs to be solving are:

1. Indicating to the compiler via the type system that the object is not 
thread-local. This properly segregates shared and unshared code and allows the 
compiler to take advantage of thread locality for optimizations and avoid 
optimizations with shared code that screw up threading (e.g. double-checked 
locking won't work if the compiler does certain optimizations).

2. Making it explicit and well-defined as part of the language which operations 
can be assumed to be atomic (even if that set of operations is very small, 
having it be well-defined is valuable).

3. Ensuring sequential consistency so that it's possible to do lock-free code 
when atomic operations permit it and so that there are fewer weird issues due 
to undefined behavior.
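
For reference, points 2 and 3 roughly describe the operations that
core.atomic already exposes as library calls; a minimal usage sketch with
the default sequentially consistent ordering (hypothetical hits counter):

import core.atomic;

shared int hits;

void record()
{
    atomicOp!"+="(hits, 1);     // read-modify-write as a single atomic step
}

int snapshot()
{
    return atomicLoad(hits);    // sequentially-consistent load by default
}

void reset()
{
    atomicStore(hits, 0);       // sequentially-consistent store by default
}

bool clearIfUntouched()
{
    return cas(&hits, 1, 0);    // compare-and-swap: 1 -> 0 only if still 1
}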

- Jonathan M Davis


Re: I'm back

2012-11-15 Thread eskimo
On Wed, 2012-11-14 at 18:31 -0800, Andrei Alexandrescu wrote:
> > array(map!"a.dup"(stdin.byLine()))

So it seems there is a good way of handling ranges with a transient front
for algorithms that need a persistent front?

Why not simply document any transient range as transient (it should be
documented anyway) and add that little hint to map? Also note that some algorithms
might not work as expected with transient fronts. In addition, at least
the algorithms in Phobos should state in their documentation whether
they rely on a non-transient front or not.

To me it seems that ranges and algorithms already offer a solution to
the problem.

The other way round would of course be better (safe behaviour the
default, fast behaviour the option), but as a matter of fact there is no real
unsafe behaviour; it just might be unexpected if you don't know what you
are doing.

On the other hand, if an algorithm depends unnecessarily on non-transient
fronts, it should be fixed. If there are many algorithms which can be
more efficient with the dependency on a non-transient front, we could
simply provide a second module, called std.transalgorithm (or something),
offering dedicated algorithms for transient fronts. (So people don't
have to roll their own.)

I think this is a very clean and straight forward solution. If you want
something that simply works you just use map!"a.dup" ( or whatever you
need to copy your elements) and don't care. If you want performance then
you would have to check what algorithms to use and have a look at
std.transalgorithm.

My apologies if someone else already suggested something like this, I
haven't read all the threads about this topic entirely.  



Re: Something needs to happen with shared, and soon.

2012-11-15 Thread deadalnix

On 14/11/2012 23:21, Andrei Alexandrescu wrote:

On 11/14/12 12:00 PM, Sean Kelly wrote:

On Nov 14, 2012, at 6:16 AM, Andrei
Alexandrescu wrote:


On 11/14/12 1:20 AM, Walter Bright wrote:

On 11/13/2012 11:37 PM, Jacob Carlborg wrote:

If the compiler should/does not add memory barriers, then is there a
reason for
having it built into the language? Can a library solution be enough?


Memory barriers can certainly be added using library functions.


The compiler must understand the semantics of barriers such as e.g.
it doesn't hoist code above an acquire barrier or below a release
barrier.


That was the point of the now deprecated "volatile" statement. I still
don't entirely understand why it was deprecated.


Because it's better to associate volatility with data than with code.



Happy to see I'm not alone on that one.

Plus, volatile and sequential consistency are two different beasts. 
Volatile means no register promotion and no load/store reordering. It is 
required, but not sufficient, for concurrency.


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread deadalnix

On 14/11/2012 22:09, Walter Bright wrote:

On 11/14/2012 7:08 AM, Andrei Alexandrescu wrote:

On 11/14/12 6:39 AM, Alex Rønne Petersen wrote:

On 14-11-2012 15:14, Andrei Alexandrescu wrote:

On 11/14/12 1:19 AM, Walter Bright wrote:

On 11/13/2012 11:56 PM, Jonathan M Davis wrote:

Being able to have double-checked locking work would be valuable, and
having
memory barriers would reduce race condition weirdness when locks
aren't used
properly, so I think that it would be desirable to have memory
barriers.


I'm not saying "memory barriers are bad". I'm saying that having the
compiler blindly insert them for shared reads/writes is far from the
right way to do it.


Let's not hasten. That works for Java and C#, and is allowed in C++.

Andrei




I need some clarification here: By memory barrier, do you mean x86's
mfence, sfence, and lfence?


Sorry, I was imprecise. We need to (a) define intrinsics for loading
and storing
data with high-level semantics (a short list: acquire, release,
acquire+release,
and sequentially-consistent) and THEN (b) implement the needed code
generation
appropriately for each architecture. Indeed on x86 there is little
need to
insert fence instructions, BUT there is a definite need for the
compiler to
prevent certain reorderings. That's why implementing shared data
operations
(whether implicit or explicit) as sheer library code is NOT possible.


Because as Walter said, inserting those blindly when unnecessary can
lead to terrible performance because it practically murders
pipelining.


I think at this point we need to develop a better understanding of
what's going
on before issuing assessments.


Yes. And also, I agree that having something typed as "shared" must
prevent the compiler from reordering them. But that's separate from
inserting memory barriers.



I'm sorry, but that is dumb.

What is the point of ensuring that the compiler does not reorder 
loads/stores if the CPU is allowed to do so?


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread deadalnix

On 15/11/2012 10:08, Manu wrote:

The Nintendo Wii for instance, not an unpopular machine, only sold 130
million units! Does not have synchronisation instructions in the
architecture (insane, I know, but there it is. I've had to spend time
working around this in the past).
I'm sure it's not unique in this way.



Can you elaborate on that ?


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sönke Ludwig
On 15.11.2012 05:32, Andrei Alexandrescu wrote:
> On 11/14/12 7:24 PM, Jonathan M Davis wrote:
>> On Thursday, November 15, 2012 03:51:13 Jonathan M Davis wrote:
>>> I have no idea what we want to do about this situation though. Regardless of
>>> what we do with memory barriers and the like, it has no impact on whether
>>> casts are required. And I think that introducing the shared equivalent of
>>> const would be a huge mistake, because then most code would end up being
>>> written using that attribute, meaning that all code essentially has to be
>>> treated as shared from the standpoint of compiler optimizations. It would
>>> almost be the same as making everything shared by default again. So, as far
>>> as I can see, casting is what we're forced to do.
>>
>> Actually, I think that what it comes down to is that shared works nicely when
>> you have a type which is designed to be shared, and it encapsulates 
>> everything
>> that it needs. Where it starts requiring casting is when you need to pass it
>> to other stuff.
>>
>> - Jonathan M Davis
> 
> TDPL 13.14 explains that inside synchronized classes, top-level shared is 
> automatically lifted.
> 
> Andrei

There are three problems I currently see with this:

 - It's not actually implemented
 - It's not safe because unshared references can be escaped or dragged in
 - Synchronized classes provide no way to avoid the automatic locking in 
certain methods, but often it is necessary to have more fine-grained control 
for efficiency reasons, or to avoid deadlocks



Re: I'm back

2012-11-15 Thread monarch_dodra
On Thursday, 15 November 2012 at 12:57:24 UTC, Jonathan M Davis 
wrote:
It's looking like this comes down to either banning ranges with 
transient
fronts entirely (and changing how ByLine and ByChunk work), or 
we're going to
have to deal with quirks like array(map!"a.dup"(file.byLine())) 
not working

whereas array(map!"a.idup"(file.byLine())) does work.

- Jonathan M Davis


I still say this could be "simply" solved the same way we solved 
the "size_t" indexing problem: only ranges that have 
non-transient elements are guaranteed to be supported by Phobos 
algorithms/functions.


Everything else: Use at your own risk.



Re: D is awesome

2012-11-15 Thread nazriel

On Thursday, 15 November 2012 at 11:07:05 UTC, eskimo wrote:

Hey guys!

I just wanted to say that D is really really really awesome and 
I wanted

to thank everyone contributing to it.

I think what D needs the most at the moment is bug fixing so I 
am very

pleased to read the commit messages:

Fixed bug ...
Fixed bug ...
Fixed bug ...
Fixed bug ...
.

Coming from many different contributors. Also every bug I 
stumbled upon
until now had already been reported before, which means that D 
has a

very active and not too small user base already. I like that.

I just wanted to post this, because most of the time people 
post just
what not works or about possible improvements. But at least 
from time to

time one should lean back and smile on all the stuff people have
accomplished so far.

So my final words: D is awesome. vibe.d is awesome. Thank you! 
And of

course lets work together and make it even better.


I wish I could click [Like it] ;)


Re: Compiler bug? (alias sth this; and std.signals)

2012-11-15 Thread Joe

On Thursday, 15 November 2012 at 09:37:55 UTC, eskimo wrote:


But, considering that the alias this triggers only when you are 
issuing
an operation not supported by the struct itself, it is pretty 
reasonable

behaviour and everything else would be pretty surprising.

Best regards,

Robert


I wonder though why it works at all then, because without the
alias the string conversion *is* supported and produces
"Property(7)".


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Manu
On 15 November 2012 15:00, Jonathan M Davis  wrote:

> On Thursday, November 15, 2012 14:32:47 Manu wrote:
> > On 15 November 2012 13:38, Jonathan M Davis  wrote:
>
> > I don't really see the difference, other than, as you say, the cast is
> > explicit.
> > Obviously the possibility for the situation you describe exists, it's
> > equally possible with the cast, except this way, the usage pattern is
> made
> > more convenient, the user has a convenient way to control the locks and
> > most importantly, it would work with templates.
> > That said, this sounds like another perfect application of 'scope'.
> Perhaps
> > only scope parameters can receive a locked, shared thing... that would
> > mechanically protect you against escape.
>
> You could make casting away const implicit too, which would make some code
> easier, but it would be a disaster, because the programer wouldn't have a
> clue
> that it's happening in many cases, and the code would end up being very,
> very
> wrong. Implicitly casting away shared would put you in the same boat.


... no, they're not even the same thing. const things cannot be changed.
Shared things are still mutable things, and perfectly compatible with other
non-shared mutable things; they just have some access control requirements.

_Maybe_ you could get away with it in very restricted circumstances where
> both pure
> and scope are being used, but then it becomes so restrictive that it's
> nearly
> useless anyway. And again, it would be hidden from the programmer, when
> this
> is something that _needs_ to be explicit. Having implicit locks happen on
> you
> could really screw with any code trying to do explicit locks, as would be
> needed anyway in all but the most basic cases.
>

I think you must have misunderstood my suggestion; I certainly didn't
suggest locking would be implicit.
All locks would be explicit; all I suggested is that shared things would
gain an associated mutex, and an implicit assert that said mutex is locked
whenever it is accessed, rather than denying assignment between
shared/unshared things.

You could use lock methods, or a nice alternative would be to submit them
to some sort of synchronised scope like luka illustrates.

I'm of the opinion that for the time being, explicit lock control is
mandatory (anything else is a distant dream), and atomic primitives may not
be relied upon.

> > > 2. It's often the case that you need to lock/unlock groups of stuff
> > > together such that locking specific variables is often of limited use
> > > and would just introduce pointless extra locks when dealing with
> > > multiple variables. It would also increase the risk of deadlocks,
> > > because you wouldn't have much - if any - control over what order locks
> > > were acquired in when dealing with multiple shared variables.
> >
> > Your fear is precisely the state we're in now, except it puts all the
> > work on the user to create and use the synchronisation objects, and also
> > to assert that things are locked when they are accessed.
> > I'm just suggesting some reasonably simple change that would make the
> > situation more usable and safer immediately, short of waiting for all
> > these fantastic designs being discussed having time to simmer and
> > manifest.
>
> Except that with your suggestion, you're introducing potential deadlocks
> which are outside of the programmer's control, and you're introducing
> extra overhead with those locks (both in terms of memory and in terms of
> the runtime costs). Not to mention, it would probably cause all kinds of
> issues for something like shared int* to have a mutex with it, because
> then its size is completely different from int*. It also would cause even
> worse problems when that shared int* was cast to int* (aside from the
> size issues), because all of the locking that was happening for the
> shared int* was invisible. If you want automatic locks, then use
> synchronized classes. That's what they're for.
>
> Honestly, I really don't buy into the idea that it makes sense for shared
> to magically make multi-threaded code work without the programmer
> worrying about locks. Making it so that it's well-defined as to what's
> atomic is great for code that has any chance of being lock-free, but it's
> still up to the programmer to understand when locks are and aren't needed
> and how to use them correctly. I don't think that it can possibly work
> for it to be automatic. It's far too easy to introduce deadlocks, and it
> would only work in the simplest of cases anyway, meaning that the
> programmer needs to understand and properly solve the issues anyway. And
> if the programmer has to understand it all to get it right, why bother
> adding the extra overhead and deadlock potential caused by automatically
> locking anything? D provides some great synchronization primitives.
> People should use them.
>

To all above:
You've completely misunderstood my suggestion. It's basically 

Re: DConf 2013 on kickstarter.com: we're live!

2012-11-15 Thread Joseph Rushton Wakeling

On 10/22/2012 07:25 PM, Andrei Alexandrescu wrote:

Please pledge your support and encourage your friends to do the same. Hope to
see you in 2013!


About that t-shirt thing -- is Kickstarter really accurate to say "US only"?  Or 
can you enter an EU address and pay shipping charges?


Re: What's the deal with __thread?

2012-11-15 Thread Don Clugston

On 15/11/12 11:54, Walter Bright wrote:

On 11/15/2012 2:28 AM, Don Clugston wrote:

However, there is one case in the test suite which is unclear to me:

extern(C) __thread int x;

Is there any other way to do this?


extern(C) int x;



What about extern(C) variables which are not thread local?
(which I think would be the normal case).
Then from a C header,

extern(C) int x;

must become:

extern(C) __gshared int x;

in D. It's a very rare case, I guess, but it's one of those situations 
where D code silently has different behaviour from identical C code.


Re: [RFC] A huge problem with Github diff

2012-11-15 Thread Alex Rønne Petersen

On 15-11-2012 08:35, Thomas Koch wrote:

Andrei Alexandrescu wrote:

On 11/14/12 12:36 PM, Andrej Mitrovic wrote:

On 11/14/12, Alex Rønne Petersen  wrote:

Or we could switch to Phabricator for our entire review process which
has an absolutely awesome side-by-side diff and is generally a fantastic
tool for distributed-style software projects.

See my email to dmd-internals:
http://lists.puremagic.com/pipermail/dmd-internals/2012-

October/004900.html


I don't see what's awesome about it


Everything? :o)


Yes, from the featurelist Phabricator looks pretty awesome. And I'm
suffering again about an interesting piece of software written in the
language of my nightmares: PHP

Of course that's only my personal windmill I'm fighting. I just wanted to
mention Gerrit Code Review:
http://en.wikipedia.org/wiki/Gerrit_%28software%29

I'm in the process of packaging Gerrit for Debian, but this won't be ready
before 2013. You can of course install Gerrit with upstreams .war file.

Regards, Thomas Koch



Pick your poison: PHP or Java. ;)

*flees*

--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: What's the deal with __thread?

2012-11-15 Thread Alex Rønne Petersen

On 15-11-2012 15:42, Don Clugston wrote:

On 15/11/12 11:54, Walter Bright wrote:

On 11/15/2012 2:28 AM, Don Clugston wrote:

However, there is one case in the test suite which is unclear to me:

extern(C) __thread int x;

Is there any other way to do this?


extern(C) int x;



What about extern(C) variables which are not thread local?
(which I think would be the normal case).
Then from a C header,

extern(C) int x;

must become:

extern(C) __gshared int x;

in D. It's a very rare case, I guess, but it's one of those situations
where D code silently has different behaviour from identical C code.


I think most people are aware of this 'quirk' from what I've seen in 
binding projects, so it's probably not a big deal.


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Compiler bug? (alias sth this; and std.signals)

2012-11-15 Thread eskimo
:-) Indeed, that is the only thing that surprised me too (but not as
much as in another language, because of D's capabilities). The solution,
I think, is this overload of formatValue found in std.format:

void formatValue(Writer, T, Char)(Writer w, auto ref T val, ref FormatSpec!Char f)
if ((is(T == struct) || is(T == union)) && (hasToString!(T, Char) || !isBuiltinType!T)
    && !is(T == enum))

-> Its body implements the generic printing for structs. It either calls
the struct's toString() method if available, uses formatRange() if it is
a range, or otherwise prints its type name with its contained values.
But as you can see, the template's constraint requires !isBuiltinType!T,
so in the case of your alias this to an int, it won't be used. So the
implementer of this method most likely took into account the possibility
of an alias this to a built-in type.

Btw., I love D's readability, it was really easy to find this and to
understand what it does.

Best regards,

Robert

On Thu, 2012-11-15 at 15:11 +0100, Joe wrote:

> I wonder though why it works at all then, because without the
> alias the string conversion *is* supported and produces
> "Property(7)".




Re: DConf 2013 on kickstarter.com: we're live!

2012-11-15 Thread Andrei Alexandrescu

On 11/15/12 6:39 AM, Joseph Rushton Wakeling wrote:

On 10/22/2012 07:25 PM, Andrei Alexandrescu wrote:

Please pledge your support and encourage your friends to do the same.
Hope to
see you in 2013!


About that t-shirt thing -- is Kickstarter really accurate to say "US
only"? Or can you enter an EU address and pay shipping charges?


I think you can, but I'm not sure. Anyhow we can arrange something - 
feel free to contribute $50 for "no reward" and then contact me to get 
the T-shirt.


Andrei


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Andrei Alexandrescu

On 11/15/12 1:08 AM, Manu wrote:

On 14 November 2012 19:54, Andrei Alexandrescu
<seewebsiteforem...@erdani.org> wrote:
Yah, the whole point here is that we need something IN THE LANGUAGE
DEFINITION about atomicLoad and atomicStore. NOT IN THE IMPLEMENTATION.

THIS IS VERY IMPORTANT.


I won't outright disagree, but this seems VERY dangerous to me.

You need to carefully study all popular architectures, and consider that
if the language is made to depend on these primitives, and the
architecture doesn't support it, or support that particular style of
implementation (fairly likely), then D will become incompatible with a
huge number of architectures on that day.


All contemporary languages that are serious about concurrency support 
atomic primitives one way or another. We must too. There's no two ways 
about it.


[snip]

Side note: I still think a convenient and fairly practical solution is
to make 'shared' things 'lockable'; where you can lock()/unlock() them,
and assignment to/from shared things is valid (no casting), but a
runtime assert insists that the entity is locked whenever it is
accessed.


This (IIUC) is conflating mutex-based synchronization with memory models 
and atomic operations. I suggest we postpone anything related to that 
for the sake of staying focused.



Andrei


A working way to improve the "shared" situation

2012-11-15 Thread Sönke Ludwig
After working a bit more on it (accompanied by a bad flu with 40 °C fever, so
hopefully it's not all wrong in reality), I now have a library approach that
allows shared objects to be used in a (statically checked) safe and comfortable
way. As a bonus, it also introduces an isolated/unique type that can be safely
moved between threads and converts safely to immutable (and mutable).

It would be really nice to get a discussion going to see if this or something 
similar should be
included in Phobos, and which (if any) language extensions that could help (or
replace) such an approach are realistic to get implemented in the short term
(e.g. Walter
suggested
__unique(expression) to statically verify that an expression yields a value 
with no mutable aliasing
to the outside).

But first a rough description of the proposed system - there are three basic 
ingredients:

 - ScopedRef!T:

   wraps a type allowing only operations that are guaranteed to not leak any 
references in or out.
   This type is non-copyable but allows reference-like access to a value. In 
contrast to 'scope' it
   works recursively and also works on return values in addition to function 
parameters.

 - Isolated!T:

   Statically ensures that any contained aliasing is either immutable or is 
only reachable through
   the Isolated!T itself (*strong isolation*). This allows safe passing between 
threads and safe
   conversion to immutable. A less strict mode also allows shared aliasing 
(*weak isolation*).
   Implicit conversion to immutable is not possible for weakly isolated values, 
but they can still
   safely be moved between threads and accessed without locking or similar 
means. As such they
   provide a natural bridge between the shared and the thread local world. 
Isolated!T is
   non-copyable, but can be move()d between variables.

 - ScopedLock!T:

   Provides scoped access to shared objects. It will lock the object's mutex 
and provide access to
   its non-shared methods and fields. A convenience function lock() is used to 
construct a
   ScopedLock!T, which is also non-copyable. The type T must be weakly 
isolated, because otherwise
   it cannot be guaranteed that there are no shared references that are not 
also marked with
   'shared'.

The operations done on either of these three wrappers are forced to be (weakly) 
pure and may not
have parameters or return types that could leak references (neither /to/ nor 
/from/ the outside).

It solves a number of common usage patterns, not only removing the need for 
casts, but also
statically verifying the correctness of the code. The following example shows 
it in action. Apart
from the pure annotations ('pure:' would help), nothing else is necessary.

---
import stdx.typecons;

class Item {
private double m_value;
this(double value) pure { m_value = value; }
@property double value() const pure { return m_value; }
}

class Manager {
private {
string m_name;
Isolated!(Item) m_ownedItem;
Isolated!(shared(Item)[]) m_items;
}

this(string name) pure
{
m_name = name;
auto itm = makeIsolated!Item(3.5);
// _move_ itm to m_ownedItem
m_ownedItem = itm;
// itm is now empty
}

void addItem(shared(Item) item) pure { m_items ~= item; }

double getTotalValue() const pure {
double sum = 0;

// lock() is required to access shared objects
foreach( ref itm; m_items ) sum += itm.lock().value;

// owned objects can be accessed without locking
sum += m_ownedItem.value;

return sum;
}
}

void main()
{
import std.stdio;

auto man = new shared(Manager)("My manager");
{ // doing multiple method calls during a single lock is no problem
auto l = man.lock();
l.addItem(new shared(Item)(1.5));
l.addItem(new shared(Item)(0.5));
}

writefln("Total value: %s", man.lock().getTotalValue());
}
---

This all works quite well and comes close to what the C# system that I linked
some days ago (*) is able to do. Notably, ScopedRef!T allows isolated objects
to be modified directly without having to implement the recovery rules that
the paper mentions. It cannot capture all those cases, but is good enough in
most of them. Note that I left out a lot of small details; this is just meant
to get the general idea across.

There are still some open points where I think small language changes are 
needed to make this
bullet-proof:

 - It would be nice to be able to disallow 'auto var = 
somethingThatReturnsScopedRef();'. Copying
   can nicely be disabled using '@disable this(this)', but initializing a 
variable can't. This
   opens up a possible hole:

   ---
   Isolated!MyType myvalue = ...;
   ScopedRef!int fi

Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Dmitry Olshansky

11/15/2012 1:06 AM, Walter Bright wrote:

On 11/14/2012 3:14 AM, Benjamin Thaut wrote:

A small code example which would break as soon as we allow destructing
of shared
value types would really be nice.


I hate to repeat myself, but:

Thread 1:
 1. create shared object
 2. pass reference to that object to Thread 2
 3. destroy object

Thread 2:
 1. manipulate that object


Ain't structs typically copied anyway?

Reference would imply pointer then. If the struct is on the stack (weird 
but could be) then the thread that created it destroys the object once. 
The thing is as unsafe as escaping a pointer is.


Personally I think that shared stuff allocated on the stack is 
here-be-dragons @system code in any case.


Otherwise it's GC's responsibility to destroy heap allocated struct when 
there are no references to it.


What's so puzzling about it?

BTW currently GC-allocated structs are not having their destructor 
called at all. The bug is however _minor_ ...


http://d.puremagic.com/issues/show_bug.cgi?id=2834
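
A minimal illustration of that issue (with the current behaviour described
above, the destructor simply never runs):

struct S
{
    ~this() { /* not invoked for the GC-allocated instance below */ }
}

void main()
{
    auto p = new S; // heap-allocated struct: the GC never calls ~this
}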

--
Dmitry Olshansky


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Mehrdad
Would it be useful if 'shared' in D did something like 'volatile' 
in C++ (as in, Andrei's article on volatile-correctness)?

http://www.drdobbs.com/cpp/volatile-the-multithreaded-programmers-b/184403766


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Dmitry Olshansky

11/15/2012 8:33 AM, Michel Fortin wrote:


If you want to declare the mutex separately, you could do it by
specifying a variable instead of a type in the variable declaration:

 Mutex m;
 synchronized(m) int i;

 synchronized(i)
 {
 // implicit: m.lock();
 // implicit: scope (exit) m.unlock();
 i++;
 }


While the rest of the proposal was more or less fine, I don't get why we
need escape control of the mutex at all - in any case it just opens up a
possibility to shoot yourself in the foot.


I'd say:
"Need direct access to mutex? - Go on with the manual way it's still 
right there (and scope(exit) for that matter)".


Another problem is that somebody clever can escape a reference to the
unlocked 'i' from inside of synchronized to somewhere else.


But anyway we can make it in the library right about now.

synchronized T ---> Synchronized!T
synchronized(i){ ... } --->

i.access((x){
//will lock & cast away shared T inside of it
...
});

I fail to see what it doesn't solve (aside from syntactic sugar).

The key point is that Synchronized!T is otherwise an opaque type.
We could pack a few other simple primitives like 'load', 'store' etc. 
All of them will go through lock-unlock.
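
A rough, untested sketch of such a wrapper (names purely illustrative):

import core.sync.mutex;

struct Synchronized(T)
{
    private shared T value;
    private Mutex mtx;

    static Synchronized opCall()
    {
        Synchronized s;
        s.mtx = new Mutex;
        return s;
    }

    // the only way in: lock, hand out an unshared view, unlock on exit
    void access(scope void delegate(ref T) dg)
    {
        mtx.lock();
        scope (exit) mtx.unlock();
        dg(*cast(T*) &value);
    }
}

// usage:
//   auto i = Synchronized!int();
//   i.access((ref int x) { ++x; });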


Even escaping a reference can be solved by passing a proxy of T inside of
'access'. It could even assert that the lock is indeed locked.

Same goes for Atomic!T. Though the set of primitives is quite limited 
depending on T.
(I thought that built-in shared(T) is already atomic though so no need 
to reinvent this wheel)


It's time we finally agree that the 'shared' qualifier is an assembly
language of multi-threading based on sharing. It just needs some safe
patterns in the library.


That and clarifying explicitly what guarantees (aside from being well.. 
being shared) it provides w.r.t. memory model.


Until reaching this thread I was under impression that shared means:
- globally visible
- atomic operations for stuff that fits in one word
- sequentially consistent guarantee
- any other forms of access are disallowed except via casts

--
Dmitry Olshansky


Re: function overload on full signature?

2012-11-15 Thread foobar

On Wednesday, 14 November 2012 at 19:12:59 UTC, Timon Gehr wrote:

On 11/14/2012 06:43 PM, foobar wrote:

On Tuesday, 13 November 2012 at 21:34:28 UTC, Rob T wrote:
I'm wondering why overloading has been implemented to only match on
the argument list rather than the full signature which includes the
return type? I know I would use it if it was available.

I'm not requesting this to be a feature of D, I'm only asking why it
is not being done.

--rt


This is hardly a new idea. It was implemented in a few languages of the
70's and it proved to be adding complexity and generally not worth the
trouble.


I guess they just were not doing it right then.

No language nowadays bothers with this based on those past lessons.


Haskell.

> fromInteger 2 :: Float
2.0


I thought that Haskell doesn't have function overloading (which 
simplifies this greatly)... Anyway, I mostly meant "standard" 
imperative/OO languages. Sorry for the confusion.


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 15, 2012, at 3:16 AM, Regan Heath  wrote:
> 
> I suggested something similar as did Sönke:
> http://forum.dlang.org/thread/k7orpj$1tt5$1...@digitalmars.com?page=2#post-op.wnnuiio554xghj:40puck.auriga.bhead.co.uk
> 
> According to deadalnix the compiler magic I suggested to add the mutex isn't 
> possible:
> http://forum.dlang.org/thread/k7orpj$1tt5$1...@digitalmars.com?page=3#post-k7qsb5:242gqk:241:40digitalmars.com
> 
> Most of our ideas can be implemented with a wrapper template containing the 
> sync object (mutex, etc).

If I understand you correctly, you don't need anything that explicitly contains 
the sync object.  A global table of mutexes used according to the address of 
the value to be mutated should work.
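
Something along these lines, roughly (hypothetical helper, just to
illustrate the idea):

import core.sync.mutex;

// global lock table, keyed by the address of the value being mutated
__gshared Mutex[64] lockTable;

shared static this()
{
    foreach (ref m; lockTable)
        m = new Mutex;
}

Mutex lockFor(const(void)* addr)
{
    // hash the address down to one of the table slots
    return lockTable[(cast(size_t) addr / 16) % lockTable.length];
}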


> So... my feeling is that the best solution for "shared", ignoring the memory 
> barrier aspect which I would relegate to a different feature and solve a 
> different way, is..
> 
> 1. Remove the existing mutex from object.
> 2. Require that all objects passed to synchronized() {} statements implement 
> a synchable(*) interface
> 3. Design a Shared(*) wrapper template/struct that contains a mutex and 
> implements synchable(*)
> 4. Design a Shared(*) base class which contains a mutex and implements 
> synchable(*)

It would be nice to eliminate the mutex that's optionally built into classes 
now.  The possibility of having to allocate a new mutex on whatever random 
function call happens to be the first one with "synchronized" is kinda not 
great.

Re: I'm back

2012-11-15 Thread Dmitry Olshansky

11/15/2012 5:20 PM, monarch_dodra wrote:

On Thursday, 15 November 2012 at 12:57:24 UTC, Jonathan M Davis wrote:

It's looking like this comes down to either banning ranges with transient
fronts entirely (and changing how ByLine and ByChunk work), or we're
going to
have to deal with quirks like array(map!"a.dup"(file.byLine())) not
working
whereas array(map!"a.idup"(file.byLine())) does work.

- Jonathan M Davis


I still say this could be "simply" solved the same way we solved the
"size_t" indexing problem: Only ranges that have non-transient elements
are guaranteed supported by phobos algorithms/functions.

Everything else: Use at your own risk.



Yeah! Let's introduce undefined behavior into the standard library!

Wrong type of index at least breaks at compile time.

--
Dmitry Olshansky


Re: Binary compatibility on Linux

2012-11-15 Thread Russel Winder
On Thu, 2012-11-15 at 10:35 +0100, Jacob Carlborg wrote:
[…]
> > [2] wiki.debian.org/UpstreamGuide
> 
> I've read that page and from my understanding they prefer to use "make":
> 
> "Please don't use SCons"
> "Using waf as build system is discouraged"

Comments made by people who are steeped in Autoconf/Automake and haven't
actually used more modern systems such as SCons or Waf.

The comments on the website are almost, but not quite, totally wrong on
all important points.
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 14, 2012, at 6:28 PM, Andrei Alexandrescu 
 wrote:

> On 11/14/12 4:50 PM, Sean Kelly wrote:
>> On Nov 14, 2012, at 2:25 PM, Andrei
>> Alexandrescu  wrote:
>> 
>>> On 11/14/12 1:09 PM, Walter Bright wrote:
 Yes. And also, I agree that having something typed as "shared"
 must prevent the compiler from reordering them. But that's
 separate from inserting memory barriers.
>>> 
>>> It's the same issue at hand: ordering properly and inserting
>>> barriers are two ways to ensure one single goal, sequential
>>> consistency. Same thing.
>> 
>> Sequential consistency is great and all, but it doesn't render
>> concurrent code correct.  At worst, it provides a false sense of
>> security that somehow it does accomplish this, and people end up
>> actually using it as such.
> 
> Yah, but the baseline here is acquire-release which has subtle differences 
> that are all the more maddening.

Really?  Acquire-release always seemed to have equivalent safety to me.  
Typically, the user doesn't even have to understand that optimization can occur 
upwards across the trailing boundary of the block, etc, to produce correct 
code.  Though I do agree that the industry is moving towards sequential 
consistency, so there may be no point in trying for something weaker.

Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 11, 2012, at 6:30 PM, Walter Bright  wrote:
> 
> To make a shared type work in an algorithm, you have to:
> 
> 1. ensure single threaded access by aquiring a mutex
> 2. cast away shared
> 3. operate on the data
> 4. cast back to shared
> 5. release the mutex


So what happens if you pass a reference to the now non-shared object to a 
function that caches a local reference to it?  Half the point of the attribute 
is to protect us from accidents like this.

Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 15, 2012, at 4:54 AM, deadalnix  wrote:

> Le 14/11/2012 21:01, Sean Kelly a écrit :
>> On Nov 14, 2012, at 6:32 AM, Andrei 
>> Alexandrescu  wrote:
>>> 
>>> This is a simplification of what should be going on. The 
>>> core.atomic.{atomicLoad, atomicStore} functions must be intrinsics so the 
>>> compiler generate sequentially consistent code with them (i.e. not perform 
>>> certain reorderings). Then there are loads and stores with weaker 
>>> consistency semantics (acquire, release, acquire/release, and consume).
>> 
>> No.  These functions all contain volatile asm blocks.  If the compiler 
>> respected the "volatile" it would be enough.
> 
> It is sufficient for monocore and mostly correct for x86. But isn't enough.
> 
> volatile isn't for concurency, but memory mapping.

Traditionally, the term "volatile" is for memory mapping.  The description of 
"volatile" for D1, though, would have worked for concurrency.  Or is there some 
example you can provide where this isn't true?

Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 15, 2012, at 5:10 AM, deadalnix  wrote:

> Le 14/11/2012 23:21, Andrei Alexandrescu a écrit :
>> On 11/14/12 12:00 PM, Sean Kelly wrote:
>>> On Nov 14, 2012, at 6:16 AM, Andrei
>>> Alexandrescu wrote:
>>> 
 On 11/14/12 1:20 AM, Walter Bright wrote:
> On 11/13/2012 11:37 PM, Jacob Carlborg wrote:
>> If the compiler should/does not add memory barriers, then is there a
>> reason for
>> having it built into the language? Can a library solution be enough?
> 
> Memory barriers can certainly be added using library functions.
 
 The compiler must understand the semantics of barriers such as e.g.
 it doesn't hoist code above an acquire barrier or below a release
 barrier.
>>> 
>>> That was the point of the now deprecated "volatile" statement. I still
>>> don't entirely understand why it was deprecated.
>> 
>> Because it's better to associate volatility with data than with code.
> 
> Happy to see I'm not alone on that one.
> 
> Plus, volatile and sequential consistency are 2 different beast. Volatile 
> means no register promotion and no load/store reordering. It is required, but 
> not sufficient for concurrency.

It's sufficient for concurrency when coupled with library code that does the 
hardware-level synchronization.  In short, a program has two separate machines 
doing similar optimizations on it: the compiler and the CPU.  In D we can use 
ASM to control CPU optimizations, and in D1 we had "volatile" to control 
compiler optimizations.  "volatile" was the minimum required for handling the 
compiler portion and was easy to get wrong, but it used only one keyword and I 
suspect was relatively easy to implement on the compiler side as well.

Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 15, 2012, at 5:16 AM, deadalnix  wrote:
> 
> What is the point of ensuring that the compiler does not reorder load/stores 
> if the CPU is allowed to do so ?

Because we can write ASM to tell the CPU not to.  We don't have any such 
ability for the compiler right now.

Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 15, 2012, at 7:17 AM, Andrei Alexandrescu 
 wrote:

> On 11/15/12 1:08 AM, Manu wrote:
>> 
>> Side note: I still think a convenient and fairly practical solution is
>> to make 'shared' things 'lockable'; where you can lock()/unlock() them,
>> and assignment to/from shared things is valid (no casting), but a
>> runtime assert insists that the entity is locked whenever it is
>> accessed.
> 
> This (IIUC) is conflating mutex-based synchronization with memory models and 
> atomic operations. I suggest we postpone anything related to that for the 
> sake of staying focused.

By extension, I'd suggest postponing anything related to classes as well.

Re: Binary compatibility on Linux

2012-11-15 Thread Thomas Koch
Russel Winder wrote:

> On Thu, 2012-11-15 at 10:35 +0100, Jacob Carlborg wrote:
> […]
>> > [2] wiki.debian.org/UpstreamGuide
>> 
>> I've read that page and from my understanding they prefer to use "make":
>> 
>> "Please don't use SCons"
>> "Using waf as build system is discouraged"
> 
> Comments made by people who are steeped in Autoconf/Automake and haven't
> actually used more modern systems such as SCons or Waf.
> 
> The comments on the website are almost, but not quite, totally wrong on
> all important points.
The "website" is a wiki site edited by many people over a longer time 
period. If you found points you disagree with I'd love to see a comment 
added.

I for example don't know either SCons or Waf. Maybe the information in our 
UpstreamGuide is not up to date anymore.

Have you found more issues with the text? It would be interesting for us to 
listen to the opinions of non-debian members.

Regards, Thomas Koch





Re: A working way to improve the "shared" situation

2012-11-15 Thread Sönke Ludwig
Since the "Something needs to happen with shared" thread is currently split up 
into a low level
discussion (atomic operations, memory barriers etc.) and a high level one 
(classes, mutexes), it
probably makes sense to explicitly state that this proposal here applies more 
to the latter.


Re: DConf 2013 on kickstarter.com: we're live!

2012-11-15 Thread Iain Buclaw
On 15 November 2012 15:13, Andrei Alexandrescu
 wrote:
> On 11/15/12 6:39 AM, Joseph Rushton Wakeling wrote:
>>
>> On 10/22/2012 07:25 PM, Andrei Alexandrescu wrote:
>>>
>>> Please pledge your support and encourage your friends to do the same.
>>> Hope to
>>> see you in 2013!
>>
>>
>> About that t-shirt thing -- is Kickstarter really accurate to say "US
>> only"? Or can you enter an EU address and pay shipping charges?
>
>
> I think you can, but I'm not sure. Anyhow we can arrange something - feel
> free to contribute $50 for "no reward" and then contact me to get the
> T-shirt.
>
> Andrei

I hope all attendees get a t-shirt. :-)


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: function overload on full signature?

2012-11-15 Thread Rob T
I've been wondering for a couple of years about why overloading 
stops at the argument sig in almost all languages, but so far I 
have not seen a good reason why this must be so.


From what I've read so far, the reason why full overloading is 
not being done, is because it is not being done. Other than that, 
I don't have an answer as to why it is not being done because 
clearly it can be done, and the compiler certainly has the means 
to do it already, otherwise it could not err when you assign a 
function return type to the wrong type on the LHS, ie, it most 
certainly is able to determine what the full signature is at some 
point.


So does anyone really know why it is not being done? The "it's 
too complicated" argument seems weak to me since the compiler 
already has to check for both matching argument sig as well as 
the return type, and it already does overloading on the argument 
sig. I figure "it's too complicated" only because the compiler 
was initially designed without taking full signature overloading 
into account from the very start, otherwise it would be not much 
more complicated than the regular overloading that we have now.


The argument that the compiler will get too confused seems a bit 
weak as well, since the compiler can already get confused with 
argument sig overloading, and there are certainly methods of 
working this out, otherwise function overloading would not work 
as it is now. At the end of the day, if the compiler cannot 
figure it out, then it errs, just as it does now when it cannot 
figure it out. There's already an incalculable number of ways a 
programmer can mess up a valid compile, so adding one more 
possible way to the already massive pile cannot be used as a 
reason why it should not be done.


I could argue that overloading only on function sig is too 
complicated and not worth it, but we have it, and it is useful to 
at least some or most people, so why did we stop at the argument 
sig and not go all the way? Is there a theoretical reason why we 
should stop, are their use cases that have shown that it fails to 
help the programmer or makes programming more difficult?


--rt


Re: function overload on full signature?

2012-11-15 Thread monarch_dodra

On Thursday, 15 November 2012 at 17:18:04 UTC, Rob T wrote:
I've been wondering for a couple of years about why overloading 
stops at the argument sig ...


[SNIP]

--rt


I'd say because overall, you gain *very* little out of it, and it 
costs you much more complex compiler rules.


Most of all though, I'd say it is a bad idea in and of
itself: If you overload on the return type, you open the
floodgates to call ambiguity.


I mean, are there even any real use-cases for overload on return 
type?


Re: D is awesome

2012-11-15 Thread Rob T

On Thursday, 15 November 2012 at 11:07:05 UTC, eskimo wrote:

Hey guys!

I just wanted to say that D is really really really awesome and 
I wanted

to thank everyone contributing to it.

I think what D needs the most at the moment is bug fixing so I 
am very

pleased to read the commit messages:

Fixed bug ...
Fixed bug ...
Fixed bug ...
Fixed bug ...
.

Coming from many different contributors. Also every bug I 
stumbled upon
until now had already been reported before, which means that D 
has a

very active and not too small user base already. I like that.

I just wanted to post this, because most of the time people 
post just
what not works or about possible improvements. But at least 
from time to

time one should lean back and smile on all the stuff people have
accomplished so far.

So my final words: D is awesome. vibe.d is awesome. Thank you! 
And of

course lets work together and make it even better.


I'll ditto what you said. I came across two troublesome bugs 
rather quickly as soon as I started doing some of the more meaty 
D stuff, and that really worried me a lot. But I got same day 
support from members of the D community, and found workarounds 
quickly, and both bugs have been reported as fixed in master only 
a couple of days later.


That kind of activity gives me a warm fuzzy feeling to continue 
my "gamble" to invest in D. In my case, I'm working on a 
real-world app that pays the bills, so I do have a few people who 
are eye-balling my insane decision favoring "experimental" D over 
"tried-and-true" C++. If D delivers, we'll likely not do C++ 
again unless we really REALLY have to.


I do however see some significant hurdles to overcome, such as 
fixing the issues surrounding the GC, which include problems with 
getting dynamic libs and plugins to work correctly. I know 
there's more to the list of significant issues, but these are 
problem areas that I'd like to see get resolved sooner than later.


PS: The ddoc thing using candydoc is f***ing amazing! Just tried 
it yesterday, and I love the idea of embedded documentation. By 
chance, anyone know how to create PDF's?


--rt



Re: function overload on full signature?

2012-11-15 Thread bearophile

monarch_dodra:

I mean, are there even any real use-cases for overload on 
return type?


In Haskell many functions are "overloaded" on the return type 
(like the fromString function), and it's nice. But Haskell is 
able to do it because it has a global type inferencer.


Bye,
bearophile


Re: function overload on full signature?

2012-11-15 Thread Sönke Ludwig
Am 14.11.2012 20:07, schrieb Timon Gehr:
> On 11/14/2012 06:30 PM, Rob T wrote:
>> On Wednesday, 14 November 2012 at 09:16:13 UTC, Walter Bright wrote:
 I'm not requesting this to be a feature of D, I'm only asking why it
 is not
 being done.
>>>
>>> Because types are resolved bottom-up, and if the return type were part
>>> of the overloading, there would be no sensible rule to determine the
>>> types.
>>
>> But doesn't the compiler already have to perform overload-like decision
>> making on return types in the "alias this" case, esp once multiple
>> conversions are allowed?
>>
>> class A{
>>int i;
>>bool b;
>>alias i this;
>>alias b this;
>> }
>>
>> main()
>> {
>>auto a = new A;
>>int i = a;
>>bool b = a;
>> }
>>
>> --rt
> 
> alias this is not the best example, but the necessary logic is basically 
> already in the compiler.
> Lambda parameter type deduction based on the expected type is a similar task.
> 
> It is not being done because it is not being done. Full type inference would 
> be even more fun.

In the lambda case it's return type deduction and not overload resolution. 
Those are actually two
very different things.


Re: Undefined identifier WIN32_FILE_ATTRIBUTE_DATA

2012-11-15 Thread Rainer Schuetze



On 11/15/2012 8:17 AM, Martin Drašar wrote:

Dne 15.11.2012 7:45, Rainer Schuetze napsal(a):

[...]

 >
 > import core.sys.windows.windows
 > (C:\Program Files\D\dmd2\windows\bin\..\..\src\druntime\import\core\sys\windows\windows.di)



since dmd 2.060 most of the files in druntime/import are plain copies of
the source .d files, not generated .di files. My guess is that you have
copied dmd 2.060 over an older version which included the .di files and
you are now left with a mixture of versions.

I suggest you should reinstall dmd 2.060 into an empty directory.


Hi, Rainer,

you nailed it, thanks! I've managed to overwrite two older installations
on both machines I was playing with. Clean installation did the trick.


In previous versions installing over an older version usually did not 
cause any troubles as long as you didn't use files that were not 
overwritten. dmd 2.060 is special in this regard.




There should probably be a check for previously installed versions in
the installer, so it will at least yell at you that there are problems
waiting. Is there some place where I could file an enhancement request?


Bug reports and enhancement requests go here: http://d.puremagic.com/issues/

Rainer


Re: I'm back

2012-11-15 Thread H. S. Teoh
On Thu, Nov 15, 2012 at 04:38:04AM -0800, Jonathan M Davis wrote:
> On Thursday, November 15, 2012 13:17:12 jerro wrote:
> > > std.array.array will never work with ranges with a transient front
> > > unless it somehow knew when it was and wasn't appropriate to dup,
> > > which it's not going to know purely by looking at the type of
> > > front. The creator of the range would have to tell them somehow.
> > > And even then, it wouldn't work beyond the built-in types, because
> > > there's no generic way to dup stuff.
> > 
> > Daniel was actually talking about std.byLine.map!"a.dup", which is
> > not a transient range, but would be considered transient if we did
> > what Andrei suggests.
> 
> Well, there's no way around that as far as I can see. Even if all
> ranges had to be explicitly marked as transient or not, map would be
> in a bind here, because it knows nothing about what the function it
> was given is doing, so it has no way of knowing how it affects
> transience. At minimum, it would be forced to mark itself as transient
> if the original range was (even if the function used idup), or it
> would _always_ be forced to mark it as transient (I'm not sure which).
> The only way out would be if there were a way to tell map explicitly
> to mark the resultant range as having a non-transient front.

OK, you've convinced me. The only way to take care of all these corner
cases is to make transient ranges illegal, period. Just the fact that
map!a and map!"a.dup" may be transient or not, shows that this isn't
going to be solved by any simple means. None of our proposals so far
even comes close to handling this one correctly.


> By using type deduction like Andrei is suggesting, then we can at
> least deduce that map!"a.idup" has a non-transient front,

Well, this at least gives us some semblance of workability for this
particular case, though it is very leaky around the edges.


> but the only way that we'd know that map!"a.dup" was non-transient was
> if map were told somehow, and it defined an enum that the
> hasTransientFront trait could examine (i.e. we're back in the boat
> we'd be in if all ranges had to declare whether they were transient or
> not). So, as long as we can have transient fronts, map!"a.dup" is
> screwed, which may or may not be a problem.

This is not good, because it relies on the user to declare whether or
not something is transient when they aren't even the implementor of the
delegate passed to map. It's one thing to require users to declare their
ranges transient or not, but it's quite another thing to require them to
tell map whether or not a.dup is transient (where a.dup can be
substituted with an arbitrarily complex delegate which may not even be
implemented by the user).


[...]
> It's looking like this comes down to either banning ranges with
> transient fronts entirely (and changing how ByLine and ByChunk work),

This is looking like the more attractive option right now.


> or we're going to have to deal with quirks like
> array(map!"a.dup"(file.byLine())) not working whereas
> array(map!"a.idup"(file.byLine())) does work.
[...]

This is ugly. I don't like it. But at least, it does give you a compile
time error when array requires a non-transient range, but gets a
transient one. Better than subtle runtime breakage, for sure.


T

-- 
It said to install Windows 2000 or better, so I installed Linux instead.


Verified documentation comments

2012-11-15 Thread bearophile
Most of the slides of the recent 2012 LLVM Developers' Meeting 
are not yet available. But there are the slides of the "Parsing 
Documentation Comments in Clang" talk by Dmitri Gribenko:


http://llvm.org/devmtg/2012-11/Gribenko_CommentParsing.pdf


With this feature added to Clang (you need the -Wdocumentation switch to
activate it; for performance, comments are only parsed in that case), some
C++ code with documentation comments like this:


/// \brief Does something with \p str.
/// \param [in] Str the string.
/// \returns a modified string.
void do_something(const std::string &str);


Generates "notes" or warnings like these, which help keep such comments
better aligned with the code. Something similar is probably possible in D
with DDoc:


example.cc:4:17: warning: parameter ’Str’ not found
in the function declaration [-Wdocumentation]
/// \param [in] Str the string.
^~~
example.cc:5:6: warning: ’\returns’ command used
in a comment that is attached to a function
returning void [-Wdocumentation]
/// \returns a modified string.
~^~


Or like this:


/// \param x value of X coordinate.
/// \param x value of Y coordinate.
void do_something(int x, int y);

example.cc:2:12: warning: parameter ’x’ is already
documented [-Wdocumentation]
/// \param x value of Y coordinate.
   ^


Currently in D if you have a documentation comment like this it 
generates no warnings or notes:


/**
* Params:
*   x = is for this
*   and not for that
*   x = is for this
*   and not for that
*   y = is for that
*
* Returns: The contents of the file.
*/
void foo(int x) {}

void main() {}


Bye,
bearophile


Re: function overload on full signature?

2012-11-15 Thread Rob T
On Thursday, 15 November 2012 at 17:33:24 UTC, monarch_dodra 
wrote:
I'd say because overall, you gain *very* little out of it, and 
it costs you much more complex compiler rules.




But how little, and for how much extra cost? Overloading already 
has a cost to it, and it's really difficult for me to understand 
why adding return type to the mix has to be many times more 
costly. I will confess that I'm not a compiler designer, but I 
can still try to imagine what would be needed. Already the 
compiler MUST ensure that the return type is valid, so we're 
essentially already there from what I can see.


Most of all though, I'd say it is a bad idea in and out of 
itself: If you overload on the return type, you open the 
floodgates to call ambiguity.


Sure, but not much more so than we have already with the current
overloading system, and the compiler can certainly prevent an
invalid compile from happening when there is even a hint of 
ambiguity, as it does already with current overloading. Besides I 
would expect such a feature to be used by advanced programmers 
who know what they are doing because overloading in general is an 
advanced feature and it is certainly not for the easily confused.


I mean, are there even any real use-cases for overload on 
return type?


Yes, I've wanted this for a few years, and I have used a similar 
feature successfully through C++ class operator conversions.


I brought up the example of operator conversion for classes in 
C++. I know some of you have said it's not the same thing, but 
IMO it is the same thing.


int x = a;
string y = a;

Does "a" represent a class or a function? Why should it matter?

class A
{
   int i;
   string s;
   alias i this;
   alias s this; // ouch D does not allow it!
   ...
}

UFCS

int convert( A a )
{
   return a.i;
}

string convert( A a )
{
   return a.s;
}

int i = a.convert;
string s = a.convert;

A real-world use case example is to implement Variant types more 
naturally, where you could do the above and have it convert to 
int or string (and other types) on demand depending on the 
validity of data type. Obviously it will run-time error when the 
type cannot be converted, or perform whatever logic the 
programmer desires.


--rt



Re: I'm back

2012-11-15 Thread H. S. Teoh
On Thu, Nov 15, 2012 at 02:14:15PM +0100, eskimo wrote:
> On Wed, 2012-11-14 at 18:31 -0800, Andrei Alexandrescu wrote:
> > > array(map!"a.dup"(stdin.byLine()))
> 
> As it seems there is a good way of handling ranges with transient
> front for algorithms that need a persistent front?
> 
> Why not simply document any transient range to be transient (should be
> anyway) and add the little hint to map. Also note that some algorithms
> might not work as expected with transient fronts. In addition, at
> least the algorithms in phobos should state in their documentation
> whether they rely on non transient front or not.

This is better than nothing, of course, but still, relying purely on
documentation is not desirable if we can do better. Though at this
point, it looks like we can't, so this may be the only option left.


[...]
> On the other hand if an algorithm depends unnecessarily on non
> transient fronts it should be fixed.

Definitely! I have a fix for std.algorithm.joiner already, and there are
a few others that can be fixed without too much effort (I hope).


> If there are many algorithms which can be more efficient with the
> dependency on non transient front, we could simply provide a second
> module, called std.transalgorithm (or something) offering dedicated
> algorithms for transient fronts. (So people don't have to role their
> own)

AFAIK, none of the algorithms will be more or less efficient depending
on whether non-transience can be assumed. It's just a matter of
reordering operations (don't call .popFront until .front is used); a bit
trickier to write the code, but doesn't change the asymptotic
complexity. The algorithms that *are* affected are those that can't work
with transient ranges anyway, so it doesn't really matter.
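
A minimal sketch of the reordering (illustrative only, not actual Phobos
code): the element just has to be fully consumed before popFront is called.

import std.array, std.stdio;

void printAll(R)(R r)
{
    for (; !r.empty; r.popFront()) // advance only after front has been used
    {
        writeln(r.front); // safe even if front's buffer gets reused later
    }
}

void main()
{
    printAll([1, 2, 3]);
}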


> I think this is a very clean and straight forward solution. If you
> want something that simply works you just use map!"a.dup" ( or
> whatever you need to copy your elements) and don't care. If you want
> performance then you would have to check what algorithms to use and
> have a look at std.transalgorithm.
[...]

I don't like duplicating a whole bunch of algorithms in transalgorithm.

However, there *may* be something to the idea of splitting up
std.algorithm so that those algorithms that aren't sensitive to
transience are in one module, and the fragile algorithms in another
module. Then one module can be clearly marked as usable with *all*
ranges, and the other as usable only with non-transient ranges.


T

-- 
Be in denial for long enough, and one day you'll deny yourself of things you 
wish you hadn't.


Re: Verified documentation comments

2012-11-15 Thread Andrej Mitrovic
On 11/15/12, bearophile  wrote:
> Currently in D if you have a documentation comment like this it
> generates no warnings or notes

So you open dlang.org, hit the edit button, and fix it. Doing
semantics in comments is beyond overkill.

And soon enough we won't have to use "---"-style comments for code
snippets anymore because the compiler will auto-insert the code from
the next ddoc'ed unittests as ddoc'ed code samples (there is a pull
ready but it requires a review, and perhaps a rewrite since the
implementation is cheating a little bit).


Re: DConf 2013 on kickstarter.com: we're live!

2012-11-15 Thread Michael Eisendle
It would be really awesome if you could ship the shirts to the
EU. I pledged $50 nonetheless; even if there are only recordings, it
will be awesome enough :)


On Thursday, 15 November 2012 at 15:13:28 UTC, Andrei 
Alexandrescu wrote:

On 11/15/12 6:39 AM, Joseph Rushton Wakeling wrote:

On 10/22/2012 07:25 PM, Andrei Alexandrescu wrote:
Please pledge your support and encourage your friends to do 
the same.

Hope to
see you in 2013!


About that t-shirt thing -- is Kickstarter really accurate to 
say "US

only"? Or can you enter an EU address and pay shipping charges?


I think you can, but I'm not sure. Anyhow we can arrange 
something - feel free to contribute $50 for "no reward" and 
then contact me to get the T-shirt.


Andrei





Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Jacob Carlborg

On 2012-11-15 11:52, Manu wrote:


Interesting concept. Nice idea, could certainly be useful, but it
doesn't address the problem as directly as my suggestion.
There are still many problem situations, for instance, any time a
template is involved. The template doesn't know to do that internally,
but under my proposal, you lock it prior to the workload, and then the
template works as expected. Templates won't just break and fail whenever
shared is involved, because assignments would be legal. They'll just
assert that the thing is locked at the time, which is the programmers
responsibility to ensure.


I don't understand how a template would cause problems.

--
/Jacob Carlborg


Re: Growing a Language (applicable to @attribute design)

2012-11-15 Thread Era Scarecrow
On Wednesday, 14 November 2012 at 22:23:17 UTC, Walter Bright 
wrote:

On 11/14/2012 3:06 AM, Simen Kjaeraas wrote:
But the syntax for built-in types is better, in that you don't 
need to write:


auto x = int(1);



If you're going to argue that D should have some facility to 
create user-defined literals that are arbitrary sequences of 
arbitrary characters, I think you're taking Guy's advice way 
beyond the breaking point.


 Hmmm... Correct me if I'm wrong, but you can create/use 
opAssign, correct? Although that doesn't work during 
initialization...


struct MyInt
{
  int i;
  ref MyInt opAssign(int rhs) {
i = rhs;
return this;
  }
}

MyInt x = MyInt(10);
MyInt y; // = 15; //cannot implicitly convert
y = 15;

writeln(x);
writeln(y);



Re: What's the deal with __thread?

2012-11-15 Thread Walter Bright

On 11/15/2012 6:46 AM, Alex Rønne Petersen wrote:

I think most people are aware of this 'quirk' from what I've seen in binding
projects, so it's probably not a big deal.



Also, remember that C code can now have thread local globals, too. Both are 
expressible in D, it's just that the default is reversed.


Re: What's the deal with __thread?

2012-11-15 Thread Walter Bright

On 11/15/2012 6:42 AM, Don Clugston wrote:

On 15/11/12 11:54, Walter Bright wrote:

On 11/15/2012 2:28 AM, Don Clugston wrote:

However, there is one case in the test suite which is unclear to me:

extern(C) __thread int x;

Is there any other way to do this?


extern(C) int x;



What about extern(C) variables which are not thread local?
(which I think would be the normal case).
Then from a C header,

extern(C) int x;

must become:

extern(C) __gshared int x;


That's right. extern(C) doesn't change the storage class.


in D. It's a very rare case, I guess, but it's one of those situations where D
code silently has different behaviour from identical C code.




Re: Binary compatibility on Linux

2012-11-15 Thread Jacob Carlborg

On 2012-11-15 17:23, Russel Winder wrote:


Comments made by people who are steeped in Autoconf/Automake and haven't
actually used more modern systems such as SCons or Waf.

The comments on the website are almost, but not quite, totally wrong on
all important points.


I'm not saying that they're right or wrong. I'm saying that they're 
there and it's obviously someone's opinion. It also indicates that 
something that doesn't use Makefiles is not accepted or harder to get 
accepted.


--
/Jacob Carlborg


Re: Binary compatibility on Linux

2012-11-15 Thread Russel Winder
[[I suspect this is getting way off-topic for this list, so if
instructed to take it elsewhere will be happy to do so as long as
elsewhere is defined.]]

On Thu, 2012-11-15 at 17:52 +0100, Thomas Koch wrote:
[…]
> The "website" is a wiki site edited by many people over a longer time 
> period. If you found points you disagree with I'd love to see a comment 
> added.

"Please don't use SCons: we will have to re-implement many standard
features of autoconf/automake, including DESTDIR, out of tree builds,
cleaning and more."

This just shows that the Debian system is so rooted in Autoconf/Automake
that the mindset is to oppose anything that isn't.  SCons supports out
of tree builds far better than Autoconf/Automake.  SCons supports
cleaning far better than Autoconf/Automake, just differently. What is
this "more"? Why is DESTDIR so important? SCons has good ways of doing
installation, just differently.

The problem here is that Debian gives no guidance to people who want to
use SCons how to write their SCons builds to be harmonious with the
Debian way of doing things. Instead the Debian system says "we are
Autoconf/Autotools, so don't use SCons". 


"Using waf as build system is discouraged. One of the reasons is the
recommendation to ship a waf executable in every single package using
it, instead of using a system wide one. Also note that just shipping the
waf executable (which contains a binary blob) is considered to be not
compliant with the Debian Free Software guidelines by the FTP Team.
Please see #645190 and UnpackWaf for more details on the issue and how
to avoid it, if you have to use waf."

It is true that Thomas pushes the "carry the build system with the
project" line. In fact Gradle has done something along these lines as
well. Indeed SCons supports this way of working. In a global context, it
is a very good idea, even if it is conflict with the Debian way of
working.  But like SCons, Waf works very well with an installed Waf, the
project supplied Waf can be ignored.  The Waf executable is not a binary
blob really, it is just an encoded source distribution which has to be
decoded. If the people had investigated properly this comment would just
not have been made. Actually the comments on the indicated issue explain
this very clearly.  Sadly other comments wilfully misrepresent the
status quo.

> I for example don't know either SCons or Waf. Maybe the information in our 
> UpstreamGuide is not up to date anymore.

To be honest, the comments never were reasonable, they were founded on
prejudice and lack of research.  If the instructions were "We like
Autotools/Automake and are not prepared to work with anything else." it
would be more acceptable as being opinionated, honest, and a statement
to people how Debian worked. This would be far more acceptable/better
than the FUD that is there.

> Have you found more issues with the text? It would be interesting for us to 
> listen to the opinions of non-debian members.

I am a Debian Unstable user and fan. I hate Autotools/Automake.
Therefore I do not get involved in building packages for Debian, I am
just a freeloading user, total fretard ;-) I have though been known to
build packages and put them in my own repository. I'm sometimes selfish
like that :-) 

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Binary compatibility on Linux

2012-11-15 Thread Russel Winder
On Thu, 2012-11-15 at 20:42 +0100, Jacob Carlborg wrote:
[…]
> I'm not saying that they're right or wrong. I'm saying that they're 
> there and it's obviously someones opinion. It also indicates that 
> something that doesn't use Makefiles is not accepted or harder to get 
> accepted.

I just submitted an email on this which I think answers the point. I
worry that it is way off-topic for this list though. The summary is that
Debian is Autotools/Automake focussed and any other build is a problem
for them. I think this is sad, but it is Debian's privilege to be
dictatorial on this for the Debian repository.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Growing a Language (applicable to @attribute design)

2012-11-15 Thread Jacob Carlborg

On 2012-11-15 20:32, Era Scarecrow wrote:


  Hmmm... Correct me if I'm wrong, but you can create/use opAssign,
correct? Although that doesn't work during initialization...

struct MyInt
{
   int i;
   ref MyInt opAssign(int rhs) {
 i = rhs;
 return this;
   }
}

MyInt x = MyInt(10);
MyInt y; // = 15; //cannot implicity convert
y = 15;


That's what a constructor is for:

struct MyInt
{
int i;

this (int i)
{
this.i = i;
}

ref MyInt opAssign(int rhs) {
i = rhs;
return this;
}
}

void main()
{
MyInt i = 3;
}

--
/Jacob Carlborg


Re: Verified documentation comments

2012-11-15 Thread Marco Leise
Am Thu, 15 Nov 2012 20:15:15 +0100
schrieb Andrej Mitrovic :

> Doing semantics in comments is beyond overkill.

They are DDoc and attached to a symbol. I've seen IDEs give
information on errors in documentation comments on the fly.
If at some point we can also automatically document thrown
exceptions I'm happy :)
I'm all for compiler warnings where they are cheap. Why wait
for someone to tell you that your documentation has obvious
errors that could have been statically checked during its
generation?
Let's add this as a nice-to-have on the new Wiki. Someone who
is interested in hacking on DMD can pick it up then.

-- 
Marco



Re: function overload on full signature?

2012-11-15 Thread Timon Gehr

On 11/15/2012 07:09 PM, Sönke Ludwig wrote:

Am 14.11.2012 20:07, schrieb Timon Gehr:

On 11/14/2012 06:30 PM, Rob T wrote:

On Wednesday, 14 November 2012 at 09:16:13 UTC, Walter Bright wrote:

I'm not requesting this to be a feature of D, I'm only asking why it
is not
being done.


Because types are resolved bottom-up, and if the return type were part
of the overloading, there would be no sensible rule to determine the
types.


But doesn't the compiler already have to perform overload-like decision
making on return types in the "alias this" case, esp once multiple
conversions are allowed?

class A{
int i;
bool b;
alias i this;
alias b this;
}

main()
{
auto a = new A;
int i = a;
bool b = a;
}

--rt


alias this is not the best example, but the necessary logic is basically 
already in the compiler.
Lambda parameter type deduction based on the expected type is a similar task.

It is not being done because it is not being done. Full type inference would be 
even more fun.


In the lambda case it's return type deduction and not overload resolution. 
Those are actually two
very different things.



Yes, lambda _return_ type deduction is less related, but I have never 
claimed otherwise.


Another case that shows how a compiler must be able to take into account 
the left hand side of an assignment in order to type check the right 
hand side:


int foo(int);
double foo(int);

void main(){
    double function(int) fun = &foo;
}


Re: Growing a Language (applicable to @attribute design)

2012-11-15 Thread Joseph Rushton Wakeling

On 11/15/2012 11:54 AM, Walter Bright wrote:

size_t x = 1;


Complete misunderstanding there -- I'd interpreted Simen's remark as saying that 
e.g. auto x = 1; would automatically assign the correct type where builtins were 
concerned, and I was pointing out that this wouldn't cover all builtins.  Though 
I guess auto x = 1UL; would work.


I wasn't asking how to create a size_t per se, which I do know how to do ... :-)

I once came an awful cropper due to lack of UL in an integer assignment.  I'd 
got a bit of C(++) code like this:


 size_t p = 1 << m;

where m was chosen such that p would be the largest power of 2 on the system 
that could (i) be multiplied by 2 without integer wraparound and (ii) still be within 
the range of the uniform integer random number generator in use.


And that worked fine ... until I installed a 64-bit OS, and suddenly, all the 
numbers were coming out different.


They shouldn't have been, because the RNG in use was still based around int32_t 
and so the same constraints on m and p should have been in place ... and then I 
discovered what was happening: 1 << m; wasn't taking a size_t (unsigned long) 
and bitshifting it by m places, it was taking a regular int and bitshifting it 
by m places ... which given the value of m, was causing integer wraparound, the 
result of which was then converted to a size_t.


It just so happened that on 32-bit this was taking the value back to where it 
was supposed to be anyway.  But on 64-bit the wraparound made 1 << m a negative 
number which in turn corresponded to a far too _large_ value when converted into 
a size_t.


And so I learned that I had to use 1UL << m; instead ... :-P
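
For anyone who wants the pitfall in isolation, here is a minimal D sketch
of the same promotion trap (assuming m = 31 for concreteness; the original
was C++, but D promotes the shift operands the same way):

import std.stdio;

void main()
{
    int m = 31;

    size_t bad  = 1 << m;   // shift happens in int, wraps to int.min,
                            // which sign-extends to a huge 64-bit size_t
    size_t good = 1UL << m; // shift happens in ulong: 2147483648

    writefln("bad  = %s", bad);
    writefln("good = %s", good);
}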


Re: I'm back

2012-11-15 Thread Timon Gehr

On 11/14/2012 08:32 PM, Jonathan M Davis wrote:

On Wednesday, November 14, 2012 20:18:26 Timon Gehr wrote:

That is a very imprecise approximation. I think it does not cover any
ground: The day eg. 'array' will require this kind of non-transient
element range is the day where I will write my own.


std.array.array _cannot_ work with a transient front. ...


It can work if 'transient' is over-approximated as suggested in the 
parent post.




Re: I'm back

2012-11-15 Thread Timon Gehr

On 11/14/2012 11:18 PM, Andrei Alexandrescu wrote:

On 11/14/12 11:18 AM, Timon Gehr wrote:

On 11/14/2012 06:43 PM, Andrei Alexandrescu wrote:

On 11/14/12 7:29 AM, H. S. Teoh wrote:

But since this isn't going to be fixed properly, then the only solution
left is to arbitrarily declare transient ranges as not ranges (even
though the concept of ranges itself has no such implication, and many
algorithms don't even need such assumptions), and move on. We will just
have to put up with an inferior implementation of std.algorithm and
duplicate code when one*does* need to work with transient ranges. It is
not a big loss anyway, since one can simply implement one's own library
to deal with this issue properly.


What is your answer to my solution?

transient elements == input range && not forward range && element type
has mutable indirections.

This is testable by any interested clients, covers a whole lot of
ground, and has a good intuition behind it.


Andrei


That is a very imprecise approximation. I think it does not cover any
ground: The day eg. 'array' will require this kind of non-transient
element range is the day where I will write my own.


What would be an example where array would have trouble with using this
definition?

Andrei


import std.array, std.range, std.algorithm, std.stdio, std.conv;

class C{
    int x;
    this(int x){ this.x = x; }
    string toString(){ return "C("~to!string(x)~")"; }
}

void main(){
    auto a = iota(0,100).map!(a=>new C(a)).array;
    writeln(a);
}
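
For reference, the test Andrei proposed above could be spelled as a
client-side trait, roughly like this (hasTransientFront is a made-up
name, and hasAliasing stands in for "element type has mutable
indirections"):

import std.range, std.traits;

enum hasTransientFront(R) =
    isInputRange!R && !isForwardRange!R &&
    hasAliasing!(ElementType!R);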


A simple question

2012-11-15 Thread Stugol
When I post on these forums to ask for new features (e.g. 
iterators), you say that you won't be adding any new features at 
the moment, and that you are instead concentrating on making the 
language stable and usable.


However, when I post on these forums to ask for bugs to be fixed 
(e.g. the defective MODULE keyword, or the linker not supporting 
spaces in paths), you say that's not going to happen anytime soon.


So what the fuck's the point? D is a great language, and I really 
want to use it, but it doesn't work. And when I post here about 
its flaws and limitations, I get flamed.




Re: Something needs to happen with shared, and soon.

2012-11-15 Thread David Nadlinger
On Wednesday, 14 November 2012 at 17:54:16 UTC, Andrei 
Alexandrescu wrote:
That is correct. My point is that compiler implementers would 
follow some specification. That specification would contain 
information that atomicLoad and atomicStore must have special 
properties that put them apart from any other functions.


What are these special properties? Sorry, it seems like we are 
talking past each other…


[1] I am not sure where the point of diminishing returns is here,
although it might make sense to provide the same options as C++11.
If I remember correctly, D1/Tango supported a lot more levels of
synchronization.


We could start with sequential consistency and then explore 
riskier/looser policies.


I'm not quite sure what you are saying here. The functions in 
core.atomic already exist, and currently offer four levels (raw, 
acq, rel, seq). Are you suggesting to remove the other options?
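
For readers who have not used them, here is a minimal sketch of the
core.atomic calls under discussion. The ordering names follow the
current druntime spelling (MemoryOrder.raw/acq/rel/seq); the enum was
spelled differently back then, so treat the exact identifiers as an
assumption.

import core.atomic;

shared bool ready;
shared int  payload;

void producer()
{
    atomicStore!(MemoryOrder.raw)(payload, 42);  // relaxed store of the data
    atomicStore!(MemoryOrder.rel)(ready, true);  // release: publishes payload
}

void consumer()
{
    while (!atomicLoad!(MemoryOrder.acq)(ready)) {}  // acquire: pairs with the release
    assert(atomicLoad!(MemoryOrder.raw)(payload) == 42);
}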


David


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread David Nadlinger

On Thursday, 15 November 2012 at 16:43:14 UTC, Sean Kelly wrote:
On Nov 15, 2012, at 5:16 AM, deadalnix  
wrote:


What is the point of ensuring that the compiler does not 
reorder load/stores if the CPU is allowed to do so ?


Because we can write ASM to tell the CPU not to.  We don't have 
any such ability for the compiler right now.


I think the question was: Why would you want to disable compiler 
code motion for loads/stores which are not atomic, as the CPU 
might ruin your assumptions anyway?


David


Re: Verified documentation comments

2012-11-15 Thread Brian Schott

On Thursday, 15 November 2012 at 20:58:55 UTC, Marco Leise wrote:

On Thu, 15 Nov 2012 20:15:15 +0100, Andrej Mitrovic wrote:


Doing semantics in comments is beyond overkill.


They are DDoc and attached to a symbol. I've seen IDEs give
information on errors in documentation comments on the fly.
If at some point we can also automatically document thrown
exceptions, I'm happy :)
I'm all for compiler warnings where they are cheap. Why wait
for someone to tell you that your documentation has obvious
errors that could have been statically checked during its
generation?
Let's add this as a nice-to-have on the new Wiki. Someone who
is interested in hacking on DMD can pick it up then.


One way to solve this and similar issues may be to add D support 
to PMD (http://pmd.sourceforge.net/). Many rules can be created 
as XPath expressions, so if someone wants a new check on their 
code, they can just write it. A side effect of doing this is that 
we'd have a javacc-compatible grammar for D.


Re: A simple question

2012-11-15 Thread Jesse Phillips

On Thursday, 15 November 2012 at 21:25:03 UTC, Stugol wrote:
When I post on these forums to ask for new features (e.g. 
iterators), you say that you won't be adding any new features 
at the moment, and that you are instead concentrating on making 
the language stable and usable.


However, when I post on these forums to ask for bugs to be 
fixed (e.g. the defective MODULE keyword, or the linker not 
supporting spaces in paths), you say that's not going to happen 
anytime soon.


So what the fuck's the point? D is a great language, and I 
really want to use it, but it doesn't work. And when I post 
here about its flaws and limitations, I get flamed.


This forum isn't a bug tracking system. It is for discussion; in 
relation to bugs, that means identifying whether something really is 
a bug and deciding on the priority of that bug over other goals. To 
say that being told you won't see it any time soon is "flaming" is an 
exaggeration.


If you have a real example of flames then please do bring that 
forward, but there isn't much the community will be able to do 
about it.


There are a lot of issues in D, selection isn't always objective 
and direction isn't well documented at this point. Many times it 
can seem a voice isn't being heard, and then suddenly it's taken care of.


Really you just need to convince one person that it is a priority, 
that person will need the skill/initiative to implement it and 
submit the changes, and maybe that person is yourself. Luckily 
that list of people is growing and not shrinking.


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Andrei Alexandrescu

On 11/15/12 2:18 PM, David Nadlinger wrote:

On Thursday, 15 November 2012 at 16:43:14 UTC, Sean Kelly wrote:

On Nov 15, 2012, at 5:16 AM, deadalnix  wrote:


What is the point of ensuring that the compiler does not reorder
load/stores if the CPU is allowed to do so ?


Because we can write ASM to tell the CPU not to. We don't have any
such ability for the compiler right now.


I think the question was: Why would you want to disable compiler code
motion for loads/stores which are not atomic, as the CPU might ruin your
assumptions anyway?


The compiler does whatever it takes to ensure sequential consistency for 
shared use, including possibly inserting fences in certain places.


Andrei



Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Andrei Alexandrescu

On 11/15/12 1:29 PM, David Nadlinger wrote:

On Wednesday, 14 November 2012 at 17:54:16 UTC, Andrei Alexandrescu wrote:

That is correct. My point is that compiler implementers would follow
some specification. That specification would contain information that
atomicLoad and atomicStore must have special properties that put them
apart from any other functions.


What are these special properties? Sorry, it seems like we are talking
past each other…


For example you can't hoist a memory operation before a shared load or 
after a shared store.


Andrei


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread David Nadlinger
On Thursday, 15 November 2012 at 22:57:54 UTC, Andrei 
Alexandrescu wrote:

On 11/15/12 1:29 PM, David Nadlinger wrote:
On Wednesday, 14 November 2012 at 17:54:16 UTC, Andrei Alexandrescu wrote:
That is correct. My point is that compiler implementers would follow
some specification. That specification would contain information that
atomicLoad and atomicStore must have special properties that put them
apart from any other functions.


What are these special properties? Sorry, it seems like we are talking
past each other…


For example you can't hoist a memory operation before a shared 
load or after a shared store.


Well, to be picky, that depends on what kind of memory operation 
you mean – moving non-volatile loads/stores across volatile 
ones is typically considered acceptable.


But still, you can't move memory operations across any other 
arbitrary function call either (unless you can prove it is safe 
by inspecting the callee's body, obviously), so I don't see where 
atomicLoad/atomicStore would be special here.


David


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread David Nadlinger
On Thursday, 15 November 2012 at 22:58:53 UTC, Andrei 
Alexandrescu wrote:

On 11/15/12 2:18 PM, David Nadlinger wrote:
On Thursday, 15 November 2012 at 16:43:14 UTC, Sean Kelly wrote:
On Nov 15, 2012, at 5:16 AM, deadalnix  wrote:

What is the point of ensuring that the compiler does not reorder
load/stores if the CPU is allowed to do so ?

Because we can write ASM to tell the CPU not to. We don't have any
such ability for the compiler right now.

I think the question was: Why would you want to disable compiler code
motion for loads/stores which are not atomic, as the CPU might ruin your
assumptions anyway?


The compiler does whatever it takes to ensure sequential 
consistency for shared use, including possibly inserting fences 
in certain places.


Andrei


How does this have anything to do with deadalnix' question that I 
rephrased at all? It is not at all clear that shared should do 
this (it currently doesn't), and the question was explicitly 
about Walter's statement that shared should disable compiler 
reordering, when at the same time *not* inserting barriers/atomic 
ops. Thus the »which are not atomic« qualifier in my message.


David


Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 15, 2012, at 2:18 PM, David Nadlinger  wrote:

> On Thursday, 15 November 2012 at 16:43:14 UTC, Sean Kelly wrote:
>> On Nov 15, 2012, at 5:16 AM, deadalnix  wrote:
>>> What is the point of ensuring that the compiler does not reorder 
>>> load/stores if the CPU is allowed to do so ?
>> 
>> Because we can write ASM to tell the CPU not to.  We don't have any such 
>> ability for the compiler right now.
> 
> I think the question was: Why would you want to disable compiler code motion 
> for loads/stores which are not atomic, as the CPU might ruin your assumptions 
> anyway?

A barrier isn't always necessary to achieve the desired ordering on a given 
system.  But I'd still call out to ASM to make sure the intended operation 
happened.  I don't know that I'd ever feel comfortable with "volatile x=y" even 
if what I'd do instead is just a MOV.

Re: Something needs to happen with shared, and soon.

2012-11-15 Thread Sean Kelly
On Nov 15, 2012, at 3:05 PM, David Nadlinger  wrote:

> On Thursday, 15 November 2012 at 22:57:54 UTC, Andrei Alexandrescu wrote:
>> On 11/15/12 1:29 PM, David Nadlinger wrote:
>>> On Wednesday, 14 November 2012 at 17:54:16 UTC, Andrei Alexandrescu wrote:
 That is correct. My point is that compiler implementers would follow
 some specification. That specification would contain information that
 atomicLoad and atomicStore must have special properties that put them
 apart from any other functions.
>>> 
>>> What are these special properties? Sorry, it seems like we are talking
>>> past each other…
>> 
>> For example you can't hoist a memory operation before a shared load or after 
>> a shared store.
> 
> Well, to be picky, that depends on what kind of memory operation you mean – 
> moving non-volatile loads/stores across volatile ones is typically considered 
> acceptable.

Usually not, really.  Like if you implement a mutex, you don't want 
non-volatile operations to be hoisted above the mutex acquire or sunk below the 
mutex release.  However, it's safe to move additional operations into the block 
where the mutex is held.
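
As a concrete (if simplified) illustration of that rule, here is a minimal
spinlock sketch in D using core.atomic; it only shows which moves are
forbidden and is not meant as a production lock:

import core.atomic;

struct SpinLock
{
    shared bool locked;

    void lock()
    {
        // cas is sequentially consistent, so it acts as (at least) an
        // acquire: work inside the critical section cannot be hoisted
        // above this loop
        while (!cas(&locked, false, true)) {}
    }

    void unlock()
    {
        // release store: work inside the critical section cannot be
        // sunk below it, though moving later operations *into* the
        // held region would be harmless
        atomicStore!(MemoryOrder.rel)(locked, false);
    }
}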

Re: Verified documentation comments

2012-11-15 Thread bearophile

Marco Leise:


Let's add this as a nice-to-have on the new Wiki. Someone who
is interested in hacking on DMD can pick it up then.


I have added a suggestion in Bugzilla:
http://d.puremagic.com/issues/show_bug.cgi?id=9032

Bye,
bearophile

