On 06/11/12 19:27, Steven Schveighoffer wrote:
> On Mon, 11 Jun 2012 09:41:37 -0400, Artur Skawina <art.08...@gmail.com> wrote:
>
>> On 06/11/12 14:11, Steven Schveighoffer wrote:
>>> On Mon, 11 Jun 2012 07:56:12 -0400, Artur Skawina <art.08...@gmail.com>
>>> wrote:
>>>
>>>> On 06/11/12 12:35, Steven Schveighoffer wrote:
>>>
>>>>> I wholly disagree. In fact, keeping the full qualifier intact *enforces*
>>>>> incorrect code, because you are forcing shared semantics on literally
>>>>> unshared data.
>>>>>
>>>>> Never would this start ignoring shared on data that is truly shared.
>>>>> This is why I don't really get your argument.
>>>>>
>>>>> If you could perhaps explain with an example, it might be helpful.
>>>>
>>>> *The programmer* can then treat shared data just like unshared. Because every
>>>> load and every store will "magically" work. I'm afraid that after more than
>>>> two or three people touch the code, the chances of it being correct would be
>>>> less than 50%...
>>>> The fact that you can not (or shouldn't be able to) mix shared and unshared
>>>> freely is one of the main advantages of shared-annotation.
>>>
>>> If shared variables aren't doing the right thing with loads and stores,
>>> then we should fix that.
>>
>> Where do you draw the line?
>>
>>    shared struct S {
>>       int i;
>>       void* p;
>>       SomeStruct s;
>>       ubyte[256] a;
>>    }
>>
>>    shared(S)* p = ... ;
>>
>>    auto v1 = p.i;
>>    auto v2 = p.p;
>>    auto v3 = p.s;
>>    auto v4 = p.a;
>>    auto v5 = p.i++;
>>
>> Are these operations on shared data all safe? Note that if these
>> accesses would be protected by some lock, then the 'shared' qualifier
>> wouldn't really be needed - compiler barriers, that make sure it all
>> happens while this thread holds the lock, would be enough. (even the
>> order of operations doesn't usually matter in that case and enforcing
>> one would in fact add overhead)
>
> No, they should not all be safe, I never suggested that. It's impossible to
> engineer a one-size-fits-all for accessing shared variables, because it
> doesn't know what mechanism you are going to use to protect it. As you say,
> once this data is protected by a lock, memory barriers aren't needed. But
> requiring a lock is too heavy-handed for all cases. This is a good point to
> make about the current memory-barrier attempts: they just aren't
> comprehensive enough, nor do they guarantee pretty much anything except
> simple loads and stores.
>
> Perhaps the correct way to implement shared semantics is to not allow access
> *whatsoever* (except taking the address of a shared piece of data), unless
> you:
>
> a) lock the block that contains it
> b) use some library feature that uses casting-away of shared to accomplish
> the correct thing. For example, atomicOp.
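For reference, option (b) can already be approximated with core.atomic; a
minimal sketch of that style of access (the 'counter' variable and the
threshold below are made up purely for illustration, they aren't part of the
proposal):

   import core.atomic : atomicLoad, atomicOp, atomicStore;

   shared int counter;   // truly shared, not protected by any lock

   void worker()
   {
      // under the proposal above, plain reads/writes of 'counter' would be
      // rejected; every access has to go through the library instead
      atomicOp!"+="(counter, 1);           // atomic read-modify-write
      int snapshot = atomicLoad(counter);  // atomic read, result is unshared
      if (snapshot > 100)
         atomicStore(counter, 0);          // atomic write
   }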
Exactly; this is what I've been after the whole time. And I think it can be
done in most cases without casting away shared, for example by allowing the
safe conversions from/to shared of results of expressions involving shared
data, but only under certain circumstances. E.g. in methods with a shared
'this'.

> None of this can prevent deadlocks, but it does create a way to prevent
> deadlocks.
>
> If this was the case, stack data would be able to be marked shared, and you'd
> have to use option b (it would not be in a block). Perhaps for simple data
> types, when memory barriers truly are enough, and a shared(int) is on the
> stack (and not part of a container), straight loads and stores would be
> allowed.

Why? Consider the case of a function that directly or indirectly launches a
few threads and gives them the address of some local shared object. If the
current thread also accesses this object, which has to be possible, then it
must obey the same rules.

> Now, would you agree that:
>
> auto v1 = synchronized p.i;
>
> might be a valid mechanism? In other words, assuming p is lockable,
> synchronized p.i locks p, then reads i, then unlocks p, and the result type
> is unshared?

I think I would prefer

   auto v1 = synchronized(p).i;

i.e. for the synchronized expression to lock the object, return an unshared
reference, and for the object to be unlocked once this ref goes away. RLII. ;)
Which would then also allow for

   {
      auto unshared_p = synchronized(p);
      auto v1 = unshared_p.i;
      auto v2 = unshared_p.p;
      // etc
   }

and with a little more syntax sugar it could turn into

   synchronized (unshared_p = p) {
      auto v1 = unshared_p.i;
      auto v2 = unshared_p.p;
      // etc
   }

The problem with this is that it only unshares the head, which I think isn't
enough. Hmm. One approach would be to allow

   shared struct S {
      ubyte* data;
      AStruct* s1;
      shared AnotherStruct* s2;
      shared S* next;
   }

and for synchronized(s){} to drop 'shared' from any field that isn't also
marked as shared. IOW treat any 'unshared' field as owned by the object.
(An alternative could be to tag the fields that should be unshared instead.)

> Also, inside synchronized(p), p becomes tail-shared, meaning all data
> contained in p is unshared, all data referred to by p remains shared.
>
> In this case, we'd need a new type constructor (e.g. locked) to formalize the
> type.

I should have read to the end, I guess. :) You mean something like I described
above, only done by mutating the type of 'p'? That might work too. But I need
to think about this some more. Why would we need 'locked'?

> Make sense?

More and more.

artur
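To make the "synchronized(p) returns an unshared reference" idea above a bit
more concrete, here is a rough library-level sketch of it, as something that
could be written today. The names Locked, locked and example are made up, the
mutex is passed in explicitly because there is no language support for tying
it to the object, and, as noted above, this only unshares the head:

   import core.sync.mutex : Mutex;

   struct S { int i; void* p; }

   // Hypothetical scope-guard standing in for the proposed synchronized(p)
   // expression: it takes the lock on construction, exposes an unshared view
   // of the data, and releases the lock when it goes out of scope.
   struct Locked(T)
   {
      private T* payload;   // head-unshared view, only valid while locked
      private Mutex mtx;

      @disable this(this);  // copying the guard would double-unlock

      this(shared(T)* p, shared Mutex m)
      {
         mtx = cast(Mutex) m;   // cast away shared to use the mutex
         mtx.lock();
         payload = cast(T*) p;  // only OK because we now hold the lock
      }

      ~this() { if (mtx !is null) mtx.unlock(); }

      ref T get() { return *payload; }
      alias get this;            // forward member access to the unshared view
   }

   // hypothetical helper; a real design would tie the mutex to the object
   Locked!T locked(T)(shared(T)* p, shared Mutex m) { return Locked!T(p, m); }

   void example(shared(S)* p, shared Mutex m)
   {
      auto unshared_p = locked(p, m); // lock taken here
      auto v1 = unshared_p.i;         // plain unshared accesses...
      auto v2 = unshared_p.p;
   }                                  // ...lock released when unshared_p dies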