>>>>> "AB" == Alan Burlison <[EMAIL PROTECTED]> writes:

AB> Chaim Frenkel wrote:
>> You aren't being clear here.
>> 
>> fetch($a)               fetch($a)
>> fetch($b)               ...
>> add                     ...
>> store($a)               store($a)
>> 
>> Now all of the perl internals are done 'safely' but the result is garbage.
>> You don't even know the result of the addition.

AB> Sorry you are right, I wasn't clear.  You are correct - the final value
AB> of $a will depend on the exact ordering of the FETCHes and STOREs in the
AB> two threads.  As I said - tough.  The problem is that defining a
AB> 'statement' is hard.  Does map or grep constitute a single statement?  I
AB> bet most perl programmers would say 'Yes'.  However I suspect it
AB> wouldn't be practical to make it auto-locking in the manner you
AB> describe.  In that case you aren't actually making anyone's life easier
AB> by adding auto-locking, as they now have a whole new problem to solve -
AB> remembering which operations are and aren't auto-locking.  Explicit
AB> locks don't require a feat of memory - they are there for all to see in
AB> the code.

I want to make _my_ life easier. I don't expect to have much communication
between threads; the more communication there is, the more performance is
lost to the sheer handshaking.

So with minimal interaction, why bother with sprinkling lock() everywhere?
You are in effect telling _all_ users that they must do
        lock($a)
        ...
        unlock($a)

around all :shared variables. Now if that has to be done, why not do it
automatically?


AB> The other issue is that auto-locking operations will inevitably be done
AB> inside explicitly locked sections.  This is firstly inefficient as it
AB> adds another level of locking, and secondly may well be prone to causing
AB> deadlocks.

Aha,

You might have missed one of my comments: a lock() within a scope would
turn off auto-locking for that scope. The user _knows_ what he wants and is
now willing to accept responsibility.
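
As a sketch of that intent, in today's threads::shared syntax (the
suppression of per-op auto-locking inside the block is the proposal under
discussion, not anything current perl actually does):

        use threads;
        use threads::shared;

        my %h :shared;

        {
            lock(%h);          # the explicit lock covers the whole update
            $h{count}++;       # under the proposal, no per-op auto-lock
            $h{total} += 10;   # would be taken while this lock is held
        }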

>> Doing that store of a value in $h, or pushing something onto @queue
>> is going to be a complex operation.  If you are going to keep a lock
>> on %h while the entire expression/statement completes, then you have
>> essentially given me an atomic operation which is what I would like.

AB> And you have given me something that I don't like, which is to make
AB> every shared hash a serialisation point.  

I'm sure you've done a lot of work in the core and serialization. But
haven't you seen that when you want to lock an entry deep in the heart
of some chain, you have to _first_ lock the chain to prevent changes?

So,
        lock(aggregate)
                fetch(key)
        lock(chain)
                fetch(chain)
        lock(value)
                fetch(value)
        unlock(value)
        unlock(chain)
        unlock(aggregate)

Actually, these could be read locks, and they might be released as soon as
they are no longer needed, though I'm not sure whether the rule of keeping
them held (e.g. for promotion to an exclusive lock) might be better. These
algorithms have already been worked on for quite a long time.
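
A sketch of that nesting with today's threads::shared (the %table layout,
the "chain" hashrefs and read_value() are purely illustrative, and the
locks are released in reverse order at scope exit rather than by explicit
unlock() calls):

        use threads;
        use threads::shared;

        # Assumed layout: %table maps keys to references to shared hashes
        # (the "chains"), each of which holds the actual values.
        my %table :shared;
        $table{foo} = shared_clone({ value => 1 });

        sub read_value {
            my ($key) = @_;
            lock(%table);                # lock the aggregate first
            my $chain = $table{$key};    # fetch the chain under that lock
            lock(%$chain);               # then lock the chain
            return $chain->{value};      # both locks drop at scope exit,
        }                                # innermost first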

AB> If I'm thinking of speeding up an app that uses a shared hash by
AB> threading it I'll see limited speedup because under your scheme,
AB> any accesses will be serialised by that damn automatic lock that I
AB> DON'T WANT!

Then just use lock() in the scope.

AB> A more common approach to locking hashes is to have a lock per
AB> chain - this allows concurrent updates to happen as long as they
AB> are on different chains.  

Don't forget that the aggregate needs to be locked before trying to
lock the chain. The aggregate may disappear underneath you unless you
lock it down.
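
A rough illustration of that per-chain idea (lock striping) in today's
threads::shared; the bucket count, the checksum used as the bucket index
and store_striped() are all made up for the example. Note that the bucket
arrays are created once and never resized, which is what lets each update
take only its own bucket's lock instead of a lock on the aggregate as a
whole:

        use threads;
        use threads::shared;

        my $NBUCKETS     = 16;
        my @bucket_locks = map { my $l; share($l); \$l } 1 .. $NBUCKETS;
        my @buckets      = map { my %h; share(%h); \%h } 1 .. $NBUCKETS;

        sub store_striped {
            my ($key, $val) = @_;
            my $i = unpack('%32C*', $key) % $NBUCKETS;  # cheap checksum -> bucket
            lock(${ $bucket_locks[$i] });               # lock only this bucket
            $buckets[$i]{$key} = $val;                  # stores to other buckets
        }                                               # can proceed concurrently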

AB> Also, I'm not clear what type of automatic lock you are intending
AB> to cripple me with - an exclusive one or a read/write lock for
AB> example.  My shared variable might be mainly read-only, so
AB> automatically taking out an exclusive lock every time I fetch its
AB> value isn't really helping me much.  

I agree with having read-only vs. exclusive locks. But do all platforms
provide this type of locking? If not, would the overhead of implementing
it kill any performance wins?

Does promoting a read-only lock to an exclusive one cost that much? If
not, the automatic lock could be read-only, with a store promoting it to
an exclusive lock for the short duration of the store.
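
Core perl's threads::shared doesn't ship a read/write lock, so the answer
depends on building one (or binding to a platform rwlock). A minimal
sketch on top of its condition variables, just to show what the read-only
vs. exclusive distinction involves; it makes no attempt at promotion,
which is exactly the deadlock-prone part:

        use threads;
        use threads::shared;

        my $readers :shared = 0;   # how many readers currently hold the lock
        my $writing :shared = 0;   # true while a writer holds it

        sub read_lock {
            lock($readers);
            cond_wait($readers) while $writing;
            $readers++;
        }

        sub read_unlock {
            lock($readers);
            $readers--;
            cond_broadcast($readers) if $readers == 0;
        }

        sub write_lock {
            lock($readers);
            cond_wait($readers) while $writing || $readers > 0;
            $writing = 1;
        }

        sub write_unlock {
            lock($readers);
            $writing = 0;
            cond_broadcast($readers);
        }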

AB> I think what I'm trying to say is please stop trying to be helpful
AB> by adding auto locking, because in most cases it will just get in
AB> the way.

Here we are arguing about which is the more common access pattern:
multiple shared or singular shared. This is an experience issue. The times
I've done threaded code, I've kept the sharing down to a minimum: mostly
work queues, or using a variable to create a critical section.
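
The work-queue style needs no explicit locking at all in current perl,
since Thread::Queue does the locking internally; the worker count and the
undef "stop" token are just conventions for this example:

        use threads;
        use Thread::Queue;

        my $queue = Thread::Queue->new();

        my @workers = map {
            threads->create(sub {
                while (defined(my $job = $queue->dequeue())) {
                    # ... process $job ...
                }
            });
        } 1 .. 4;

        $queue->enqueue($_) for 1 .. 100;
        $queue->enqueue(undef) for @workers;   # one stop token per worker
        $_->join() for @workers;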

AB> If you *really* desperately want it, I think it should be optional, e.g.
AB>    my $a : shared, auto lock;
AB> or somesuch.  This will probably be fine for those people who are using
AB> threads but who don't actually understand what they are doing.  I still
AB> however think that you haven't fully addressed the issue of what
AB> constitutes an atomic operation.

I'll take that if nothing else. (We could argue about which should be the
default, :auto_lock or :no_auto_lock, but that would just be argumentative
on my part. :-)

>> I think we all would agree that an op is atomic. +, op=, push, delete,
>> exists, etc. Yes?

AB> Sorry - no I don't agree.  As I said, what about map or grep, or sort? 
AB> I have an alternative proposal - anything that can be the target of a
AB> tie is atomic, i.e. for scalars, STORE, FETCH, and DESTROY etc.

(I'm making the assumption either that a lock() in scope turns off
auto-locking, or that :auto_lock is attached to the variable and perl is
doing the locking for me.)

All ops should be atomic. Anything beyond that I leave to others.

        $a += $b;       # Atomic
        push(@foo, $b); # Atomic

        $a = $a*3 + fn($a) + $h{$a};    # Language/Internals issue
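
In other words, even if every individual fetch and store were atomic, the
last line still reads the variable three times, and keeping those reads
consistent takes an explicit lock around the whole statement (honoured by
every other writer of the variable). A sketch, with fn() and the names
purely illustrative:

        use threads;
        use threads::shared;

        my %h :shared;
        my $x :shared = 1;                      # standing in for $a above

        sub fn { my ($v) = @_; return $v + 1 }  # illustrative only

        {
            # Per-op atomicity alone would still let another thread change
            # $x between these three reads; one advisory lock held across
            # the whole statement is what makes the result well-defined.
            lock($x);
            $x = $x * 3 + fn($x) + ($h{$x} // 0);
        }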

<chaim>
-- 
Chaim Frenkel                                        Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]                                               +1-718-236-0183
