Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Dan Sugalski

At 04:21 PM 9/8/00 -0400, Chaim Frenkel wrote:
> > "DS" == Dan Sugalski <[EMAIL PROTECTED]> writes:
>
>DS> The problem with using database locking and transactions as your
>DS> model is that they're *expensive*. Amazingly so. The expense is
>DS> certainly worth it for what you get, and in many cases the expense
>DS> is hidden (at least to some extent) by the cost you pay in disk
>DS> I/O, but it's definitely there.
>
>I lost you. How is the model wrong? Perl has a resource, Databases
>have a resource. Perl does a V or P operation, so does a Database.

I didn't say it was wrong. I said it was expensive. The model's just fine, 
though there's some handwaving and punting in all the databases that handle 
this. (Mainly in deadlock handling)

>DS> Heavyweight locking schemes are fine for relatively infrequent or
>DS> expensive operations (your average DLM for cluster-wide file
>DS> access is an example) but we're not dealing with rare or heavy
>DS> operations.
>
>Even databases have to handle this problem. The granularity of the
>locking: row vs. page vs. table (there might be other schemes I don't know about).

Right, but databases are all dealing with mainly disk access. A 1ms lock 
operation's no big deal when it takes 100ms to fetch the data being locked. 
A 1ms lock operation *is* a big deal when it takes 100ns to fetch the data 
being locked...

>DS> We're dealing with very lightweight, frequent
>DS> operations. That means we need a really cheap locking scheme for
>DS> this sort of thing, or we're going to be spending most of our time
>DS> in the lock manager...
>
>The issue is correctness. Lightweight vs. heavyweight has no meaning to me.
>How do lightweight and heavyweight map to correctness? And what
>is lightweight and what is heavyweight?

Correctness is what we define it as. I'm more worried about expense.

We've really got three levels of cost here. (For all these I'm assuming 
non-distributed--when you yank in multiple machines things get funky fast, 
and that doesn't map to what perl's doing anyway)

1) At the top is the Oracle/DB level. You get locking, thread consistency 
for data being read while it's updated, deadlock detection, and rollbacks 
on failure.

2) In the middle level is VMS' lock manager. You get locking and deadlock 
detection, along with a few different flavors of locks (exclusive, read, 
write, and a few others)

3) Down at the bottom is the posix thread lock. You get locking here, and 
nothing else. Heck, you don't even get recursive locks unless you pay extra.

Each of these three levels has its own costs and guarantees. Levels 1 & 
2 are really cool and do all sorts of nifty things for you. Unfortunately 
they cost a *lot*. Great gobs of time and complexity. Core-level locking 
(the stuff we use to protect ourselves) can't use either--they're too 
expensive. I'm not sure I want to try and provide them to the user either, 
because of the complexity of their implementation.

>One thing to consider: what to do about deadlocks and the notification
>and recovery method. Without a rollback mechanism, each and every
>programmer will have to roll their own. So we either provide it or we
>have to make it easy for them to recover from a major blow.  (Unless
>you are going to simply let the threads sit in deadlock until a human
>or watchdog timer kills the entire process)

Detecting deadlocks is expensive and it means rolling our own locking 
protocols on most systems. You can't do it at all easily with PThreads 
locks, unfortunately. Just detecting a lock that blocks doesn't cut it, 
since that may well be legit, and doing a scan for circular locking issues 
every time a lock blocks is expensive.

Rollbacks are also expensive, and they can generate unbounded amounts of 
temporary data, so they're also fraught with expense and peril.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Alan Burlison

Chaim Frenkel wrote:

> No scanning. I was considering that all variables on a store would
> safe store the previous value in a thread specific holding area[*]. Then
> upon a deadlock/rollback, the changed values would be restored.
> 
> (This restoration should be valid, since the change could not have taken
> place without an exclusive lock on the variable.)
> 
> Then the execution stack and program counter would be reset to the
> checkpoint. And then restarted.

Sigh.  Think about references.  No, think harder.  See?

-- 
Alan Burlison



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Chaim Frenkel

> "DS" == Dan Sugalski <[EMAIL PROTECTED]> writes:

DS> The problem with using database locking and transactions as your
DS> model is that they're *expensive*. Amazingly so. The expense is
DS> certainly worth it for what you get, and in many cases the expense
DS> is hidden (at least to some extent) by the cost you pay in disk
DS> I/O, but it's definitely there.

I lost you. How is the model wrong? Perl has a resource, Databases
have a resource. Perl does a V or P operation, so does a Database.

DS> Heavyweight locking schemes are fine for relatively infrequent or
DS> expensive operations (your average DLM for cluster-wide file
DS> access is an example) but we're not dealing with rare or heavy
DS> operations. 

Even databases have to handle this problem. The granularity of the
locking: row vs. page vs. table (there might be other schemes I don't know about).

DS> We're dealing with very lightweight, frequent
DS> operations. That means we need a really cheap locking scheme for
DS> this sort of thing, or we're going to be spending most of our time
DS> in the lock manager...

The issue is correctness. Lightweight vs. heavyweight has no meaning to me.
How do lightweight and heavyweight map to correctness? And what
is lightweight and what is heavyweight?

We haven't yet even discussed the implementation details, we have been
waving our hands, moving between the language layer and implementation
layer.

One thing to consider: what to do about deadlocks and the notification
and recovery method. Without a rollback mechanism, each and every
programmer will have to roll their own. So we either provide it or we
have to make it easy for them to recover from a major blow.  (Unless
you are going to simply let the threads sit in deadlock until a human
or watchdog timer kills the entire process)


-- 
Chaim Frenkel                Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]   +1-718-236-0183



Re: RFC 178 (v2) Lightweight Threads(multiversionning)

2000-09-08 Thread Chaim Frenkel

> "r" == raptor  <[EMAIL PROTECTED]> writes:

r> ]- what if we don't use "locks", but multiple versions of the same variable
r> !!! What I have in mind :
r>  If there are transaction-based variables THEN we can use a multiversioning
r> mechanism like some DBs do - Interbase, for example.
r> Check here : http://216.217.141.125/document/InternalsOverview.htm
r> Just thoughts; I've not read the whole discussion.

Doesn't really help. You just move the problem to commit time. 

Remember, the final result has to be _as if_ all of the interleaved
changes were done serially (one thread finishing before the other).

If this can not be done, then one or the other thread has to be notified
of deadlock and the relevant changes thrown away.

(As a former boss liked to say, "Work is conserved."

or perhaps TANSTAAFL.)


-- 
Chaim Frenkel                Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]   +1-718-236-0183



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Chaim Frenkel

> "AB" == Alan Burlison <[EMAIL PROTECTED]> writes:

AB> Please consider carefully the potential consequences of your proposal.

I just realized that no one has submitted a language-level proposal for how
deadlocks are detected and delivered to the perl program, how they are
to be recovered from, what happens to the held locks, etc.


-- 
Chaim Frenkel                Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]   +1-718-236-0183



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Chaim Frenkel

> "AB" == Alan Burlison <[EMAIL PROTECTED]> writes:

AB> Chaim Frenkel wrote:
>> What tied scalar? All you can contain in an aggregate is a reference
>> to a tied scalar. The bucket in the aggregate is a regular bucket. No?

AB> So you don't intend being able to roll back anything that has been
AB> modified via a reference then?  And if you do intend to allow this, how
AB> will you know when to stop chasing references?  What happens if there
AB> are circular references?  How much time do you think it will take to
AB> scan a 4Gb array to find out which elements need to be checkpointed?

AB> Please consider carefully the potential consequences of your proposal.

No scanning. I was considering that all variables on a store would
safe store the previous value in a thread specific holding area[*]. Then
upon a deadlock/rollback, the changed values would be restored.

(This restoration should be valid, since the change could not have taken
place without an exclusive lock on the variable.)

Then the execution stack and program counter would be reset to the
checkpoint. And then restarted.


[*] Think of it as the transaction log.
-- 
Chaim Frenkel                Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]   +1-718-236-0183



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Chaim Frenkel

> "AB" == Alan Burlison <[EMAIL PROTECTED]> writes:

AB> Chaim Frenkel wrote:
>> You aren't being clear here.
>> 
>> fetch($a)   fetch($a)
>> fetch($b)   ...
>> add ...
>> store($a)   store($a)
>> 
>> Now all of the perl internals are done 'safely' but the result is garbage.
>> You don't even know the result of the addition.

AB> Sorry you are right, I wasn't clear.  You are correct - the final value
AB> of $a will depend on the exact ordering of the FETCHes and STOREs in the
AB> two threads.  As I said - tough.  The problem is that defining a
AB> 'statement' is hard.  Does map or grep constitute a single statement?  I
AB> bet most perl programmers would say 'Yes'.  However I suspect it
AB> wouldn't be practical to make it auto-locking in the manner you
AB> describe.  In that case you aren't actually making anyone's life easier
AB> by adding auto-locking, as they now have a whole new problem to solve -
AB> remembering which operations are and aren't auto-locking.  Explicit
AB> locks don't require a feat of memory - they are there for all to see in
AB> the code.

I want to make _my_ life easier. I don't expect to have much communication
between threads; the more communication, the more performance is lost
to the sheer handshaking.

So with minimal interaction, why bother with the sprinkling of the lock?
You are in effect telling _all_ users that they must do
lock($a)
...
unlock($a)

around all :shared variables. Now if that has to be done, why not do it
automatically?


AB> The other issue is that auto-locking operations will inevitably be done
AB> inside explicitly locked sections.  This is firstly inefficient as it
AB> adds another level of locking, and secondly may well be prone to causing
AB> deadlocks.

Aha,

You might have missed one of my comments. A lock() within a scope would
turn off the auto locking. The user _knows_ what he wants and is now
willing to accept responsibility.

>> Doing that store of a value in $h, or pushing something onto @queue
>> is going to be a complex operation.  If you are going to keep a lock
>> on %h while the entire expression/statement completes, then you have
>> essentially given me an atomic operation which is what I would like.

AB> And you have given me something that I don't like, which is to make
AB> every shared hash a serialisation point.  

I'm sure you've done a lot of work in the core and serialization. But
haven't you seen that when you want to lock an entry deep in the heart
of some chain, you have to _first_ lock the chain to prevent changes?

So, 
lock(aggregate)
fetch(key)
lock(chain)
fetch(chain)
lock(value)
fetch(value)
unlock(value)
unlock(chain)
unlock(key)
unlock(aggregate)

Actually, these could be read locks, and they might be freed as soon
as they aren't needed, though I'm not sure that the rule of holding on to
them (e.g. for promotion to an exclusive) might not be better. These
algorithms have already been worked on for quite a long time.

AB> If I'm thinking of speeding up an app that uses a shared hash by
AB> threading it I'll see limited speedup because under your scheme,
AB> any accesses will be serialised by that damn automatic lock that I
AB> DON'T WANT!

Then just use lock() in the scope.

AB> A more common approach to locking hashes is to have a lock per
AB> chain - this allows concurrent updates to happen as long as they
AB> are on different chains.  

Don't forget that the aggregate needs to be locked before trying to
lock the chain. The aggregate may disappear underneath you unless you
lock it down.

AB> Also, I'm not clear what type of automatic lock you are intending
AB> to cripple me with - an exclusive one or a read/write lock for
AB> example.  My shared variable might be mainly read-only, so
AB> automatically taking out an exclusive lock every time I fetch its
AB> value isn't really helping me much.  

I agree with having read-only vs. exclusive locks. But do all platforms
provide this type of locking? If not, would the overhead of implementing
it kill any performance wins?

Does promoting a read-only to an exclusive cost that much? So the
automatic lock would be a read-only and the store promotes to
an exclusive during the short storage period.

AB> I think what I'm trying to say is please stop trying to be helpful
AB> by adding auto locking, because in most cases it will just get in
AB> the way.

Here we are arguing about which is the more common access method:
multiple shared or singular shared. This is an experience issue.
Those times that I've done threaded code, I've kept the sharing down
to a minimum. Mostly work queues, or using a variable to create a 
critical section.

AB> If you *really* desperately want it, I think it should be optional, e.g.
AB>    my $a : shared, auto lock;
AB> or somesuch. This will probably be fine for those people who are using

Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Dan Sugalski

At 06:18 PM 9/7/00 -0400, Chaim Frenkel wrote:
> > "AB" == Alan Burlison <[EMAIL PROTECTED]> writes:
>
>AB> Chaim Frenkel wrote:
> >> The problem I have with this plan, is reconciling the fact that a
> >> database update does all of this and more. And how to do it is a known
> >> problem, its been developed over and over again.
>
>AB> I'm sorry, but you are wrong.  You are confusing transactions with
>AB> threading, and the two are fundamentally different.  Transactions are
>AB> just a way of saying 'I want to see all of these changes, or none of
>AB> them'.  You can do this even in a non-threaded environment by
>AB> serialising everything.  Deadlock avoidance in databases is difficult,
>AB> and Oracle for example 'resolves' a deadlock by picking one of the two
>AB> deadlocking transactions at random and forcibly aborting it.
>
>Actually, I wasn't. I was considering the locking/deadlock handling part
>of database engines. (Map row -> variable.)

The problem with using database locking and transactions as your model is 
that they're *expensive*. Amazingly so. The expense is certainly worth it 
for what you get, and in many cases the expense is hidden (at least to some 
extent) by the cost you pay in disk I/O, but it's definitely there.

Heavyweight locking schemes are fine for relatively infrequent or expensive 
operations (your average DLM for cluster-wide file access is an example) 
but we're not dealing with rare or heavy operations. We're dealing with 
very lightweight, frequent operations. That means we need a really cheap 
locking scheme for this sort of thing, or we're going to be spending most 
of our time in the lock manager...

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Chaim Frenkel

> "NI" == Nick Ing-Simmons <[EMAIL PROTECTED]> writes:

NI> Chaim Frenkel <[EMAIL PROTECTED]> writes:

NI> Well, if you want to place that restriction on perl6, so be it, but in perl5
NI> I can say 

NI> tie $a[4],'Something';

That I didn't realize.

NI> Indeed that is exactly how tied arrays work - they (automatically) add 
NI> 'p' magic (internal tie) to their elements.

Hmm, I always understood a tied array to be the _array_ not each individual
element.

NI> Tk apps do this all the time:

NI>  $parent->Label(-textvariable => \$somehash{'Foo'});

NI> The reference is just to get the actual element rather than a copy.
NI> Tk then ties the actual element so it can see STORE ops and update
NI> the label.

Would it be a loss not to allow tying the elements? The tie would then be
to the aggregate.

I might argue that under threading tieing to the aggregate may be 'more'
correct for coherency (locking the aggregate before accessing.)


-- 
Chaim Frenkel                Nonlinear Knowledge, Inc.
[EMAIL PROTECTED]   +1-718-236-0183



Re: A tentative list of vtable functions

2000-09-08 Thread Ken Fox

Dan Sugalski wrote:
> At 05:30 PM 8/31/00 -0400, Ken Fox wrote:
> >   before_get_value
> >   after_get_value
> >   before_set_value
> >   after_set_value
> >
> >There ought to be specializations of get_value and set_value
> >that call these hooks if they're defined -- no sense in making the
> >normal case slow.
> 
> You could override the vtable function in that case, I think.

It'd be nice if there were standard functions for this in the core.
I don't want every type author to write debugger code (which is
probably what the before_* and after_* methods will be most useful
for.)

> >We also need GC functions:
> >   children -- nested SV *s (for live object traversals)
> 
> I'm not sure. Maybe.

How else are you going to discover the internal SV references in an
object? I don't want to have a conservative GC.

> >   move -- move object to new memory location
> 
> Don't think this is needed. The base SV structures won't move, I think, and
> we should be able to move the variable's contents (hanging off the sv_any
> analog) at will.

Different types may cache internal pointers instead of offsets. Those
pointers might need to change if an object is moved. (I agree that SVs
should not move.)

> >   resize_granted   -- object's resize request was granted
> 
> I don't think this is needed--the guts of the set functions would handle
> this as needed.

An object might be smart enough to either compress itself or take
advantage of adjacent space becoming available. This method would let
the GC and the object discuss allocations.

> >Is is_same an identity function? Or does it check the type?
> 
> Identity. If two SVs pointed to the identical data, this'd return true.

Shouldn't the sv_any pointers be the same then?

> >What purpose do the logical_* functions serve? Shouldn't there just be
> >one is_true function that the logical_* ops call?
> 
> In case one wants to overload
> 
>@foo || @bar
> 
> to do a piecewise logical or of the two arrays.

I don't like the idea of using the ops vtbl for syntax overloading. For
one thing there aren't individual ops for every syntax, so the vtbl alone
isn't sufficient. Secondly, I think vtbl ops should have very consistent
semantics to allow the optimizer some room to re-arrange things. Short
circuiting should not be customizable by each type for example.

- Ken



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Steven W McDougall

> Example
> 
> @a = ();
> async { push @a, (1, 2, 3) };
> push @a, (4, 5, 6);
> print @a;
> 
> Possible output: 142536


Actually, I'm not sure I understand this.
Can someone show how to program push() on a stack machine?


- SWM



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Steven W McDougall

> > You aren't being clear here.
> > 
> > fetch($a)   fetch($a)
> > fetch($b)   ...
> > add ...
> > store($a)   store($a)
> > 
> > Now all of the perl internals are done 'safely' but the result is garbage.
> > You don't even know the result of the addition.
> 
> Sorry you are right, I wasn't clear.  You are correct - the final value
> of $a will depend on the exact ordering of the FETCHes and STOREs in the
> two threads.  

...I hadn't been thinking in terms of the stack machine. OK, we
could put the internal locks around fetch and store. Now, can everyone
deal with these examples?

Example

$a = 0;
$thread = new Thread sub { $a++ };
$a++;
$thread->join;
print $a;

Output: 1 or 2


Example

@a = ();
async { push @a, (1, 2, 3) };
push @a, (4, 5, 6);
print @a;

Possible output: 142536


- SWM



Re: RFC 178 (v2) Lightweight Threads(multiversionning)

2000-09-08 Thread raptor

> I don't even want to take things out a step to guarantee atomicity at the
> statement level. There are speed issues there, since it means every
> statement will need to conditionally lock everything. (Since we can't
> necessarily know at compile time which variables are shared and which
> aren't) There are also lock ordering issues, which get us deadlock fun.
> And, of course, let's not forget some of the statements can last a *long*
> time and cause all sorts of fun--eval comes to mind, as do some of the
> funkier regex things.

]- what if we don't use "locks", but multiple versions of the same variable
!!! What I have in mind :
 If there are transaction-based variables THEN we can use a multiversioning
mechanism like some DBs do - Interbase, for example.
Check here : http://216.217.141.125/document/InternalsOverview.htm
Just thoughts; I've not read the whole discussion.
=
iVAN
[EMAIL PROTECTED]
=




Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Alan Burlison

Chaim Frenkel wrote:

> You aren't being clear here.
> 
> fetch($a)   fetch($a)
> fetch($b)   ...
> add ...
> store($a)   store($a)
> 
> Now all of the perl internals are done 'safely' but the result is garbage.
> You don't even know the result of the addition.

Sorry you are right, I wasn't clear.  You are correct - the final value
of $a will depend on the exact ordering of the FETCHes and STOREs in the
two threads.  As I said - tough.  The problem is that defining a
'statement' is hard.  Does map or grep constitute a single statement?  I
bet most perl programmers would say 'Yes'.  However I suspect it
wouldn't be practical to make it auto-locking in the manner you
describe.  In that case you aren't actually making anyone's life easier
by adding auto-locking, as they now have a whole new problem to solve -
remembering which operations are and aren't auto-locking.  Explicit
locks don't require a feat of memory - they are there for all to see in
the code.

The other issue is that auto-locking operations will inevitably be done
inside explicitly locked sections.  This is firstly inefficient as it
adds another level of locking, and secondly may well be prone to causing
deadlocks.

> AB> I think you are getting confused between the locking needed within the
> AB> interpreter to ensure that its internal state is always consistent and
> AB> sane, and the explicit application-level locking that will have to be in
> AB> multithreaded perl programs to make them function correctly.
> AB> Interpreter consistency and application correctness are *not* the same
> AB> thing.
> 
> I just said the same thing to someone else. I've been assuming that
> perl would make sure it doesn't dump core. I've been arguing for having
> perl do a minimal guarantee at the user level.

Right - I think everyone is in agreement that there are two types of
locking under discussion, and that the first - internal locking to
ensure interpreter consistency - is a must.  The debate is now over how
much we try to do automatically at the application level.

> Sorry, internal consistency isn't enough.
> 
> Doing that store of a value in $h, or pushing something onto @queue
> is going to be a complex operation.  If you are going to keep a lock
> on %h while the entire expression/statement completes, then you have
> essentially given me an atomic operation which is what I would like.

And you have given me something that I don't like, which is to make
every shared hash a serialisation point.  If I'm thinking of speeding up
an app that uses a shared hash by threading it I'll see limited speedup
because under your scheme, any accesses will be serialised by that damn
automatic lock that I DON'T WANT!  A more common approach to locking
hashes is to have a lock per chain - this allows concurrent updates to
happen as long as they are on different chains.  Also, I'm not clear
what type of automatic lock you are intending to cripple me with - an
exclusive one or a read/write lock for example.  My shared variable
might be mainly read-only, so automatically taking out an exclusive lock
every time I fetch its value isn't really helping me much.  I think what
I'm trying to say is please stop trying to be helpful by adding auto
locking, because in most cases it will just get in the way.

If you *really* desperately want it, I think it should be optional, e.g.
   my $a : shared, auto lock;
or somesuch.  This will probably be fine for those people who are using
threads but who don't actually understand what they are doing.  I still
however think that you haven't fully addressed the issue of what
constitutes an atomic operation.

> I think we all would agree that an op is atomic. +, op=, push, delete,
> exists, etc. Yes?

Sorry - no I don't agree.  As I said, what about map or grep, or sort? 
I have an alternative proposal - anything that can be the target of a
tie is atomic, i.e. for scalars, STORE, FETCH, and DESTROY etc.

-- 
Alan Burlison



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Alan Burlison

Chaim Frenkel wrote:

> What tied scalar? All you can contain in an aggregate is a reference
> to a tied scalar. The bucket in the aggregate is a regular bucket. No?

So you don't intend being able to roll back anything that has been
modified via a reference then?  And if you do intend to allow this, how
will you know when to stop chasing references?  What happens if there
are circular references?  How much time do you think it will take to
scan a 4Gb array to find out which elements need to be checkpointed?

Please consider carefully the potential consequences of your proposal.

-- 
Alan Burlison



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Alan Burlison

Jarkko Hietaniemi wrote:

> Being multithreaded is not difficult, impossible, or bad as such.
> It's the make-believe that we can make all data automagically both
> shared and safe that is folly.  Data sharing (also known as code
> synchronization) should be explicit; explicitly controlled by the
> programmer.

Exactly.  The intention behind the proposal to do auto-locking is
praiseworthy - to make the programmer's life easier.  However, the
suggested solution is more akin to killing him with kindness.

-- 
Alan Burlison



Event model for Perl...

2000-09-08 Thread Grant M.

I am reading various discussions regarding threads, shared objects,
transaction rollbacks, etc., and was wondering if anyone here had any
thoughts on instituting an event model for Perl6? I can see an event model
allowing for some interesting solutions to some of the problems that are
currently being discussed.
Grant M.





Re: RFC 130 (v4) Transaction-enabled variables for Perl6

2000-09-08 Thread dLux

/---  On Fri,  Sep  08, 2000  at  06:59:24AM  +, Nick  Ing-Simmons
wrote:
|
| >>> eval {
| >>> my($_a, $_b, $_c) = ($a, $b, $c);
| >>> ...
| lock $abc_guard;
| >>> ($a, $b, $c) = ($_a, $_b, $_c);
| >>> }
|
| Then no one has to guess what is going on?
|
| But what do you do if $b (say) is tied so that assigning to it needs
| the $abc_guard lock in another thread for the assignment to complete?
| i.e. things get hairy in the "final assignment".
\---

Guys, please read RFC 130. I have already covered most of the
things you are talking about. Look at the example, and the Object
and Tie interface.

THEN we can continue the talk, because there are some white areas in
the implementation. The RFC is near freeze, I think, because there
was no constructive suggestion in the last week. If you want to
develop it, please use this RFC as the base of the discussion.

Thanks,

dLux
--
mailto:[EMAIL PROTECTED] icq:30329785



Re: RFC 178 (v2) Lightweight Threads

2000-09-08 Thread Nick Ing-Simmons

Chaim Frenkel <[EMAIL PROTECTED]> writes:
>
>What tied scalar? All you can contain in an aggregate is a reference
>to a tied scalar. The bucket in the aggregate is a regular bucket. No?

A tied scalar is still a scalar and can be stored in an aggregate.

Well if you want to place that restriction on perl6 so be it but in perl5
I can say 

tie $a[4],'Something';

Indeed that is exactly how tied arrays work - they (automatically) add 
'p' magic (internal tie) to their elements.

Tk apps do this all the time:

 $parent->Label(-textvariable => \$somehash{'Foo'});

The reference is just to get the actual element rather than a copy.
Tk then ties the actual element so it can see STORE ops and update
the label.

-- 
Nick Ing-Simmons




Re: RFC 130 (v4) Transaction-enabled variables for Perl6

2000-09-08 Thread Nick Ing-Simmons

Bart Lateur <[EMAIL PROTECTED]> writes:
>On Wed, 06 Sep 2000 11:23:37 -0400, Dan Sugalski wrote:
>
>>>Here's some high-level emulation of what it should do.
>>>
>>> eval {
>>> my($_a, $_b, $c) = ($a, $b, $c);
>>> ...
>>> ($a, $b, $c) = ($_a, $_b, $_c);
>>> }
>>
>>Nope. That doesn't get you consistency. What you need is to make a local 
>>alias of $a and friends and use that.
>
>My example should have been clearer. I actually intended that $_a would
>be a variable of the same name as $a. It's a bit hard to write currently
>valid code that way. Second attempt:
>
>   eval {
>   ($a, $b, $c) = do {
>   local($a, $b, $c) = ($a, $b, $c); #or my(...)
>   ... # code which may fail
>   ($a, $b, $c);
>   };
>   };
>
>So the final assignment of the local values to the outer scoped
>variables will happen, and in one go, only if the whole block has been
>executed succesfully.

So what is wrong with (if you mean that) saying:

>>> eval {
>>> my($_a, $_b, $_c) = ($a, $b, $c);
>>> ...
lock $abc_guard;
>>> ($a, $b, $c) = ($_a, $_b, $_c);
>>> }

Then no one has to guess what is going on?

But what do you do if $b (say) is tied so that assigning to it needs
the $abc_guard lock in another thread for the assignment to complete?
i.e. things get hairy in the "final assignment".

>
>I would simply block ALL other threads while the final group assignment
>is going on. This should finish typically in a few milliseconds.

So we "only" stall the other CPUs for a few million instructions each ;-)

>
>>It also means that if we're including *any* sort of external pieces (even 
>>files) in the transaction scheme we need to have some mechanism to roll 
>>back changes. If a transaction fails after truncating a 12G file and 
>>writing out 3G of data, what do we do?
>
>That does not belong in the kernel of a language. All that you may
>expect, is transactions on simple variables; plus maybe some hooks to
>attach external transaction code (transactions on files etc) to it. A
>simple "create a new file, and rename to the old filename when done"
>will usually do.

I am concerned that this is making "simple things easyish, BUT hard things 
impossible". i.e. we have a scheme which will be hard to explain, 
will only cover a few fairly uninteresting cases, and get in the 
way of doing it "properly".


-- 
Nick Ing-Simmons