Re: SV: Parrot multithreading?

2001-10-01 Thread Dan Sugalski

At 04:15 PM 9/30/2001 -0400, Sam Tregar wrote:
On Sun, 30 Sep 2001, Nick Ing-Simmons wrote:

  The main problem with perl5 and threads is that threads are an 
 afterthought.

Which, of course, also goes for UNIX and threads and C and threads.
It's good for us to be thinking about as early as possible but it's no
guarantee that there won't be big problems anyway.  Extensions in
C come to mind...

If they follow the rules, things'll be fine. We'll make sure it's all laid 
out clearly.

Has anything come down from the mountain about the future of XS in Perl 6?
Speaking of which, what's taking Moses so long?

Work, life... y'know, the standard stuff. :)

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-10-01 Thread Dan Sugalski

At 09:23 AM 10/1/2001 -0400, Michael Maraist wrote:
   Just because parrot knows what functions can croak, it doesn't mean
   that it can possibly know which locks have been taken out all the way
   back up the stack between the call to longjmp and the corresponding
   setjmp. And, under your scheme we would potentially end up with two
   copies of every utility function - one croak_safe and one croak_unsafe.
 
  Not very likely - the only reason I can find for most utility
  functions (other than possibly string coercions) to fail is either
  panic("out of memory!") or panic("data structures hopelessly
  confused!") (or maybe panic("mutexes not working!")) - anything likely
  to throw a programmatic exception would be at the opcode level, and so
  not be open to being called by random code.

The perl6 high-level description currently suggests that op-codes can
theoretically be written in perl.  Perhaps these are only second-class
op-codes (switched off a single user-defined-op-code), but that suggests
that the good ole die/croak functionality will be desired.

Sure, but that's no problem. Things should propagate up those code streams 
the way they do any other.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: SV: Parrot multithreading?

2001-09-29 Thread Michael Maraist

  or have entered a mutex,

 If they're holding a mutex over a function call without a
 _really_ good reason, it's their own fault.

General perl6 code is not going to be able to prevent someone from
calling code that in turn calls XS code.  Heck, most of what you do in
perl involves some sort of function call (such as stringifying).

Whatever solution is found will probably have to deal with exceptions /
events within a mutex.


That said, there's no reason why we can't have _all_ signal handler
code be:

void sig_handler(int sig) {
  interp->signal = sig;
}

This would just require some special handling within XS I suspect.
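
A minimal sketch of the consuming side, assuming the handler above and an
invented signal field on the interpreter (none of this is actual Parrot API):

    #include <signal.h>

    /* hypothetical interpreter struct; the handler above would set this field */
    typedef struct interp {
        volatile sig_atomic_t signal;
    } interp_t;

    /* called from the op loop at a safe point, so no real work (and no
       longjmp) ever happens inside the signal handler itself */
    static void check_events(interp_t *interp) {
        if (interp->signal) {
            int sig = interp->signal;
            interp->signal = 0;
            /* dispatch sig to the language-level handler here */
            (void)sig;
        }
    }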

-Michael




Re: SV: Parrot multithreading?

2001-09-29 Thread Benjamin Stuhl

--- Alan Burlison [EMAIL PROTECTED] wrote:
 
   or have entered a mutex,
  
  If they're holding a mutex over a function call without a
  _really_ good reason, it's their own fault.
 
 Rubbish.  It is common to take out a lock in an outer function and then
 to call several other functions under the protection of the lock.

Let me be more specific: if you're holding a mutex over a
call back into parrot, it's your own fault. Parrot itself
knows which functions may croak() and which won't, so it
can use utility functions that return a status in places
where it'd be unsafe to croak(). (And true panics probably
should not be croak()s the way they are in perl5 - there's
not much an application can do with "Bizarre copy of
ARRAY".)
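
A hedged sketch of what that split might look like - status_t and
string_grow are invented names for illustration, not Parrot API:

    #include <stdlib.h>

    extern void croak(const char *msg);           /* stand-in for the real thing */

    typedef enum { OK, ERR_NOMEM } status_t;      /* hypothetical */

    /* status-returning variant: callable where a croak() would be unsafe */
    static status_t string_grow(char **buf, size_t *len, size_t want) {
        char *p = realloc(*buf, want);
        if (!p)
            return ERR_NOMEM;                     /* caller decides what to do */
        *buf = p;
        *len = want;
        return OK;
    }

    /* croaking wrapper: used only where the core knows unwinding is safe */
    static void string_grow_or_croak(char **buf, size_t *len, size_t want) {
        if (string_grow(buf, len, want) != OK)
            croak("panic: out of memory");        /* perl5-style escape hatch */
    }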
 
 The alternative is that _every_ function simply return a status, which
 is fundamentally expensive (your real retval has to be an out
 parameter, to start with).
 
 Are we talking 'expensive in C' or 'expensive in parrot?'

Expensive in C (wasted memory bandwidth, code bloat -
cache waste), which translates to a slower parrot.

  It is also slow, and speed is priority #1.
 
 As far as I'm aware, trading correctness for speed is not
 an option.

This is true, which is why I asked if there were any
platforms that have a nonfunctional (set|long)jump.

-- BKS




Re: SV: Parrot multithreading?

2001-09-28 Thread David M. Lloyd

On Fri, 28 Sep 2001, Alan Burlison wrote:

 Arthur Bergman wrote:

  longjmp in a controlled fashion isn't thread-safe? Or longjmping while
  holding mutexes and out from asynchronous handlers is not thread-safe?

 Arthur It *may* be possible to use longjmp in threaded programs in a
 restricted fashion on some platforms.  However if you use it on
 Solaris, for example, where we don't commit to it being thread-safe
 and it breaks - tough.  This includes breakage introduced by either
 new patches or new OS releases, as we haven't committed to it being
 thread-safe in the first place.

This raises another issue:  Is the Perl_croak() thing going to stay
around?  As far as I can tell, this uses siglongjmp.  I personally can't
think of any other way to do this type of exception handling in C, so
either we don't use croak(), find another way to do it, or just deal with
the potential problems.
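
For anyone following along, the general C pattern under discussion
(roughly what perl5's JMPENV machinery wraps; the names here are invented
for illustration) is just:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf catch_point;          /* one per enclosing "eval" scope */

    static void my_croak(const char *msg) {
        fprintf(stderr, "error: %s\n", msg);
        longjmp(catch_point, 1);         /* unwind straight back to the setjmp */
    }

    static void six_levels_down(void) {
        my_croak("something went wrong");
    }

    int main(void) {
        if (setjmp(catch_point) == 0) {
            six_levels_down();           /* may "throw" via longjmp */
        } else {
            /* we land here after my_croak(); nothing between the setjmp and
               the longjmp got a chance to release mutexes or free memory,
               which is exactly the objection raised elsewhere in this thread */
        }
        return 0;
    }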

- D

[EMAIL PROTECTED]




Re: SV: Parrot multithreading?

2001-09-28 Thread Dan Sugalski

At 01:03 PM 9/28/2001 -0500, David M. Lloyd wrote:
On Fri, 28 Sep 2001, Alan Burlison wrote:

  Arthur Bergman wrote:
 
    longjmp in a controlled fashion isn't thread-safe? Or longjmping while
    holding mutexes and out from asynchronous handlers is not thread-safe?
 
  Arthur It *may* be possible to use longjmp in threaded programs in a
  restricted fashion on some platforms.  However if you use it on
  Solaris, for example, where we don't commit to it being thread-safe
  and it breaks - tough.  This includes breakage introduced by either
  new patches or new OS releases, as we haven't committed to it being
  thread-safe in the first place.

This raises another issue:  Is the Perl_croak() thing going to stay
around?  As far as I can tell, this uses siglongjmp.  I personally can't
think of any other way to do this type of exception handling in C, so
either we don't use croak(), find another way to do it, or just deal with
the potential problems.

Croak's going to throw an interpreter exception. There's a little bit of 
documentation about the exception handling opcodes in 
docs/parrot_assembly.pod, with more to come soonish.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-09-28 Thread Benjamin Stuhl

Thus did the Illustrious Dan Sugalski [EMAIL PROTECTED]
write:
 Croak's going to throw an interpreter exception. There's
 a little bit of 
 documentation about the exception handling opcodes in 
 docs/parrot_assembly.pod, with more to come soonish.

This is fine at the target language level (e.g. perl6,
python, jako, whatever), but how do we throw catchable
exceptions up through six or eight levels of C code?
AFAICS, this is more of why perl5 uses the JMP_BUF stuff -
so that XS and functions like sv_setsv() can Perl_croak()
without caring about who's above them in the call stack.
The alternative is that _every_ function simply return a
status, which is fundamentally expensive (your real retval
has to be an out parameter, to start with).
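
A sketch of that alternative for comparison (names invented): the real
result comes back through an out parameter and every intermediate caller
has to check and forward the status by hand.

    typedef enum { PARROT_OK, PARROT_FAIL } pstatus_t;   /* hypothetical */

    static pstatus_t leaf(int in, int *out) {
        if (in < 0)
            return PARROT_FAIL;          /* the error case that used to croak() */
        *out = in * 2;
        return PARROT_OK;
    }

    static pstatus_t middle(int in, int *out) {
        int tmp;
        pstatus_t s = leaf(in, &tmp);    /* the real retval arrives via &tmp */
        if (s != PARROT_OK)
            return s;                    /* forward the error one level up */
        *out = tmp + 1;
        return PARROT_OK;
    }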

-- BKS




RE: SV: Parrot multithreading?

2001-09-28 Thread Hong Zhang


  This is fine at the target language level (e.g. perl6, python, jako,
  whatever), but how do we throw catchable exceptions up through six or
  eight levels of C code? AFAICS, this is more of why perl5 uses the
  JMP_BUF stuff - so that XS and functions like sv_setsv() can
  Perl_croak() without caring about who's above them in the call stack.
 
 This is my point exactly.

This is the wrong assumption. If you don't care about the call stack,
how can you expect [sig]longjmp to successfully unwind the stack?
The caller may have malloc'd a memory block, or have entered a mutex,
or have acquired the file lock on the Perl CVS directory. You probably
have to call Dan or Simon for the last case.

 The alternative is that _every_ function simply return a status, which
 is fundamentally expensive (your real retval has to be an out
 parameter, to start with).

This is the only right solution generally. If you really really really
know everything between setjmp and longjmp, you can use it. However,
the chance is very low.

 To answer my own question (at least, with regards to Solaris), the
 attributes(5) man page says that 'Unsafe' is defined thus:
 
  An Unsafe library contains global and static data that is not
  protected.  It is not safe to use unless the application arranges for
  only one thread at time to execute within the library. Unsafe
  libraries may contain routines that are Safe;  however, most of the
  library's routines are unsafe to call.
 
 This would imply that in the worst case (at least for Solaris) we could
 just wrap calls to [sig]setjmp and [sig]longjmp in a mutex.  'croak'
 happens relatively infrequently anyway.

This is not the point. [sig]setjmp and [sig]longjmp are generally
safe outside a signal handler. Even if they are not safe, we can easily
write our own thread-safe version using a very small amount of assembly
code. The problem is that they cannot be used inside a signal handler
under MT, and it is (almost) impossible to write a thread-safe version.

Hong



RE: SV: Parrot multithreading?

2001-09-28 Thread Benjamin Stuhl

--- Hong Zhang [EMAIL PROTECTED] wrote:
 
   This is fine at the target language level (e.g. perl6, python, jako,
   whatever), but how do we throw catchable exceptions up through six or
   eight levels of C code? AFAICS, this is more of why perl5 uses the
   JMP_BUF stuff - so that XS and functions like sv_setsv() can
   Perl_croak() without caring about who's above them in the call stack.
  
  This is my point exactly.
 
 This is the wrong assumption. If you don't care about the call stack,
 how can you expect [sig]longjmp to successfully unwind the stack?
 The caller may have malloc'd a memory block,

Irrelevant with a GC.

 or have entered a mutex,

If they're holding a mutex over a function call without a
_really_ good reason, it's their own fault.

 or have acquired the file lock on the Perl CVS directory. You probably
 have to call Dan or Simon for the last case.
 
  The alternative is that _every_ function simply return a status, which
  is fundamentally expensive (your real retval has to be an out
  parameter, to start with).
 
 This is the only right solution generally. If you really really really
 know everything between setjmp and longjmp, you can use it. However,
 the chance is very low.

It is also slow, and speed is priority #1.

[snip, snip]
 code. The problem is that they cannot be used inside a signal handler
 under MT, and it is (almost) impossible to write a thread-safe version.

Signals are an event, and so don't need jumps. Under MT,
it's not like there would be a lot of contention for
PAR_jump_lock.

-- BKS




RE: SV: Parrot multithreading?

2001-09-28 Thread Hong Zhang

  This is the wrong assumption. If you don't care about the call stack,
  how can you expect [sig]longjmp to successfully unwind the stack?
  The caller may have malloc'd a memory block,
 
 Irrelevant with a GC.

Are you serious? Do you mean I cannot use malloc in my C code?

  or have entered a mutex,
 
 If they're holding a mutex over a function call without a
 _really_ good reason, it's their own fault.

If you don't care about the caller, why should the caller care about you?
Why do the callers need to present their reason for locking a
mutex? You ask too much.

  or have acquired the file lock on the Perl CVS directory. You
  probably have to call Dan or Simon for the last case.
  
   The alternative is that _every_ function simply return a status, which
   is fundamentally expensive (your real retval has to be an out
   parameter, to start with).
  
  This is the only right solution generally. If you really really really
  know everything between setjmp and longjmp, you can use it. However,
  the chance is very low.
 
 It is also slow, and speed is priority #1.

If so, just use C, which does not check anything.

 Signals are an event, and so don't need jumps. Under MT,
 it's not like there would be a lot of contention for
 PAR_jump_lock.

Show me how to convert SIGSEGV to an event. Please read the previous
messages. Some signals are events, some are not.

Hong



Re: SV: Parrot multithreading?

2001-09-28 Thread Alan Burlison


  or have entered a mutex,
 
 If they're holding a mutex over a function call without a
 _really_ good reason, it's their own fault.

Rubbish.  It is common to take out a lock in an outer function and then
to call several other functions under the protection of the lock.

   The alternative is that _every_ function simply return a status,
   which is fundamentally expensive (your real retval has to be an out
   parameter, to start with).

Are we talking 'expensive in C' or 'expensive in parrot?'

 It is also slow, and speed is priority #1.

As far as I'm aware, trading correctness for speed is not an option.

-- 
Alan Burlison
--
$ head -1 /dev/bollocks
effectively incubate innovative network infrastructures



Re: SV: Parrot multithreading?

2001-09-28 Thread Alan Burlison

Benjamin Stuhl wrote:

 Again, having a GC makes things easier - we clean up
 anything we lost in the GC run. If they don't actually work
 (are there any platforms where they don't work?), we can
 always write our own ;-).

I eagerly await your design for a mutex and CV garbage collector.

-- 
Alan Burlison
--
$ head -1 /dev/bollocks
systematically coordinate e-business transactional integrity



Re: SV: Parrot multithreading?

2001-09-28 Thread Dan Sugalski

At 11:56 PM 9/28/2001 +0100, Alan Burlison wrote:

   or have entered a mutex,
 
  If they're holding a mutex over a function call without a
  _really_ good reason, it's their own fault.

Rubbish.  It is common to take out a lock in an outer function and then
to call several other functions under the protection of the lock.

And every vtable function on shared variables has the potential to acquire a 
mutex. Possibly (probably) more than one.

  It is also slow, and speed is priority #1.

As far as I'm aware, trading correctness for speed is not an option.

No, it isn't.

Short answer, longjmp is out. If we can find a way to use it, or something 
like it, safely on some platforms we might, but otherwise no.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-09-25 Thread Bryan C . Warnock

 On Monday 24 September 2001 11:54 am, Dan Sugalski wrote:
  Odds are you'll get per-op event checking if you enable debugging, since
  the debugging oploop will really be a generic check event every op
  loop that happens to have the pending debugging event bit permanently
  set. Dunno whether we want to force this at compile time or consider
  some way to set it at runtime. I'd really like to be able to switch
  oploops dynamically, but I can't think of a good way to do that
  efficiently.

On a side note, back when I was doing some of my initial benchmarking, I 
came up with this solution to the opcode loop / event check conundrum: 
eventless events.  (An attempt to integrate opcodes, events, and priorities.)
For those that want the executive summary, it worked, but was so slow (slow 
as in measured-in-multiples-rather-than-percentages slow) that I never 
pursued it further.  (Particularly because checking a flag is so relatively 
inexpensive, really.)

Currently, the DO_OP loop is essentially a 1x1 table for opcode dispatch. 
(By 1x1, I mean one priority level, one pending opcode deep.)  Events are a 
completely separate beast.  

So I elected to abstract an event as a set series of opcodes that run at a 
given priority, as would be referenced (basically) by the head of that 
particular branch of the opcode tree.  I set an arbitrary number of
priorities (and assigned meanings to them), from signals to async I/O to
user-defined callbacks.

To remove the last vestige of distinction between regular opcodes and 
events, I abstracted regular code as a single event that ran at the lowest 
priority.  (Or the next-to-lowest.  I was contemplating, at one point, 
having BEGIN, INIT, CHECK, and END blocks implemented in terms of priority.) 
So now every opcode stream is an event, or every event is an opcode stream; 
depending on how you care to look at it.

So now you have a 'p' x 1 table for opcode dispatch, where 'p' is the 
different possible run-levels within the interpreter, with one pending 
opcode (branch head) per runlevel.

But, of course, you can have pending events.  Given our (Uri, Dan, Simon,
and I - way back at Uri's BOF at the OSCon) previous agreement that
events at a given priority shouldn't preempt an already scheduled event at
that priority, we needed a way to queue events so that they weren't lost, but
would still be processed at the correct time (according to our scheduler).
So I lengthened the width of the table to handle 'e' events.

I've now a 'p' x 'e' table.  (Implemented as an array ['p'] of linked lists 
['e'].)  Now to offload the event overhead onto the events themselves.

Each interpreter has its current priority available.  The DO_OP loop uses 
that priority as the offset into the dispatch table (up the 'p' axis).  The 
first opcode in the list is what gets executed.  That opcode, in turn, then 
updates itself (the table entry) to point to the next opcode within the 
particular event.

When a new event arrives, it appends its branch head to the priority list, 
and repoints the interpreter's current priority if it is now the highest.  
(This, in effect, suspends the current opcode stream, and the DO-OP loop 
begins processing the higher-level code immediately.  When regular 
processing resumes, it picks up more or less exactly from where it left off.)

When the event exits, it deletes its own node in the linked list, and, if 
it were the last branch at that priority,  repoints the current priority to 
the next highest priority that needs to be processed.  It took a 
while to come up with the necessary incantations to Do The Right Thing when 
the priority switchers were themselves interrupted by an event at a higher, 
lower, or identical priority to the one that was just leaving.
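
A rough sketch of the shape of the thing, with made-up names (illustrative
only, not the actual code I benchmarked against):

    #include <stdlib.h>

    #define NUM_PRIORITIES 8                     /* arbitrary for the sketch */
    typedef int opcode_t;                        /* stand-in for the real type */

    typedef struct event_node {                  /* one pending event (branch head) */
        opcode_t          *pc;                   /* next opcode to run for it */
        struct event_node *next;                 /* older events, same priority */
    } event_node_t;

    typedef struct {
        event_node_t *pending[NUM_PRIORITIES];   /* the 'p' x 'e' table */
        int           cur_priority;              /* what DO_OP dispatches from */
    } interp_t;

    /* a new event appends its branch head and repoints the current priority
       if it is now the highest, suspending the running opcode stream */
    static void post_event(interp_t *i, opcode_t *head, int prio) {
        event_node_t  *e    = malloc(sizeof *e);
        event_node_t **slot = &i->pending[prio];
        e->pc   = head;
        e->next = NULL;
        while (*slot)                            /* append: same-priority events
                                                    never preempt each other */
            slot = &(*slot)->next;
        *slot = e;
        if (prio > i->cur_priority)
            i->cur_priority = prio;
    }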

Sure, events were a lot hairier themselves than how they currently look, but 
events and priorities are still rather non-existent on paper - who knows how 
hairy they may become to work properly.  Besides, cleaning up the opcode 
dispatch itself was supposed to make up the difference.

For those of you playing along at home, I'm sure you obviously see why 
*that's* not the case.  Testing equality is one of the more efficient 
processor commands; more so when testing for non-zero (on machines that have 
a zero-register, or support a test for non-zero).  Which is all a check 
against an event flag would do.  Instead, I replaced it with doubly 
indirected pointer dereferencing, which is not only damn inefficient (from a 
memory, cache, and paging perspective), but also can't be optimized into 
something less heinous.

An oft-mentioned (most recently by Simon on language-dev) lament WRT Perl 6 
is the plethora of uninformed-ness from contributors.  So I am now informed. 
And so are you, if you weren't already.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: SV: Parrot multithreading?

2001-09-24 Thread Uri Guttman

 DS == Dan Sugalski [EMAIL PROTECTED] writes:

   do we always emit one in
   loops?

  DS At least one per statement, probably more for things like regexes.

   what about complex conditional code? i don't think there is an
   easy way to guarantee events are checked with inserted op codes. doing
   it in the op loop is better for this.

  DS I'd agree in some cases, but I don't think it'll be a big problem
  DS to get things emitted properly. (It's funny we're arguing exactly
  DS opposite positions than we had not too long ago... :)

true!

then what about a win/win? we could make the event checking style a
compile time option. an event pragma will set it to emit op codes, or
check in the op loop, or do no checking in the loop but have a main
event loop. we need 2 or 3 variant op loops for that (very minor
variants) and some minor compile time conditions. i just like to be able
to offer control to the coder. we can make the emit-event-checks version
the default as that will satisfy the most users with the least trouble.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: SV: Parrot multithreading?

2001-09-24 Thread Michael Maraist


 Odds are you'll get per-op event checking if you enable debugging, since
 the debugging oploop will really be a generic check event every op loop
 that happens to have the pending debugging event bit permanently set.
 Dunno whether we want to force this at compile time or consider some way to
 set it at runtime. I'd really like to be able to switch oploops
 dynamically, but I can't think of a good way to do that efficiently.

If you're looking to dynamically insert status checks every op, then
that sounds like picking a different runops function.  We've already got a
trace variant.  We could farm out a couple of these and have execution
flags specify which one to use.  If you wanted every 5th op to check
flags, you could trivially do:

while (code) {
  DO_OP(..);
  if (code) DO_OP(..);
  if (code) DO_OP(..);
  if (code) DO_OP(..);
  if (code) DO_OP(..);
  CHECK_EVENTS(interp);
}

The inner loop is a little bigger, but aside from cache-issues, has no
performance overhead.  This would prevent having to interleave check-ops
everywhere (more importantly, it would reduce the complexity of the
compiler, which would have to guarantee the injection of check-events inside
all code-paths, especially for complex flow-control like last FOO).
You could use asynchronous timers to set various flags in the check-events
section (such as gc every so-often).  Of course this requires using a more
sophisticated alarm/sleep control system than the simple wrapper around
alarm/sleep and $SIG{X}, etc.
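
For example, something along these lines (a hedged, POSIX-only sketch with
invented names) could set a gc flag once a second for CHECK_EVENTS to notice:

    #include <signal.h>
    #include <sys/time.h>

    static volatile sig_atomic_t gc_requested;   /* read by CHECK_EVENTS */

    static void alarm_handler(int sig) {
        (void)sig;
        gc_requested = 1;        /* only set the flag; the op loop does the work */
    }

    static void start_gc_timer(void) {
        struct itimerval tv;
        tv.it_interval.tv_sec  = 1;              /* re-fire every second */
        tv.it_interval.tv_usec = 0;
        tv.it_value.tv_sec     = 1;              /* first firing in one second */
        tv.it_value.tv_usec    = 0;
        signal(SIGALRM, alarm_handler);
        setitimer(ITIMER_REAL, &tv, NULL);
    }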

Other methods might be: whenever a dynamic variable reference is
reassigned / derefed, an event flag is set to queue the gc, etc.

-Michael




Re: SV: Parrot multithreading?

2001-09-24 Thread Michael Maraist

 then what about a win/win? we could make the event checking style a
 compile time option.

 Odds are you'll get per-op event checking if you enable debugging, since
 the debugging oploop will really be a generic check event every op loop
 that happens to have the pending debugging event bit permanently set.
 Dunno whether we want to force this at compile time or consider some way to
 set it at runtime. I'd really like to be able to switch oploops
 dynamically, but I can't think of a good way to do that efficiently.


long-jump!!!

runops(bla bla) {
  setjmp(..);
  switch (flags) {           /* case labels sketched in for clarity */
    case FAST:          fast_runops(bla bla);         break;
    case DEBUG:         debug_runops(bla bla);        break;
    case TRACE:         trace_runops(bla bla);        break;
    case CONSERVATIVE:  conservative_runops(bla bla); break;
    case THREAD_SAFE:   thread_safe_runops(bla bla);  break;
  }
}

AUTO_OP sys_opcode_change_runops {
  bla bla
  set run-flags..
  longjmp(..)
}

In C++ I'd say throw the appropriate exception, but this is close enough.

This would work well for fake-threads too, since each thread might have a
different desired main-loop.  You'd have to do something like this if you
transitioned between non-threaded and threaded anyway.

-Michael




Re: SV: Parrot multithreading?

2001-09-24 Thread Dan Sugalski

At 12:27 PM 9/24/2001 -0400, Michael Maraist wrote:
  then what about a win/win? we could make the event checking style a
  compile time option.
 
  Odds are you'll get per-op event checking if you enable debugging, since
  the debugging oploop will really be a generic check event every op loop
  that happens to have the pending debugging event bit permanently set.
  Dunno whether we want to force this at compile time or consider some way to
  set it at runtime. I'd really like to be able to switch oploops
  dynamically, but I can't think of a good way to do that efficiently.
 

long-jump!!!

I did say *good* way... :)

This would work well for fake-threads too

We're not doing fake threads. Luckily we don't need it for real ones.


Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-09-24 Thread Uri Guttman

 DS == Dan Sugalski [EMAIL PROTECTED] writes:

   then what about a win/win? we could make the event checking style a
   compile time option.

  DS Odds are you'll get per-op event checking if you enable debugging,
  DS since the debugging oploop will really be a generic check event
  DS every op loop that happens to have the pending debugging event
  DS bit permanently set.  Dunno whether we want to force this at
  DS compile time or consider some way to set it at runtime. I'd really
  DS like to be able to switch oploops dynamically, but I can't think
  DS of a good way to do that efficiently.

hmmm. what about a special op that implements another form of op loop?
the overhead is almost nil (one op call). the called op loop can run
forever or decide to return and then the parent op loop takes over
again.

this would be very cool for event loop management. you could force a
scan of events explicitly by making a call to an event-flag-checking loop
when you feel like it in some large crunching code. similarly, you could
enable a debug/trace/event flag loop explicitly at run time. we would
need some form of language support for this but it is nothing odd. just
a special var or call that selects a loop type. the parrot code
generated is just the op loop set function. it could be block scoped or
global (which means all code/calls below this use it).
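
a quick sketch of what such an op might look like (names are invented,
nothing here is real parrot code):

    typedef int opcode_t;                        /* minimal stand-ins */
    typedef struct { int leave_nested; } interp_t;

    extern opcode_t *do_one_op(opcode_t *pc, interp_t *interp);

    /* an op whose body is itself another (here: event-checking) op loop;
       when it decides to return, the parent op loop just takes over again */
    static opcode_t *op_enter_checking_loop(opcode_t *pc, interp_t *interp) {
        pc++;                                    /* step past this op itself */
        while (*pc && !interp->leave_nested) {
            /* an event-flag check could go here on every iteration */
            pc = do_one_op(pc, interp);
        }
        return pc;                               /* parent op loop resumes here */
    }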

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: SV: Parrot multithreading?

2001-09-24 Thread David M. Lloyd

On Mon, 24 Sep 2001, Uri Guttman wrote:

then what about a win/win? we could make the event checking style a
compile time option.

   DS Odds are you'll get per-op event checking if you enable debugging,
   DS since the debugging oploop will really be a generic check event
   DS every op loop that happens to have the pending debugging event
   DS bit permanently set.  Dunno whether we want to force this at
   DS compile time or consider some way to set it at runtime. I'd really
   DS like to be able to switch oploops dynamically, but I can't think
   DS of a good way to do that efficiently.

 hmmm. what about a special op that implements another form of op loop?
 the overhead is almost nil (one op call). the called op loop can run
 forever or decide to return and then the parent op loop takes over
 again.

This type of approach could be implemented in an extension module, could
it not?  Because of the current flexible design of Parrot, we don't have
to implement this type of opcode in the core any more than, say, fork.  Do
we?

- D

[EMAIL PROTECTED]




Re: SV: Parrot multithreading?

2001-09-24 Thread Bryan C . Warnock

On Monday 24 September 2001 11:54 am, Dan Sugalski wrote:
 Odds are you'll get per-op event checking if you enable debugging, since
 the debugging oploop will really be a generic check event every op loop
 that happens to have the pending debugging event bit permanently set.
 Dunno whether we want to force this at compile time or consider some way
 to set it at runtime. I'd really like to be able to switch oploops
 dynamically, but I can't think of a good way to do that efficiently.

Embed (them) within an outer loop (function).  Program end would propagate
the finish.  Otherwise, simply redirect to a new runops routine.
Potentially increases the call-stack by one, but the performance hit only
occurs during the switch.  Or you could collapse it all, if you have a fixed
number, into a switch.

runops ( ... )
{
    run_ops_t run_ops_type = BLUE_MOON;

    while (opcode != END) {

        switch (run_ops_type) {

        /* I want those events checked... */
        case YESTERDAY:
            while (opcode == VALID) { DO_OP1(); }
            break;

        /* Check the events every... */
        case NOW_AND_THEN:
            while (opcode == VALID) { DO_OP2(); }
            break;

        /* Look for an event once in a... */
        case BLUE_MOON:
            while (opcode == VALID) { DO_OP3(); }
            break;

        /* I'll check for an event when... */
        case HELL_FREEZES_OVER:
            while (opcode == VALID) { DO_OP4(); }
            break;
        }
        run_ops_type = new_runops_loop(I, opcode);
    }
    /* yada yada yada */
}
  


-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: SV: Parrot multithreading?

2001-09-21 Thread Dan Sugalski

At 09:07 PM 9/20/2001 -0400, Uri Guttman wrote:
  DS == Dan Sugalski [EMAIL PROTECTED] writes:


   DS There probably won't be any. The current thinking is that since
   DS the ops themselves will be a lot smaller, we'll have an explicit
   DS event checking op that the compiler will liberally scatter through
   DS the generated code. Less overhead that way.

we talked about that solution before and i think it has some
problems. what if someone writes a short loop. will it generate enough
op codes that a check_event one is emitted?

The compiler will make sure, yes.

do we always emit one in
loops?

At least one per statement, probably more for things like regexes.

what about complex conditional code? i don't think there is an
easy way to guarantee events are checked with inserted op codes. doing
it in the op loop is better for this.

I'd agree in some cases, but I don't think it'll be a big problem to get 
things emitted properly. (It's funny we're arguing exactly opposite 
positions than we had not too long ago... :)


Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-09-20 Thread Michael L Maraist

Arthur Bergman wrote:

  Arthur Bergman wrote:
 
   In an effort to rest my brain from a coredumping perl5 I started to
   think a bit on threading under parrot?
  
   While it has been decided that perl should be using ithread-like
   threading, I guess that is irrelevant at the parrot level. Are you
   going to have one virtual cpu per thread with its own set of registers
   or are you going to context switch the virtual cpu?
  
   If it was one virtual cpu per thread then one would just create a new
   virtual cpu and feed it the bytecode stream?
  
   Is there anything I could help with regarding this?
  
   Arthur
 
  The context is almost identical to that of Perl5's MULTIPLICITY, which
  passes the perl-interpreter to each op-code.  Thus there is inherent
  support for multiple ithread-streams.  In the main-loop (between each
  invoked op-code) there is an event-checker (or was in older versions at
  any rate).  It doesn't do anything yet, but it would make sense to
  assume that this is where context-switches would occur, which would
  simply involve swapping out the current pointer to the perl-context; a
  trivial matter.

 Uhm, are you talking perl 5 here? The event checker checks for signals;
 we got safe signals now.

There wasn't any code for CHECK_EVENTS w/in Parrot when I first read the
source-code.  I merely assumed that its role was not-yet determined, but
considered the possible uses.  CHECK_EVENTS seems to be gone at the
moment, so it's a moot point.


 MULTIPLICITY is just allowing multiple interpreters; ithreads is letting
 them run at the same time and properly clone them. If you want to use it
 to switch interpreters at runtime for fake threads, patches are welcome;
 send it and I will apply it.



  The easiest threading model I can think of would be to have a global
  var called next_interpreter which is always loaded in the do-loop.  An
  asynchronous timer (or event) could cause the value of next_interpreter
  to be swapped.  This way no schedule function need be checked on each
  operation.  The cost is that of an extra indirection once per op-code.
 
  True MT code simply has each thread use its own local interpreter
  instance.  MT-code is problematic with non-MT-safe extensions (since
  you can't enforce that).

 I am sorry to say, but perl 5 is true MT.

Yes, but that feature never got past being experimental.  I know of a
couple DBDs that would not let you compile XS code with MT enabled since
they weren't MT-safe.  The interpreter can be built MT-safe (java is a
good example), but extensions are always going to be problematic
(especially when many extensions are simply wrappers around existing
non-MT-aware APIs).  I think a good solution for them would be to treat
it like X does (which says you can only run X-code w/in the main thread).
An extension could say whether it was MT-safe or not, and be forced to be
serialized w/in the main physical thread, which becomes the monitoring
thread.  An alternative would be to simply have XS code compile in a flag
which says to throw an exception if the code is run outside of the main
thread; documentation would emphatically state that it's up to the user
to design the system such that only the main thread calls it.
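
Something as simple as this (a hedged sketch; the flag and struct names
are made up) would cover the refuse-outright variant:

    #include <stdio.h>
    #include <stdlib.h>

    #define EXT_FLAG_MT_SAFE 0x01u    /* hypothetical flag, set when the XS is built */

    typedef struct {
        const char *name;
        unsigned    flags;
    } extension_t;

    /* called before entering extension code from any thread */
    static void ext_call_guard(const extension_t *ext, int in_main_thread) {
        if (!(ext->flags & EXT_FLAG_MT_SAFE) && !in_main_thread) {
            /* real code would throw an interpreter exception or proxy the
               call over to the main thread; abort() is just a stand-in */
            fprintf(stderr, "%s is not MT-safe outside the main thread\n",
                    ext->name);
            abort();
        }
    }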

On the side, I never understood the full role of iThreads w/in perl 5.6.  As far as I 
understood, it was merely used as a way of faking fork on NT by running multiple 
true-threads that don't share any globals.  I'd be curious to learn if there were 
other known uses for it.



  In iThread, you don't have a problem with atomic operations, but you
  can't take advantage of multiple CPUs nor can you guarantee prevention
  of IO-blocking (though you can get sneaky with UNIX-select).
 

 Where did you get this breaking info? ithread works with multiple CPUs and IO 
blocking is not a problem.

 Arthur

I'm under the impression that the terminology for iThreads assumes an
independence from the physical threading model.  As other posters have
noted, there are portability issues if we require hardware threading.
Given the prospect of falling back to fake-threads, multi-CPU use and IO
blocking are problematic, though the latter can be avoided / minimized
if async-IO is somehow enforced.  From my scarce exposure to the Linux
Java movement, green-threads were considered more stable for a long
time, even though the porters were just trying to get things to work on
one platform.

I would definitely like hardware threading to be available.  If nothing
else, it lets students taking Operating Systems experiment with threading
w/o all the headaches of C.  (Granted there's Java, but we like perl.)
However, I'm not convinced that threading won't ultimately be restrictive
if used for general operation (such as for the IO-subsystem).  I'm
inclined to believe that threading is only necessary when the user
physically wants it (e.g. requests it), and that in many cases
fake-threads fulfill the basic desires of everyone involved

Re: SV: Parrot multithreading?

2001-09-20 Thread Dan Sugalski

At 05:23 PM 9/20/2001 -0400, Michael L Maraist wrote:
There wasn't any code for CHECK_EVENTS w/in Parrot when I first read the
source-code.  I merely assumed that its role was not-yet determined, but
considered the possible uses.  CHECK_EVENTS seems to be gone at the
moment, so it's a moot point.

There probably won't be any. The current thinking is that since the ops 
themselves will be a lot smaller, we'll have an explicit event checking op 
that the compiler will liberally scatter through the generated code. Less 
overhead that way.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: SV: Parrot multithreading?

2001-09-20 Thread Uri Guttman

 DS == Dan Sugalski [EMAIL PROTECTED] writes:


  DS There probably won't be any. The current thinking is that since
  DS the ops themselves will be a lot smaller, we'll have an explicit
  DS event checking op that the compiler will liberally scatter through
  DS the generated code. Less overhead that way.

we talked about that solution before and i think it has some
problems. what if someone writes a short loop. will it generate enough
op codes that a check_event one is emitted? do we always emit one in
loops? what about complex conditional code? i don't think there is an
easy way to guarantee events are checked with inserted op codes. doing
it in the op loop is better for this. or of course, go with an event
loop style dispatcher but then the perl level programs need to be
written for that style.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org