Re: [perl #39597] Problems with string constants in method calls

2006-06-24 Thread Matt Diephouse

via RT Matt Diephouse [EMAIL PROTECTED] wrote:

# New Ticket Created by  Matt Diephouse
# Please include the string:  [perl #39597]
# in the subject line of all future correspondence about this issue.
# URL: https://rt.perl.org/rt3/Ticket/Display.html?id=39597 


The following code in lines 108-110 of languages/tcl/src/class/tclcommand.pir
is giving Parrot some trouble:

inlined.emit("  if epoch != %0 goto dynamic_%1", epoch, label_num)
inlined .= retval
inlined.emit("  goto end_%0", label_num)


It looks like pbc_merge is the actual source of the trouble here. If I
change languages/tcl/src/tclsh.pir to load the individual bytecode
files instead of the merged file, it works as expected.

--
matt diephouse
http://matt.diephouse.com


Re: lexical lookup and OUTER::

2006-06-24 Thread Nicholas Clark
On Fri, Jun 23, 2006 at 01:43:03PM -0700, Matt Diephouse wrote:

 While you can't do this with find_lex currently, you *can* do it. Tcl
 walks the lexpads to find lexicals. (See
 languages/tcl/runtime/variables.pir):

[Parrot assembler implementation]

 Of course, that doesn't mean that I wouldn't like an opcode to do it for 
 me. :-)
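
For readers who don't want to dig into variables.pir, the idea can be
sketched in a few lines of Scheme (this is only an illustration with
invented names, not the elided PIR above): each lexical pad keeps a link
to its outer pad, and lookup simply walks outward until the name is found.

   ;; A pad is modeled as (alist-of-variables . outer-pad); #f marks the
   ;; outermost pad.  All names here are invented for illustration.
   (define (make-pad vars outer) (cons vars outer))
   (define (pad-vars pad) (car pad))
   (define (pad-outer pad) (cdr pad))

   ;; Walk outward through the pads until the name is found.
   (define (find-lexical name pad)
     (cond ((not pad) (error "no such lexical:" name))
           ((assq name (pad-vars pad)) => cdr)
           (else (find-lexical name (pad-outer pad)))))

   ;; Example: an inner pad binding y whose outer pad binds x.
   (define outer-pad (make-pad (list (cons 'x 42)) #f))
   (define inner-pad (make-pad (list (cons 'y 1)) outer-pad))
   (display (find-lexical 'x inner-pad))   ; prints 42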

Is Parrot assembler considered a more productive language to write in than C?
If yes, is it logical to write opcodes such as this one in Parrot assembler
itself?

Nicholas Clark


Re: lexical lookup and OUTER::

2006-06-24 Thread Audrey Tang

2006/6/24, Nicholas Clark [EMAIL PROTECTED]:

On Fri, Jun 23, 2006 at 01:43:03PM -0700, Matt Diephouse wrote:
[Parrot assembler implementation]

 Of course, that doesn't mean that I wouldn't like an opcode to do it for
 me. :-)

Is Parrot assembler considered a more productive language to write in than C?
If yes, is it logical to write opcodes such as this one in Parrot assembler
itself?


Err, well, that will likely completely kill the performance. :-)

I had an implementation before Parrot had lexical pads, using global
variables and manual walking to emulate it.  It was horribly slow.
Especially consider that Pugs currently compiles:

   my $x; sub f { say $x; ...

to

   my $x; sub f { say $OUTER::x; ...

because later in the scope $x may be declared, so it's safer to just
put OUTER right there.

Of course, if we cannot have an efficient OUTER lookup, we can always
do a post analysis on the block body and convert unnecessary OUTER::
away, but I don't think it's a good idea to force compiler writers to
do that.

Thanks,
Audrey


Re: lexical lookup and OUTER::

2006-06-24 Thread Patrick R. Michaud
On Sat, Jun 24, 2006 at 08:03:47AM -0700, Audrey Tang wrote:
 2006/6/24, Nicholas Clark [EMAIL PROTECTED]:
 Is Parrot assembler considered a more productive language to write in than C?
 If yes, is it logical to write opcodes such as this one in Parrot assembler
 itself?
 
 Err, well, that will likely completely kill the performance. :-)
 
 I had an implementation before Parrot had lexical pads, using global
 variables and manual walking to emulate it.  It was horribly slow.
 Especially consider that Pugs currently compiles:
 
my $x; sub f { say $x; ...
 
 to
 
my $x; sub f { say $OUTER::x; ...
 
 because later in the scope $x may be declared, so it's safer to just
 put OUTER right there.

I don't think $x can be declared later in the scope.  According to S04,

If you've referred to $x prior to the first declaration, 
and the compiler tentatively bound it to $OUTER::x, 
then it's an error to declare it, and the compiler is 
allowed to complain at that point.

The Perl 6/Parrot compiler is simply using the find_lex opcode,
which of course does the outer lookups directly.  OUTER comes into 
play only when we need to explicitly exclude the current lexical 
scope from the lookup.
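
As a rough sketch of that distinction (Scheme, with invented names,
mirroring the pad model sketched earlier in this digest): a plain lookup
starts at the current pad, while an OUTER-style lookup starts one pad
further out and therefore never sees a shadowing binding in the current
scope.

   ;; Pads as (alist . outer-pad), #f for the outermost; names invented.
   (define (find-lex name pad)
     (cond ((not pad) (error "no such lexical:" name))
           ((assq name (car pad)) => cdr)
           (else (find-lex name (cdr pad)))))

   ;; An OUTER-style lookup excludes the current pad by starting the
   ;; search at its outer pad.
   (define (find-outer-lex name pad)
     (find-lex name (cdr pad)))

   ;; The inner pad shadows x; the OUTER lookup skips the shadowing binding.
   (define outer-pad (cons (list (cons 'x 'outer-value)) #f))
   (define inner-pad (cons (list (cons 'x 'inner-value)) outer-pad))
   (display (find-lex 'x inner-pad))        ; prints inner-value
   (newline)
   (display (find-outer-lex 'x inner-pad))  ; prints outer-value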

 Of course, if we cannot have an efficient OUTER lookup, we can always
 do a post analysis on the block body and convert unnecessary OUTER::
 away, but I don't think it's a good idea to force compiler writers to
 do that.

Part of me wonders if OUTER occurs frequently enough that it
needs a C-based opcode.  However, given that we have at least 
two languages (Perl 6 and Tcl) that are indicating they need 
to be able to do OUTER lookups, it may deserve an opcode.

Pm


Re: lexical lookup and OUTER::

2006-06-24 Thread Nicholas Clark
On Sat, Jun 24, 2006 at 10:41:44AM -0500, Patrick R. Michaud wrote:
 On Sat, Jun 24, 2006 at 08:03:47AM -0700, Audrey Tang wrote:
  2006/6/24, Nicholas Clark [EMAIL PROTECTED]:
  Is Parrot assembler considered a more productive language to write in than C?
  If yes, is it logical to write opcodes such as this one in Parrot assembler
  itself?
  
  Err, well, that will likely completely kill the performance. :-)

 Part of me wonders if OUTER occurs frequently enough that it
 needs a C-based opcode.  However, given that we have at least 

Which was sort of my question, although I wasn't clear.

 two languages (Perl 6 and Tcl) that are indicating they need 
 to be able to do OUTER lookups, it may deserve an opcode.

Is it only possible to write Parrot opcodes in C?

Nicholas Clark


Parrot IO

2006-06-24 Thread Vishal Soni

Hi,

Is Parrot IO going to be implemented via opcodes or PMCs?

I looked at some old email discussions. There were discussions about refactoring
some IO opcodes into PMCs (e.g. socket opcodes). Have we reached any
decisions as to how we are going to implement Parrot IO?

--
Thanks,
Vishal


Re: lexical lookup and OUTER::

2006-06-24 Thread Audrey Tang


On 2006/6/24, at 8:41 AM, Patrick R. Michaud wrote:

because later in the scope $x may be declared, so it's safer to just
put OUTER right there.


I don't think $x can be declared later in the scope.  According to  
S04,


If you've referred to $x prior to the first declaration,
and the compiler tentatively bound it to $OUTER::x,
then it's an error to declare it, and the compiler is
allowed to complain at that point.



Hmm, looks like it's been changed this April.  In that case, indeed  
the emitter can safely remove the implicit OUTER calls. Pugs's Parrot  
backend has been updated accordingly.  Thanks!


(...and Cc'ing p6l for the part below)

However, the spec wording is ambiguous:

$x = 1 if my $x;

The compiler is allowed to complain, but does that mean it's also
okay not to die fatally, and to recover by pretending the user had
said this?


# Current Pugs behaviour
$OUTER::x = 1 if my $x;

If it's required to complain, then the parser needs to remember all
such uses and check them against later declarations, and it'd be better
to say that in the spec instead.


Thanks,
Audrey



Re: lexical lookup and OUTER::

2006-06-24 Thread Patrick R. Michaud
On Sat, Jun 24, 2006 at 04:52:26PM -0700, Audrey Tang wrote:
 $x = 1 if my $x;
 
 The compiler is allowed to complain, but does that mean it's also
 okay not to die fatally, and to recover by pretending the user had
 said this?
 
 # Current Pugs behaviour
 $OUTER::x = 1 if my $x;

I think that a statement like C< $x = 1 if my $x; > ought to
complain.

Put slightly differently, if it's an error in any of the compilers,
it probably should be an error in all of them.

 If it's required to complain, then the parser needs to remember all
 such uses and check them against later declarations, and it'd be better
 to say that in the spec instead.

I think that S04's phrase "then it's an error to declare it"
indicates that this should always be treated as an error.  How/when
the compiler chooses to report the error is up to the compiler.  :-)
That said, I wouldn't have any objection to removing or altering
the "the compiler is allowed to complain at that point" phrase so
as to remove this particular ambiguity.

Pm


Exceptions, dynamic scope, Scheme, and Lisp: A modest proposal

2006-06-24 Thread Bob Rogers
   From: Chip Salzenberg [EMAIL PROTECTED]
   Date: Tue, 20 Jun 2006 20:59:45 -0700

   WRT exception handling, I think the lisp condition/handler model is a good
   starting point.  It's simple enough to explain and use, and static models
   can easily be implemented in terms of it.

Excellent; I'm sure you won't be surprised to learn that I agree.  ;-}

   But I really don't like one thing about the CL handler model: it conflates
   non-local transfers of control with "this exception is now handled."

FWIW, some pre-ANSI implementations did have a mechanism for marking a
condition as having been handled, but in the long run, I believe this
was considered not useful.  In particular, having a stateless
condition object (i.e. one that does not record its progress through the
signalling mechanism) makes it cleaner to resignal the same condition
object later on.

   So (1) every continuation invocation has to check to see whether an
   exception is live so it can be marked dead, which complicates what
   should be as efficient as possible, . . .

I don't understand this.  Is the exception itself a first-class object?
If so, then whether it's live or dead is up to the GC.  If not, then
users can't be allowed to get their grubby paws on it, lest they stuff
it away in some persistent data structure.  But probably I have a
fundamental misunderstanding here; maybe you meant "live" and "dead" in
a different sense?

   . . . and (2) creative condition handlers can't use continuations as
   an implementation tool.

I don't understand this either (I'm certainly planning on doing so), but
that is probably because you've already lost me.

   But I see a way out; see below.

   On Thu, Jun 15, 2006 at 12:03:56AM -0400, Bob Rogers wrote:
   3.  FWIW, the Scheme dynamic-wind feature requires an action to be
invoked when re-entering the context as well as leaving it.  But this is
probably not relevant, as a real Scheme implementation would probably
not need Parrot continuations or actions in any case.
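
To make the re-entry behaviour concrete, here is a small Scheme
illustration (run as a single program; the names are invented for this
example): the before thunk of dynamic-wind fires again when a saved
continuation jumps back into the extent, and the after thunk fires on
every exit.

   ;; dynamic-wind's before thunk runs on every entry to the extent,
   ;; including re-entry via a saved continuation; the after thunk runs
   ;; on every exit.
   (define saved-k #f)

   (define (demo)
     (dynamic-wind
       (lambda () (display "enter "))
       (lambda ()
         (call-with-current-continuation
           (lambda (k) (set! saved-k k)))
         (display "body "))
       (lambda () (display "leave ") (newline))))

   (demo)                ; prints: enter body leave
   (let ((k saved-k))
     (set! saved-k #f)   ; only re-enter once
     (if k (k #f)))      ; prints: enter body leave (the before thunk ran again)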

   Huh, that's odd, coming from you.  Having just spent the better part of my
   evening wrapping my head around call/cc and dynamic-wind, I'm about to
   modify pdd23 to replace push_handler with:

push_handler??  I assume you mean pushaction?

   $P0 = newclosure sub_to_call_when_entering_scope
   $P1 = newclosure sub_to_call_when_leaving_scope
   $P2 = newclosure sub_to_call_when_scope_is_finally_inaccessible
   push_dynscope $P0, $P1, $P2   # [*]
   ...
   pop_dynscope  # [**]

   So, having chosen Scheme as a good model for scope and continuation
   handling, wouldn't a Scheme compiler want to take advantage of that?

Good question.  ;-} I could be mistaken, but I said this because I
believe that Scheme has a different idea of continuation than Parrot.
In brief [1], if you rewrite call-and-return into pure
continuation-passing calling, it looks like a call into the sub followed
by another call to the continuation.  All of these can in fact be
implemented as tail calls, since there are no returns, and Scheme
stipulates that you *must* tail-merge where possible.  This means you
don't need a stack -- there is never more than one activation record at
any given time -- so all of the state required for a continuation can be
captured by a closure.  In the Lisp community generally, closure and
continuation are therefore often used interchangeably, though this
does sweep some distinctions under the rug.

   So all a CPS language really needs is support for closures.  Such an
implementation is truly and utterly stackless, which means that
dynamic-wind needs to keep its own stack explicitly, and similarly for
dynamic binding (which, IIUC, is generally implemented in terms of
dynamic-wind).
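
A minimal sketch of that last point (invented names, not a real
implementation of dynamic binding): the entry and exit thunks of
dynamic-wind save and restore the binding, so it is also undone and
redone whenever a continuation leaves or re-enters the extent.

   (define *depth* 0)   ; a "dynamic" variable emulated with a global

   ;; Run thunk with *depth* dynamically set to new-value; the entry and
   ;; exit thunks install and remove the binding on every entry to and
   ;; exit from the dynamic extent, including via continuations.
   (define (with-depth new-value thunk)
     (let ((saved #f))
       (dynamic-wind
         (lambda () (set! saved *depth*) (set! *depth* new-value))
         thunk
         (lambda () (set! *depth* saved)))))

   (with-depth 4 (lambda () (display *depth*)))   ; prints 4
   (newline)
   (display *depth*)                              ; prints 0 again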

   Mind you, my knowledge of Scheme is purely theoretical.  In fact, I
hadn't even encountered dynamic-wind myself until just a few months ago.
So my guesses about what a Scheme implementer would or would not do must
be taken with a large grain of salt.  It might be possible to create a
conforming Scheme implementation without CPS, but on the other hand,
every time I encounter CWCC I learn something new, so what do I know?

   One question about push_dynscope, though:  Is the
sub_to_call_when_scope_is_finally_inaccessible called when the
Parrot_Context is reclaimed?  If so, why is that needed?

   And getting back to exceptions, I'm seeing something that's pretty much like
   the CL model, where the 'push_eh' opcode takes a _closure_, and the list of
   handlers is its own array in the interpreter, not in the generic control
   stack, and which is called at 'throw' time in the dynamic context of the
   'throw'.  For conventional static languages like Perl 6 (:-)), the handler
   would pretty much report that the exception was handled (e.g. with a
   'caught' opcode) and then invoke a continuation which had been taken by the
   Perl 6 compiler to point to the 'catch' code . . .

That sounds good 

Re: Exceptions, dynamic scope, Scheme, and Lisp: A modest proposal

2006-06-24 Thread Chip Salzenberg
On Sat, Jun 24, 2006 at 11:18:41PM -0400, Bob Rogers wrote:
From: Chip Salzenberg [EMAIL PROTECTED]
Date: Tue, 20 Jun 2006 20:59:45 -0700
 
WRT exception handling, I think the lisp condition/handler model is a good
starting point.  It's simple enough to explain and use, and static models
can easily be implemented in terms of it.
 
 Excellent; I'm sure you won't be surprised to learn that I agree.  ;-}

Consensus is easy to achieve when one party is obviously correct.  :-)

But I really don't like one thing about the CL handler model: it conflates
non-local transfers of control with "this exception is now handled."
 
 FWIW, some pre-ANSI implementations did have a mechanism for marking a
 condition as having been handled [...]

No, nothing like a 'handled' flag in the condition.  It's just a question of
how Parrot should be informed that an exception is caught ... whether all
languages, rather than just CL, should make non-local control flow semantically
significant.

Consider: from Parrot's low-level POV, how could Parrot notice when it's
leaving the dynamic context of a condition handler specifically, so as to
change its internal state from "There's a live condition that's in the process
of being handled" into "Ah, all done then, the handling is over"?  The most
obvious answer involves extra processing to check for the exception-handling
situation, but that would slow down every continuation invocation.

Fortunately, we don't have to go there.  To quote myself:

If dynamic-wind is implemented (see below), it seems to me that a CL
compiler could wrap each handler in a dynamic scope in such a way as to trap
a non-local transfer distinctly from a return, and in the former case,
automatically invoke the hypothetical Parrot 'caught' opcode.  So CL users
get full CL semantics, and everybody gets a faster continuation.invoke()
operation.

In other words: Dynamic-wind processing will be required in every
continuation invocation.  Therefore, if Lisp-style condition handling is
built on D-W, CL will *not* require a special flag/check/slowdown.  In fact,
CL would not be alone in this: most of the exception code for most languages
could be written in PIR.  That's not only nice in itself, but it's a very
good sign for the power of PIR.
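
A toy Scheme sketch of that approach (all names invented; this is not
Parrot's actual API): a dynamic handler stack kept balanced by
dynamic-wind, so an installed handler is popped again no matter how its
body is left, even via a continuation.

   ;; A toy dynamic handler stack; dynamic-wind guarantees the pop
   ;; happens even when the body escapes via a continuation.
   (define *handlers* '())

   (define (with-handler handler thunk)
     (dynamic-wind
       (lambda () (set! *handlers* (cons handler *handlers*)))
       thunk
       (lambda () (set! *handlers* (cdr *handlers*)))))

   ;; Signalling simply calls the innermost handler.
   (define (signal condition)
     (if (null? *handlers*)
         (error "unhandled condition:" condition)
         ((car *handlers*) condition)))

   ;; Even when the handler transfers control non-locally, the exit
   ;; thunk still runs and the stack stays balanced.
   (call-with-current-continuation
     (lambda (escape)
       (with-handler
         (lambda (c) (escape c))                   ; non-local transfer
         (lambda () (signal 'printer-on-fire)
                    (display "not reached")))))
   (display (length *handlers*))                   ; prints 0
   (newline)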

Quoting you out of order:

I even intend to use continuations to implement THROW and CATCH; I
 just won't be able to expose them to users via standard Lisp constructs.
 So, yes, I could install the equivalent of an UNDO block around the Lisp
 code that does whatever Parrot maintenance is required on the Parrot
 exception object (which, it now occurs to me, may need to be distinct
 from the Lisp condition object).  But would I really need to do anything
 here?  If an exception is caught by Lisp, why would Parrot even need to
 know?  S04 seems to require a great deal of bookkeeping for unhandled
 exceptions, but would that necessarily impact Lisp handlers?

It's just a little hack, no big deal.  Imagine this scenario:

  1. Parrot has exceptions
  2. Parrot requires handlers to mark exceptions handled with a 'caught' opcode
  3. Parrot has dynamic-wind

Given:

  (handler-case (signal condition)
 (printer-on-fire () YOUR_FORM_HERE))

Your CL compiler would replace YOUR_FORM_HERE with the equivalent of this,
written in pidgin Scheme:

   (let ((handled #t))
     (dynamic-wind
       ;; entry thunk (none)
       nil

       ;; body thunk (CL handler code goes here)
       (lambda ()
         YOUR_FORM_HERE
         (set! handled #f))

       ;; departure thunk
       (lambda ()
         (when handled            ;; only a non-local transfer could avoid (set! ... #f)
           (parrot-emit caught)   ;; here's where you tell Parrot the exception is handled
           (set! handled #f)))))  ;; ... but we only want to do so once per exception

I suspect that the lexical nature of the 'handled' flag may not match the
interpreter-wide-dynamic nature of the signal stack, leading to incorrect
results with nested signals.  But with that caveat, I think this would work.

Anyway, the point of this whole dance is to implement the CL semantics,
which require you to *detect* and *take special action on* the handler body
making a non-local transfer out of the dynamic scope ... something which you
want, since non-local transfers are semantically significant in the
definition of CL condition handlers (and only CL condition handlers :-)).



Moving on: I may have missed some of the implications of what I'm not
quoting, but this:

 Such an implementation is truly and utterly stackless, which means that
 dynamic-wind needs to keep its own stack explicitly, and similarly for
 dynamic binding (which, IIUC, is generally implemented in terms of
 dynamic-wind).

... actually describes Parrot, present and future.  Parrot doesn't need to
recurse in C to invoke continuations or closures (even if maybe it does in
some cases (weasel word alert)).  And my