Re: Why lexical pads
On Fri, Sep 24, 2004 at 09:30:22AM -0700, Steve Fink wrote: But I agree that it is doing a name lookup in the string eval case. Although if you try it, you get puzzling results: perl -le 'sub x {my $foo = 1; return sub { eval q($foo++) } };$x=x();print $x->(), $x->(), $x->()' prints 012 again. Which confused me, because Perl *can* do named lookups of lexicals. The problem, apparently, is that it's doing the lookup but not finding it. With bleedperl, you'd get $ ./perl -wle 'sub x {my $foo = 1; return sub { eval q($foo++) } };$x=x();print $x->(), $x->(), $x->()' Variable "$foo" is not available at (eval 1) line 1. Variable "$foo" is not available at (eval 2) line 1. Variable "$foo" is not available at (eval 3) line 1. 000 $ -- Now is the discount of our winter tent -- sign seen outside camping shop
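[Editor's aside: the "Variable is not available" behavior above has a close analogue in Python, whose closures likewise capture only the lexicals the compiler sees mentioned. This is a model of the semantics, not Perl internals:]

```python
# Models the bleedperl warning: a closure captures only lexicals it
# mentions in its source, so eval'd code can't find the uncaptured ones.
def make_uncaptured():
    foo = 1
    def inner():
        return eval("foo + 1")   # "foo" appears only inside the string
    return inner

def make_captured():
    foo = 1
    def inner():
        foo                      # a plain mention forces the capture
        return eval("foo + 1")
    return inner

try:
    make_uncaptured()()
    uncaptured_ok = True
except NameError:
    uncaptured_ok = False

print(uncaptured_ok)        # False: eval can't see the uncaptured lexical
print(make_captured()())    # 2: the mention kept the variable reachable
```

This mirrors Steve's "nonsensical use of $foo" workaround exactly: the extra reference is what keeps the variable in the captured scope.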
Re: Why lexical pads
On Sat, Sep 25, 2004 at 02:11:10PM -0400, Chip Salzenberg wrote: : According to Dan Sugalski: : At 12:25 PM -0400 9/25/04, Chip Salzenberg wrote: : my $i is register; : : Except that makes things significantly sub-optimal in the face of : continuations, since registers aren't preserved... : : Well, I know I'd be willing to put in a few register declarations for : inner loops. The intent is that saying things like my int $i; my num $x; could have that very effect (at least, whenever the optimizer decides it wouldn't be a bad idea). The declaration even tells it what kind of register you want so it doesn't have to guess. Though scalar registers only get you so far, even under JIT. My guess is that inner loops will be sped up a lot more by declarations of compact arrays, especially when we get optimized hyperoperators cooking over them, *especially* when we can hand them off to a modern GPU that has a heck of a lot more power than a Cray-1, and just leave those slow scalar registers for the CPU to fiddle around with while the GPU does the heavy lifting. Larry
Re: Why lexical pads
On Sat, Sep 25, 2004 at 10:01:42PM -0700, Larry Wall wrote: : We've also said that MY is a pseudopackage referring to the current : lexical scope so that you can hand off your lexical scope to someone : else to read (but not modify, unless you are currently compiling : yourself). However, random subroutines are not allowed access : to your lexical scope unless you specifically give it to them, : with the exception of $_ (as in 1 above). Otherwise, what's the : point of lexical scoping? Note that this definition of MY as a *view* of the current lexical scope from a particular spot is exactly what we already supply to an C<eval>, so we're not really asking for anything that isn't already needed implicitly. MY is just the general way to invoke the pessimization you would have to do for an C<eval> anyway. Larry
Re: Why lexical pads
On Sep 25, 2004, at 10:27 PM, Larry Wall wrote: On Sat, Sep 25, 2004 at 10:01:42PM -0700, Larry Wall wrote: : We've also said that MY is a pseudopackage referring to the current : lexical scope so that you can hand off your lexical scope to someone : else to read (but not modify, unless you are currently compiling : yourself). However, random subroutines are not allowed access : to your lexical scope unless you specifically give it to them, : with the exception of $_ (as in 1 above). Otherwise, what's the : point of lexical scoping? Note that this definition of MY as a *view* of the current lexical scope from a particular spot is exactly what we already supply to an C<eval>, so we're not really asking for anything that isn't already needed implicitly. MY is just the general way to invoke the pessimization you would have to do for an C<eval> anyway. A mildly interesting thought would be for C<eval> to take additional parameters to make explicit what's visible to the eval'd code--essentially making the running of the code like a subroutine call. So the traditional C<eval> would turn into something like eval $str, MY, but you could also have eval $str, $x, $y, or just eval $str, which would execute in an empty lexical scope. That would allow additional optimizations at compile-time (and make MY the sole transporter of lexical scope), since not every C<eval> would need what MY provides, but even more importantly, it would allow the programmer to protect himself against accidentally referencing a lexical he didn't intend, just because the code in his string coincidentally used the same variable name. More optimization opportunities, and more explicit semantics. But that's now a language issue, so I'm cc-ing this over to there. JEff
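[Editor's aside: Python's built-in eval already works roughly the way Jeff proposes here: the environment is an explicit argument, so the caller decides exactly what the eval'd code may see. A minimal illustration:]

```python
# eval with an explicit environment: the caller chooses visibility,
# analogous to "eval $str, MY" vs. "eval $str" in an empty scope.
code = "x + y"

full_scope = {"x": 1, "y": 2}
print(eval(code, {}, full_scope))   # 3: like handing the eval your MY

try:
    eval(code, {}, {})              # like eval'ing in an empty scope
    saw_leak = True
except NameError:                   # no accidental capture of caller's names
    saw_leak = False
print(saw_leak)                     # False
```

The second call demonstrates Jeff's protection argument: with an empty scope, a coincidental variable name in the string cannot silently bind to a caller's lexical.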
Re: Why lexical pads
Chip Salzenberg [EMAIL PROTECTED] wrote: my $i is register; I See A Great Need. Well, the Perl6 notation is: my int $i; That even specifies which kind of register is used. The caveat WRT continuations still applies. And such natively typed variables aren't stored in the lexical pad. leo
Re: Why lexical pads
According to Jeff Clites: But it's nice to have stuff that a compiler can optimize away in a standard run, and maybe leave in place when running/compiling a debug version [...] my $i is register; I See A Great Need. -- Chip Salzenberg - a.k.a. - [EMAIL PROTECTED] I don't really think it is a question of bright people and dumb people, but rather people who can see the game they're playing and those who can't. -- Joe Cosby
Re: Why lexical pads
At 12:25 PM -0400 9/25/04, Chip Salzenberg wrote: According to Jeff Clites: But it's nice to have stuff that a compiler can optimize away in a standard run, and maybe leave in place when running/compiling a debug version [...] my $i is register; I See A Great Need. Except that makes things significantly sub-optimal in the face of continuations, since registers aren't preserved... -- Dan --it's like this--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk
Re: Why lexical pads
At 7:43 PM -0700 9/24/04, Jeff Clites wrote: On Sep 24, 2004, at 7:32 PM, Dan Sugalski wrote: At 7:28 PM -0700 9/24/04, Jeff Clites wrote: On Sep 24, 2004, at 6:51 PM, Aaron Sherman wrote: However, the point is still sound, and that WILL work in P6, as I understand it. Hmm, that's too bad--it could be quite an opportunity for optimization, if you could use-and-discard lexical information at compile-time, when you know there's no eval around to need it. Even if not it's going in anyway. The introspection abilities are more than worth the extra memory that the name hashes use. It's a compiler issue. Well... sorta. It's a specification issue. *Not* having pads of some sort, whether they're dull, bog-simple activation frames which're just arrays, or the more complex ordered hash system we've gone with, makes continuations untenable. You need a stable backing store, otherwise resuming a continuation's a dodgy thing. (Though that's usually not too big a deal, as you've normally saved registers at any spot where a continuation could be taken, so the risks are minimized) It also makes up-call lexical peeking and modification impossible. This is something Larry's specified Perl 6 code will be able to do. That is, any routine should be able to inspect the environment of its caller, and modify that environment, regardless of where the caller came from. That means that you can't really optimize it away at compile time, since you can't know at compile time what your call path is going to look like. (To a point. Leaf subs and methods can know, if we stipulate that vtable methods are on their own, which is OK with me) (And I'm less worried about the memory than I am about all of the pushing and popping and by-name stores and lookups, which could be optimized away to just register usage.) There shouldn't be much, if any, pushing and popping for stuff like this, and access to lexical pads should be an O(1) operation.
Remember, you know at compile time what lexical variables are in scope which means if you have a data structure which can be accessed by name and index (like, say, the OrderedHash PMC clas...) most, if not all, of the access to lexicals will be by integer index. That access is also likely to be a one-time deal -- since PMCs are all dealt with by pointer you only need to fetch the pointer out of the store (or put it in for a new PMC) once. It's unlikely that any sub or method's going to find its runtime dominated by the time it takes to store PMC pointers into an array... :) -- Dan --it's like this--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk
Re: Why lexical pads
According to Dan Sugalski: That is, any routine should be able to inspect the environment of its caller, and modify that environment, regardless of where the caller came from. Understood. Leaf subs and methods can know [their call paths], if we stipulate that vtable methods are on their own, which is OK with me. So, given this sub and tied $*var: sub getvar { my $i = rand; $*var } the FETCH method implementing $*var might not be able to see $i? Which implies that there may be no pad and $i could be in a register? -- Chip Salzenberg - a.k.a. - [EMAIL PROTECTED] I don't really think it is a question of bright people and dumb people, but rather people who can see the game they're playing and those who can't. -- Joe Cosby
Re: Why lexical pads
According to Dan Sugalski: At 12:25 PM -0400 9/25/04, Chip Salzenberg wrote: my $i is register; Except that makes things significantly sub-optimal in the face of continuations, since registers aren't preserved... Well, I know I'd be willing to put in a few register declarations for inner loops. -- Chip Salzenberg - a.k.a. - [EMAIL PROTECTED] I don't really think it is a question of bright people and dumb people, but rather people who can see the game they're playing and those who can't. -- Joe Cosby
Re: Why lexical pads
At 2:10 PM -0400 9/25/04, Chip Salzenberg wrote: According to Dan Sugalski: Leaf subs and methods can know [their call paths], if we stipulate that vtable methods are on their own, which is OK with me. So, given this sub and tied $*var: sub getvar { my $i = rand; $*var } the FETCH method implementing $*var might not be able to see $i? Which implies that there may be no pad and $i could be in a register? Yeah, I think that's OK. I'm certainly OK with it, though there is an appeal to introspection. (Since you might want to fiddle with things in a debugger) -- Dan --it's like this--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk
Re: Why lexical pads
On Sep 25, 2004, at 10:14 AM, Dan Sugalski wrote: At 7:43 PM -0700 9/24/04, Jeff Clites wrote: On Sep 24, 2004, at 7:32 PM, Dan Sugalski wrote: At 7:28 PM -0700 9/24/04, Jeff Clites wrote: On Sep 24, 2004, at 6:51 PM, Aaron Sherman wrote: However, the point is still sound, and that WILL work in P6, as I understand it. Hmm, that's too bad--it could be quite an opportunity for optimization, if you could use-and-discard lexical information at compile-time, when you know there's no eval around to need it. Even if not it's going in anyway. The introspection abilities are more than worth the extra memory that the name hashes use. It's a compiler issue. Well... sorta. It's a specification issue. *Not* having pads of some sort, whether they're dull, bog-simple activation frames which're just arrays, or the more complex ordered hash system we've gone with, makes continuations untenable. You need a stable backing store, otherwise resuming a continuation's a dodgy thing. Wait--I think we're talking about two different things. I'm talking about optimizing away nested scopes _within_ a sub; for instance, consider:

    sub foo {
        my $x; ... $x = 2;
        if(blah) {
            my $x; ... $x = 3;
            if(goo) {
                my $x; ... $x = 4;
            }
            ++$x;
        }
        ++$x;
    }

At each curly-brace, a new pad needs to be pushed, and something stored into its $x slot (maybe just a pointer to a PerlReference also stored in a register), and at the end the pad needs to be popped (and presumably, later garbage-collected). But, if the semantics of your language are such that you can't look up a lexical by name, then the fact that all 3 of those variables have the same name doesn't matter--you could change the names to $x, $y, $z, and the code would behave identically. That is, all the consequences of the lexical structure can be worked out at compile-time, and it doesn't need to be preserved at run-time.
So a compiler could compile the above into just some register stores and branches--no need to push new lexical pads while inside the body of the sub, and that should be _much_ faster without the pad manipulation overhead. (But if there could be something like eval inside of those ... sections, then you _do_ have to save it all.) [And if, for continuations, those registers need to be stored away to something else, that's still only one pad needed per sub, rather than one needed per lexical scope within a sub.] But by "It's a compiler issue", I mean that parrot doesn't get to decide if those curly braces are supposed to mean something at runtime--the compiler makes the decision as to whether it needs to emit pad-manipulation ops. (And it goes without saying that the compiler is bound by the semantics of the language it's compiling) As I already said, I'm not arguing that Parrot shouldn't have lexical pads, just that languages with appropriate semantics can benefit greatly in cases where they don't have to use them. (And for example, I believe I've heard of languages which have something like eval, but for which the execution of the eval'd string doesn't occur in the lexical scope where it's invoked, so they never need by-name lookups of lexicals. That's essentially what you'd get in Perl5 if you had sub myEval { eval $_[0] }, and only ever called myEval, and not eval directly.) It also makes up-call lexical peeking and modification impossible. This is something Larry's specified Perl 6 code will be able to do. That is, any routine should be able to inspect the environment of its caller, and modify that environment, regardless of where the caller came from. If I'm interpreting that correctly, then it may just have the consequence that the Perl6 compiler can perform fewer optimizations than others.
(And if the body of a sub can reference and modify lexicals in its caller, then it sounds like they're not really *lexicals*, which is confusing) Hmm, that also precludes part of what tail-call optimization usually gives you, since you can't reuse the current stack frame, if the called sub is supposed to be able to look back at the caller's state, so you lose the potential for unbounded recursion. If that's all correct, Perl6 is giving up a lot for that peeking feature. (And I'm less worried about the memory than I am about all of the pushing and popping and by-name stores and lookups, which could be optimized away to just register usage.) There shouldn't be much, if any, pushing and popping for stuff like this, and access to lexical pads should be an O(1) operation. Remember, you know at compile time what lexical variables are in scope which means if you have a data structure which can be accessed by name and index (like, say, the OrderedHash PMC clas...) most, if not all, of the access
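[Editor's aside: the scope flattening Jeff describes amounts to alpha-renaming: give each declaration a unique slot at compile time, and the nested pads disappear. A toy renamer (an illustration in Python, not Parrot code) over a simple nested-block representation:]

```python
# Toy alpha-renamer: walks nested blocks, gives every declaration a
# fresh slot, and resolves each use to the innermost declaration.
def flatten(block, scopes=None, counter=None, out=None):
    scopes = [] if scopes is None else scopes
    counter = counter if counter is not None else [0]
    out = [] if out is None else out
    scopes.append({})                      # entering a { ... } block
    for item in block:
        if isinstance(item, list):         # a nested block
            flatten(item, scopes, counter, out)
        elif item[0] == "decl":            # ("decl", "$x") -> new slot
            slot = "r%d" % counter[0]
            counter[0] += 1
            scopes[-1][item[1]] = slot
            out.append(("decl", slot))
        else:                              # ("use", "$x") -> innermost match
            for scope in reversed(scopes):
                if item[1] in scope:
                    out.append(("use", scope[item[1]]))
                    break
    scopes.pop()                           # leaving the block
    return out

# Like:  sub foo { my $x; $x; { my $x; $x } $x }
prog = [("decl", "$x"), ("use", "$x"),
        [("decl", "$x"), ("use", "$x")],
        ("use", "$x")]
print(flatten(prog))
# [('decl','r0'), ('use','r0'), ('decl','r1'), ('use','r1'), ('use','r0')]
```

Once every use is bound to a unique slot, the three same-named $x variables are just three registers, which is exactly why by-name lookup (eval, MY) forces the pads to be kept.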
Re: Why lexical pads
On Sep 25, 2004, at 11:15 AM, Dan Sugalski wrote: At 2:10 PM -0400 9/25/04, Chip Salzenberg wrote: According to Dan Sugalski: Leaf subs and methods can know [their call paths], if we stipulate that vtable methods are on their own, which is OK with me. So, given this sub and tied $*var: sub getvar { my $i = rand; $*var } the FETCH method implementing $*var might not be able to see $i? Which implies that there may be no pad and $i could be in a register? Yeah, I think that's OK. I'm certainly OK with it, though there is an appeal to introspection. (Since you might want to fiddle with things in a debugger) But for debugging, you'd want to be able to compile with optimizations disabled, so there's no real problem there. (And also, a clever-enough debugger might be able to let you do by-name manipulations of things stored in registers--you just need to preserve enough information at compile-time, in a form the debugger can use, like a separate symbols file). JEff
Why lexical pads
Hello, I've been wondering for some time about this, so I thought, why not ask. The thing is, I've been playing a few times with (Parrot, but also .NET) compilers, and my conclusion was that the most difficult part is getting assignments right (when by value, when by ref, etc.). (That is, any construct, such as while, is only a set of labels; the most important thing is assignments. Even translating function calls is easier than assignments.) Anyway, when one creates a simple language, compiling local variables can easily be done through PIR's .local syntax. However, when assigning to locals, you're really just assigning to registers, not actually storing variables in local pads. (And when registers run out, they're being spilled to an array in P31, right?) So, my question is, why would one need lexical pads anyway (why are they there)? Klaas-Jan
Re: Why lexical pads
On Fri, Sep 24, 2004 at 04:03:46PM +0200, KJ wrote: Hello, I've been wondering for some time about this, so I thought, why not ask. The thing is, I've been playing a few times with (Parrot, but also .NET) compilers, and my conclusion was that the most difficult part is getting assignments right (when by value, when by ref, etc.). (that is, any construct, such as while, is only a set of labels, the most important thing is assignments. Even translating function calls are easier than assignments). Anyway, when one creates a simple language, compiling local variables can easily be done through PIR's .local syntax. However, when assigning to locals, you're really just assigning to registers, not actually storing variables in local pads. (and when registers run out, they're being spilled to an array in P31, right?). So, my question is, why would one need lexical pads anyway (why are they there)? Klaas-Jan If your language has no equivalent of string eval, lexical values can live directly in registers. If it has some string eval statement, you need to provide the appropriate context when such an eval is used. That's what scratchpads are about. Hopefully, in many places, we will be able to do without scratchpads at run time. -- stef
Re: Why lexical pads
On Fri, 2004-09-24 at 10:03, KJ wrote: So, my question is, why would one need lexical pads anyway (why are they there)? They are there so that variables can be found by name in a lexically scoped way. One example, in Perl 5, of this need is: my $foo = 1; return sub { $foo ++ }; Here, you keep this pad around for use by the anon sub (and anyone else who still has access to that lexical scope) to find and modify the same $foo every time. In this case it doesn't look like a by-name lookup, and once optimized, it probably won't be, but remember that you are allowed to say: perl -le 'sub x {my $foo = 1; return sub { ${foo}++ } }$x=x();print $x->(), $x->(), $x->()' Which prints 012 because of the ability to find foo by name. Of course, you can emulate this behavior, but in doing so, you're going to have to invent the pad :) Someone else suggested that you need this for string eval, but you don't really. You need it for by-name lookups, which string evals just happen to also need. If you can't do by-name lookups, then string eval doesn't need pads (and thus won't be able to access locals). -- 781-324-3772 [EMAIL PROTECTED] http://www.ajs.com/~ajs
Re: Why lexical pads
At 4:03 PM +0200 9/24/04, KJ wrote: So, my question is, why would one need lexical pads anyway (why are they there)? Pads do three things. First, as has been pointed out, they store sufficient metadata so string evals (that's where code gets compiled on the fly and accesses the surrounding environment) work out properly. It's possible to get around this other ways, but it's a pain. Secondly, they provide that metadata for introspection, which is also quite nice, albeit of limited utility outside of on the fly compilation. Third, and most important, they're needed for closures. Without pads of some sort you can't do closures. (They may be called different things, but they're the same) -- Dan --it's like this--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk
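[Editor's aside: Dan's third point, that closures need some pad-like backing store, can be modeled in Python, where the shared cell stands in for the pad slot that outlives the enclosing call:]

```python
# Two closures over the same "pad slot": both see updates, because the
# variable lives in shared storage that outlives the creating call.
def make_pair():
    count = 0
    def bump():
        nonlocal count      # write through to the shared slot
        count += 1
        return count
    def peek():
        return count        # reads the same slot
    return bump, peek

bump, peek = make_pair()
bump()
bump()
print(peek())               # 2: one slot, shared by both closures
```

Whatever the mechanism is called (pads, cells, environments), the essential property is the same: the variable's storage is owned by the closures, not by the stack frame that created it.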
Re: Why lexical pads
On Sep-24, Aaron Sherman wrote: On Fri, 2004-09-24 at 10:03, KJ wrote: So, my question is, why would one need lexical pads anyway (why are they there)? They are there so that variables can be found by name in a lexically scoped way. One example, in Perl 5, of this need is: my $foo = 1; return sub { $foo ++ }; Here, you keep this pad around for use by the anon sub (and anyone else who still has access to that lexical scope) to find and modify the same $foo every time. In this case it doesn't look like a by-name lookup, and once optimized, it probably won't be, but remember that you are allowed to say: perl -le 'sub x {my $foo = 1; return sub { ${foo}++ } }$x=x();print $x->(), $x->(), $x->()' Which prints 012 because of the ability to find foo by name. Umm, maybe I'm confused, but I'd say that your example prints 012 because of the *inability* to find foo by name. If it could find foo by name, it would be printing 123. Your snippet is actually finding the global $main::foo, not the lexical $foo. But I agree that it is doing a name lookup in the string eval case. Although if you try it, you get puzzling results: perl -le 'sub x {my $foo = 1; return sub { eval q($foo++) } };$x=x();print $x->(), $x->(), $x->()' prints 012 again. Which confused me, because Perl *can* do named lookups of lexicals. The problem, apparently, is that it's doing the lookup but not finding it. If you add in a nonsensical use of $foo to make sure it sticks around to be found, it works: perl -le 'sub x {my $foo = 1; return sub { $foo; eval q($foo++) } };$x=x();print $x->(), $x->(), $x->()' Now apparently the closure captures the lexical $foo, and thus the eval is able to find it. On the other hand, your original example still doesn't work, and I think that's because symbolic references do not do pad lookups: perl -le 'sub x {my $foo = 1; return sub { $foo; ${foo}++ } }$x=x();print $x->(), $x->(), $x->()' still prints 012. Yep.
From perlref: Only package variables (globals, even if localized) are visible to symbolic references. Lexical variables (declared with my()) aren't in a symbol table, and thus are invisible to this mechanism. For example:
Re: Why lexical pads
On Sep 24, 2004, at 8:07 AM, Aaron Sherman wrote: On Fri, 2004-09-24 at 10:03, KJ wrote: So, my question is, why would one need lexical pads anyway (why are they there)? They are there so that variables can be found by name in a lexically scoped way. One example, in Perl 5, of this need is: my $foo = 1; return sub { $foo ++ }; Here, you keep this pad around for use by the anon sub (and anyone else who still has access to that lexical scope) to find and modify the same $foo every time. In this case it doesn't look like a by-name lookup, and once optimized, it probably won't be, but remember that you are allowed to say: perl -le 'sub x {my $foo = 1; return sub { ${foo}++ } }$x=x();print $x->(), $x->(), $x->()' Which prints 012 because of the ability to find foo by name. Ha, your example is actually wrong (but tricked me for a second). Here's a simpler case to demonstrate that you can't look up lexicals by name (in Perl5): % perl -le '$x = 2; print ${x}' 2 % perl -le 'my $x = 2; print ${x}' (printed nothing) The first case prints 2 because $x is a global there; in the second case, it's a lexical, and ${x} is looking for a global. In your example, ${foo} is actually addressing the global $foo, not your lexical. To demonstrate, both this: perl -le 'sub x {my $foo = 7; return sub { ${foo}++ } }$x=x();print $x->(), $x->(), $x->()' and this: perl -le 'sub x { return sub { ${foo}++ } }$x=x();print $x->(), $x->(), $x->()' also print 012. (Your example should have printed 123, if the lexical $foo had been what the closure was incrementing.) Someone else suggested that you need this for string eval, but you don't really. You need it for by-name lookups, which string evals just happen to also need. If you can't do by-name lookups, then string eval doesn't need pads (and thus won't be able to access locals).
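[Editor's aside: Jeff's distinction — the symbolic ref ${foo} hits the global, never the lexical — has a direct Python analogue, since globals() supports by-name lookup but lexicals are resolved at compile time:]

```python
# Global vs. lexical lookup, modeling the ${foo} symbolic-ref trap.
foo = 0                           # the "global $main::foo"

def x():
    foo = 1                       # the lexical; shadows only compiled uses
    def by_capture():
        return foo                # compile-time binding: sees the lexical
    def by_name():
        return globals()["foo"]   # like ${foo}: only globals are visible
    return by_capture, by_name

cap, nam = x()
print(cap())    # 1: the lexical
print(nam())    # 0: the global, just as Aaron's one-liner hit $main::foo
```

The by-name path simply has no way to reach the lexical, which is why the one-liners count up from 0 instead of from 1.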
String eval is special because without it, you can tell at compile time all of the places where a lexical is used (ie, you can trace all of the places it's used back to which declaration matches them), and obviate the need for a by-name lookup (since Perl5 doesn't allow explicit by-name lookups of lexicals). For string evals to work in the lexical scope in which they occur, you do have to have by-name lookups behind-the-scenes. I believe that's all correct. And where this could be a savings is that, in any lexical scope in which there is no eval visible (looking down the tree of nested lexical scopes), then you don't need to save the name-to-variable mapping in nested pads. Add a call to eval, and you need to save a lot more stuff. JEff
Re: Why lexical pads
On Fri, 2004-09-24 at 12:36, Jeff Clites wrote: Ha, your example is actually wrong (but tricked me for a second). Here's a simpler case to demonstrate that you can't look up lexicals by name (in Perl5): You are, of course, correct. If I'd been ignorant of that in the first place, this would be much less embarrassing ;-) However, the point is still sound, and that WILL work in P6, as I understand it. -- Aaron Sherman [EMAIL PROTECTED] Senior Systems Engineer and Toolsmith It's the sound of a satellite saying, 'get me down!' -Shriekback
Re: Why lexical pads
On Sep 24, 2004, at 6:51 PM, Aaron Sherman wrote: On Fri, 2004-09-24 at 12:36, Jeff Clites wrote: Ha, you're example is actually wrong (but tricked me for a second). Here's a simpler case to demonstrate that you can't look up lexicals by name (in Perl5): You are, of course, correct. If I'd been ignorant of that in the first place, this would be much less embarassing ;-) No need to be embarrassed--it's easy to trick yourself. (I had forgotten myself, until I recently tried it while thinking whether lexical pads really needed a by-name API.) However, the point is still sound, and that WILL work in P6, as I understand it. Hmm, that's too bad--it could be quite an opportunity for optimization, if you could use-and-discard lexical information at compile-time, when you know there's no eval around to need it. JEff
Re: Why lexical pads
At 7:28 PM -0700 9/24/04, Jeff Clites wrote: On Sep 24, 2004, at 6:51 PM, Aaron Sherman wrote: However, the point is still sound, and that WILL work in P6, as I understand it. Hmm, that's too bad--it could be quite an opportunity for optimization, if you could use-and-discard lexical information at compile-time, when you know there's no eval around to need it. Even if not it's going in anyway. The introspection abilities are more than worth the extra memory that the name hashes use. (And perl 6's going to be injecting things up-scope in some circumstances, so you can't even tell at compile-time) -- Dan --it's like this--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk
Re: Why lexical pads
On Sep 24, 2004, at 7:32 PM, Dan Sugalski wrote: At 7:28 PM -0700 9/24/04, Jeff Clites wrote: On Sep 24, 2004, at 6:51 PM, Aaron Sherman wrote: However, the point is still sound, and that WILL work in P6, as I understand it. Hmm, that's too bad--it could be quite an opportunity for optimization, if you could use-and-discard lexical information at compile-time, when you know there's no eval around to need it. Even if not it's going in anyway. The introspection abilities are more than worth the extra memory that the name hashes use. It's a compiler issue. You're right that no matter what, you need lexical pads as a feature in Parrot for...those cases where you need lexical pads. But it's nice to have stuff that a compiler can optimize away in a standard run, and maybe leave in place when running/compiling a debug version--but that's a matter of the semantics of the language. (And I'm less worried about the memory than I am about all of the pushing and popping and by-name stores and lookups, which could be optimized away to just register usage.) JEff
Q: Sub vs Closure lexical pads
While trying to generate a small example that shows the memory corruption problem reported by Steve, I came across these issues:

a) [1] is .Sub, [2] is turned off
   The subroutine prints main's $m - very likely wrong.
   Q: Should the Sub get a NULL scratch pad, or a new empty scratch pad stack?
b) [1] is .Closure, [2] is turned off
   The closure prints "main" - probably ok
c) [1] is .Closure, [2] is new_pad 0
   Lexical '$m' not found
d) [1] is .Closure, [2] is new_pad 1
   prints "main"

Q: What is correct?

  .sub _main
      new_pad 0
      new $P0, .PerlString
      set $P0, "main\n"
      store_lex -1, "$m", $P0
      .sym pmc foo
      newsub foo, .Sub, _foo    # [1]
      .pcc_begin prototyped
      .pcc_call foo
      .pcc_end
      pop_pad
      end
  .end

  .sub _foo prototyped
      # new_pad 1               # [2]
      find_lex $P0, "$m"
      print $P0
      # pop_pad
      .pcc_begin_return
      .pcc_end_return
  .end

leo
Re: Q: Sub vs Closure lexical pads
Leopold Toetsch writes: While trying to generate a small example that shows the memory corruption problem reported by Steve, I came across these issues: a) [1] is .Sub, [2] is turned off The subroutine prints main's $m - very likely wrong. Well, Subs don't do anything with pads, so I'd say this is correct. Q: Should the Sub get a NULL scratch pad, or a new empty scratch pad stack? Or just keep the existing one like current subs do. A top-level sub should be a closure under an empty pad stack. The following correspond to Perl code (approximately):

b) [1] is .Closure, [2] is turned off
   The closure prints "main" - probably ok
       eval 'my $m = "main\n"; ' . 'print $m';
c) [1] is .Closure, [2] is new_pad 0
   Lexical '$m' not found
       sub foo { print $m }
       { my $m = "main\n"; foo() }
d) [1] is .Closure, [2] is new_pad 1
   prints "main"
       my $m = "main\n"; sub { print $m }->();

Q: What is correct? It looks to me as though they all are. Luke

  .sub _main
      new_pad 0
      new $P0, .PerlString
      set $P0, "main\n"
      store_lex -1, "$m", $P0
      .sym pmc foo
      newsub foo, .Sub, _foo    # [1]
      .pcc_begin prototyped
      .pcc_call foo
      .pcc_end
      pop_pad
      end
  .end

  .sub _foo prototyped
      # new_pad 1               # [2]
      find_lex $P0, "$m"
      print $P0
      # pop_pad
      .pcc_begin_return
      .pcc_end_return
  .end

leo
Re: Lexical Pads
Will Coleda writes: Can someone explain to me the lexical pad stack and static nesting depth? I'm trying to write global for tcl, and trying to pull out a variable from the outermost pad, and failing to find it. - I'm fairly certain this is because I'm abusing new_pad and store_lex (always using 0 as the static nesting depth). Works fine when all I care about is the current pad - but getting to variables elsewhere in the pad stack results in a lexical not found error. Do I need to manually keep track of my nesting depth? If so, what's the rationale? (why have the stack if you also have the nesting depth?) Because the stack is dynamic, while the nesting depth is static/lexical. That is, you keep track of the nesting depth only at compile-time. If you want globals to just be lexicals at the top level, that's fine. Just do a C<new_pad 0> at the beginning of your program, and then C<store_lex 0> for each global. I don't know TCL, but if you have other lexical scopes, start at C<new_pad 1> and use C<find_lex -1> for all lexicals. But to play nice with other languages, you should probably use C<store_global> and C<find_global> for globals. Luke Heading off to experiment... -- Will "Coke" Coleda will at coleda dot com
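[Editor's aside: my reading of the new_pad/store_lex/find_lex semantics Luke describes, modeled in Python. The depth handling and search order here are assumptions inferred from this thread, not from the ops documentation:]

```python
# Toy pad stack: new_pad(depth) truncates to that static depth and pushes
# a fresh pad; store_lex writes by depth (-1 = current pad); find_lex
# searches from the innermost pad outward.
class PadStack:
    def __init__(self):
        self.pads = []

    def new_pad(self, depth):
        del self.pads[depth:]            # drop any deeper pads...
        self.pads.append({})             # ...then push the new one

    def store_lex(self, depth, name, value):
        self.pads[depth][name] = value   # -1 addresses the current pad

    def find_lex(self, name):
        for pad in reversed(self.pads):  # innermost scope wins
            if name in pad:
                return pad[name]
        raise KeyError("Lexical %r not found" % name)

ps = PadStack()
ps.new_pad(0)                        # top-level pad, like Tcl globals
ps.store_lex(0, "$global", "top")
ps.new_pad(1)                        # entering an inner scope
ps.store_lex(-1, "$m", "inner")
print(ps.find_lex("$m"))             # inner
print(ps.find_lex("$global"))        # top: found by walking outward
```

This also illustrates Luke's answer to "why have the stack if you also have the nesting depth": the depth is a compile-time coordinate for stores, while the stack is the run-time structure that find_lex walks.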
Re: Lexical Pads
On Oct 16, 2003, at 10:53 PM, Will Coleda wrote: I'm trying to write global for tcl, and trying to pull out a variable from the outermost pad, and failing to find it. Globals aren't stored in the lexical pads--there are find_global and store_global ops (see var.ops)--is that what you are looking for? JEff
Lexical Pads
Can someone explain to me the lexical pad stack and static nesting depth? I'm trying to write global for tcl, and trying to pull out a variable from the outermost pad, and failing to find it. - I'm fairly certain this is because I'm abusing new_pad and store_lex (always using 0 as the static nesting depth). Works fine when all I care about is the current pad - but getting to variables elsewhere in the pad stack results in a lexical not found error. Do I need to manually keep track of my nesting depth? If so, what's the rationale? (why have the stack if you also have the nesting depth?) Heading off to experiment... -- Will "Coke" Coleda will at coleda dot com
Lexical Pads (was: [perl #22767] ...)
Dan Sugalski [EMAIL PROTECTED] wrote: Pads shouldn't really be stacks, they should be plain linked lists. A plain linked list is lacking the chunk allocation scheme. I'm not sure if allocating a buffer_like structure and using Parrot_allocate for the pad memory is better than the current stack-based implementation. But I still don't know how the control stack, the exception objects located there, and scopes are playing together. Pads and namespaces are singly-linked lists, which we need to deal with separately. WRT namespaces: pdd06 has named global tables and indexed access for globals in such a table. This is similar to the indexed access in lexical pads. The implementation of both is totally different, though. First we should know if the HLL is likely to access lexicals/globals by name, by index, or mixed ... Then such a pad or namespace table probably should be a separate PMC class with a hash and an array inside. So manipulation of these could be done with normal opcodes + some special shortcut ops for common operations. leo
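[Editor's aside: the hash-plus-array pad PMC Leo sketches — by-name and by-index access over the same storage — might look like the following. This is a model, not the actual OrderedHash PMC:]

```python
# A pad answering both lookups: names map to slot indices fixed at
# "compile time", so runtime access can take the cheap integer path.
class OrderedPad:
    def __init__(self):
        self.index = {}      # name -> slot, the hash part
        self.slots = []      # slot -> value (PMC pointer), the array part

    def declare(self, name):             # done once, at compile time
        self.index[name] = len(self.slots)
        self.slots.append(None)
        return self.index[name]

    def store_by_index(self, i, value):  # the fast path compilers emit
        self.slots[i] = value

    def fetch_by_index(self, i):
        return self.slots[i]

    def fetch_by_name(self, name):       # slow path, for eval/introspection
        return self.slots[self.index[name]]

pad = OrderedPad()
i = pad.declare("$foo")          # the compiler resolves the name once...
pad.store_by_index(i, 42)        # ...and emits integer-indexed access
print(pad.fetch_by_index(i))     # 42
print(pad.fetch_by_name("$foo")) # 42: same slot via the by-name path
```

This is the property Dan relies on earlier in the thread: with names resolved to indices at compile time, most lexical access is an O(1) array fetch, and the hash only matters for string eval and introspection.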