Re: Perl6 perlplexities
Rob Kinyon [EMAIL PROTECTED] writes:

> First-class blocks make continuations and coros almost negligible to
> implement from an API perspective. Almost makes me wonder how much
> trouble it would be to implement this in P5 ...

Um... tosh. Seriously. Full continuations need some fairly serious
retooling of the call stack if they are to work properly. And one-shot
continuations are the next best thing to useless.

-- 
Piers Cawley [EMAIL PROTECTED]
http://www.bofh.org.uk/
Chained buts optimizations?
This question came out of a joking comment on IRC, but it's a serious
concern. Can chained buts be optimized, or must the compiler strictly
create intermediate metaclasses, classes and objects in the following:

    my $a = $b but C but D but E but F;

The difference is between:

    my $tmprole = role { is $b.meta.role; does C; does D; does E; does F; };
    my $a = $b but $tmprole;

and

    my $tmpbc = $b but C;
    my $tmpbcd = $tmpbc but D;
    my $tmpbcde = $tmpbcd but E;
    my $a = $tmpbcde but F;

In the second example, constructors are called 4 times, but in the first
case constructors are called only once... sort of. That holds until you
get to talking about constructors on the metaclasses, but metaclass
constructors still make my head hurt. The same goes for destructors, but
I don't think those get called until everything goes away (since there's
a reference chain between them).

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback
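The difference Aaron describes can be sketched outside Perl 6. Below is a
minimal Python model (plain classes stand in for roles C, D, E, F; the
mixin helpers are illustrative inventions, not Perl 6 semantics) showing
that the chained form runs a constructor per step while the batched form
runs it once:

```python
# Hedged sketch, not Perl 6: Python classes stand in for roles, and
# type() builds the anonymous intermediate classes the compiler would.

CONSTRUCTIONS = 0

class Base:
    def __init__(self):
        global CONSTRUCTIONS
        CONSTRUCTIONS += 1      # count how often construction runs

class C: pass
class D: pass
class E: pass
class F: pass

def mixin_chained(obj, *roles):
    """One intermediate class and clone per role: the un-optimized reading."""
    for role in roles:
        cls = type(type(obj).__name__ + "+" + role.__name__,
                   (role, type(obj)), {})
        new = cls()                     # constructor runs at every step
        new.__dict__.update(obj.__dict__)
        obj = new
    return obj

def mixin_batched(obj, *roles):
    """All roles folded into one anonymous class: the optimized form."""
    cls = type("anon", (*roles, type(obj)), {})
    new = cls()                         # constructor runs exactly once
    new.__dict__.update(obj.__dict__)
    return new

b = Base()                              # 1 construction
chained = mixin_chained(b, C, D, E, F)  # 4 more constructions
after_chained = CONSTRUCTIONS
batched = mixin_batched(b, C, D, E, F)  # only 1 more
assert after_chained == 5
assert CONSTRUCTIONS == 6
assert all(isinstance(chained, r) and isinstance(batched, r)
           for r in (C, D, E, F))
```

Both results answer to all four roles; only the number of constructor
calls differs, which is exactly the observable difference the thread is
arguing about.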
This week's summary
The Perl 6 Summary for the fortnight ending 2005-11-13

Welcome to another fortnight's worth of summary. We'll get back to a
weekly schedule one of these fine days, you see if we don't.

This fortnight in perl6-compiler

There was a surprisingly large amount of activity on the list, but
again, the place to look for perl6 compiler news is the Planet Perl Six
aggregator.

    http://planetsix.perl.org/

PGE improvements and changes

Patrick announced that he'd checked in some major changes to the PGE
internals. The changes include a shiny new shift-reduce operator
precedence parser, which is used to parse the rules themselves. PGE
finally has a p6rule parsing rule which can be used to parse a valid
Perl 6 rule. There are other changes, but those two are the headlines.
Patrick asked for the usual questions, comments, patches and tests. A
couple of days later, he posted a more comprehensive overview of the new
and shiny bits in PGE.

    http://xrl.us/ifuy
    http://xrl.us/ifuz

PGE problem with non-greedy quantifiers

Allison fell foul of some changes in the new PGE. This turned out to be
a bug in PGE, so Patrick fixed it.

    http://xrl.us/ifu2

The meaning of \n and \N

Noting that Synopsis 5 says that '\n now matches a logical (platform
independent) newline, not just \012', Patrick asked the list for more
details about what that should mean so he could get on and implement it
in PGE. He offered up a suggested matching rule. Larry thought that the
suggested rule was close enough for jazz.

    http://xrl.us/ifu3

[] and () on rule modifiers

Patrick continues to work on the PGE. This time he asked about the
behaviour of rule modifiers, with particular reference to the :w
modifier. Larry had answers.

    http://xrl.us/ifu4

Parrot 0.3.1 Wart released

Leo announced the release of Parrot 0.3.1 Wart, complete with shiny new
features like variable sized register frames and no more spilling, a
much better PGE (see above) and other goodies.
The latest release has more than 3000 tests, and that's probably still
not enough.

    http://xrl.us/ifu5

Octal in p6rules (and strings)

Patrick continued his voyage of stringy discovery, this time asking
about the black art of specifying glyphs/bytes/whatever using octal
notation. He wondered about his assumption that the correct way to do it
is with \o123, by analogy with using 0o123 to specify a number in octal.
He also wanted confirmation that the \nnn notation had been dropped. A
surprisingly long discussion ensued as Larry did a good deal of thinking
aloud and Patrick got on with implementing the nailed-down bits.

    http://xrl.us/ifu6

Meanwhile, in perl6-internals

SWIGging Parrot

John Lenz is one of the developers of SWIG, which started off as the
Python equivalent to Perl's XS. He had some questions about writing a
SWIG module for Parrot and asked if there would be interest in having
SWIG be one of the 'official' ways of doing native calls from Parrot.
Leo thought not, pointing out that Parrot's NCI is fully dynamic and
groovy.

    http://xrl.us/ifu7

NCI using ffcall library

Garrett Goebel joined in the ongoing discussion of using ffcall to
implement the Parrot NCI (Native Call Interface) by pointing back to an
earlier discussion of using libffi for the same purpose. Last time
round, Dan had pointed out that, because libffi is an external library,
there still needs to be a supported (if possibly hackish) way of doing
NCI that comes with Parrot, but that configure could probe for external
libraries to use where they are available.

    http://xrl.us/ifu8

Heredocs in function calls

Patrick wondered if there might be a convenient way to support heredoc
parameters in PIR function calls. Nicholas Clark wondered why one would
bother, since most PIR code should be generated code. Later on, Leo
implemented them. About the only place they don't work now is in macro
arguments.
    http://xrl.us/ifu9
    http://xrl.us/ifva

Simple register allocation

Summarizing a discussion on IRC, Patrick noted that it would be nice if
the PIR compiler had a way to use a very basic register allocation for
.subs that only use a small number of registers. After all, there's
little point in doing a complex analysis of control flow if a sub uses
(say) 5 registers at most. The problem is that this analysis gets harder
as the subs get longer (O(n) on the length of the sub). In the case of
PGE (for instance), the subs can get very long, with lots of control
flow statements, but use a maximum of 10 PMC, 9 int and 4 string
registers for the whole thing. Warnock applies.

    http://xrl.us/ifvb

Careful with that bsr, Eugene

Leo noted that, with the introduction of variable sized register frames,
it is no longer
Re: Chained buts optimizations?
On 11/15/05, Aaron Sherman [EMAIL PROTECTED] wrote:
> This question came out of a joking comment on IRC, but it's a serious
> concern. Can chained buts be optimized, or must the compiler strictly
> create intermediate metaclasses, classes and objects in the following:
>
>     my $a = $b but C but D but E but F;

Certainly. The semantics should be precisely equivalent either way
(constructors don't get called during a rebless, I think).

Luke
Re: Chained buts optimizations?
On Tue, 2005-11-15 at 12:30, Luke Palmer wrote:
> On 11/15/05, Aaron Sherman [EMAIL PROTECTED] wrote:
> > This question came out of a joking comment on IRC, but it's a serious
> > concern. Can chained buts be optimized, or must the compiler strictly
> > create intermediate metaclasses, classes and objects in the following:
> >
> >     my $a = $b but C but D but E but F;
>
> Certainly. The semantics should be precisely equivalent either way
> (constructors don't get called during a rebless, I think).

So, are you saying that:

    $a = $b but C;

is:

    $a = $b.clone.rebless(class {is $b.meta.class; does C;});

? Or are you referring to does instead of but (which creates a new
object)? If you are saying the former, then would:

    $a = $b but C but D;

be:

    $a = $b.clone.rebless(class {is $b.meta.class; does C; does D});

or:

    {
        my $_tmp = $b.clone.rebless(class {is $b.meta.class; does C;});
        $a = $_tmp.clone.rebless(class {is $_tmp.meta.class; does D;});
    }

? This is where the semantic difference arises, since the constructor
and/or destructor for $b.meta.class might well be something that I
expected to be called multiple times, and won't see.

All of that is fine, as far as I'm concerned, as long as we give the
user the proviso that chained buts might be optimized down into a single
cloning operation (or not) at the compiler's whim, but it could be a
nasty shock if it's not documented, and it's a rather ugly amount of
overhead if we don't allow for the optimization.

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback
Re: This week's summary = Perl 6 perlplexities
On Tue, 15 Nov 2005, The Perl 6 Summarizer wrote:
> Perl 6 perlplexities
>   Michele Dondi worries that the increase in complexity of some aspects
>   of Perl 6 is much bigger than the increase in functionality that the
>   complexity buys us. In particular Michele is concerned that the
>   Perl 6 parameter passing and signature stuff is going to be a big
>   loss. People mostly disagreed with him. Rob Kinyon made a remark
>   that chimed strongly

To be sure, I never intended to claim that the signature stuff is going
to be a big loss, and I hope that I didn't. First of all, I chose it
solely as an example. The sense that I was trying to convey is that 90%
of what has already been stuffed into it will already be the best thing
since sliced bread, and that trying to fit in the remaining 10% of all
fancy types of parameter passing may not really make it better, hence
resulting in a _possible_ loss.

Michele
-- 
premature optimization is the root of all evil
  - Tad McClellan in clpmisc, Re: Whats the variable holding the dir
    seperator?
Re: Chained buts optimizations?
On Tue, Nov 15, 2005 at 02:11:03PM -0500, Aaron Sherman wrote:
: All of that is fine, as far as I'm concerned, as long as we give the
: user the proviso that chained buts might be optimized down into a single
: cloning operation or not at the compiler's whim, but it could be a nasty
: shock if it's not documented, and it's a rather ugly amount of overhead
: if we don't allow for the optimization.

The situation will probably not arise frequently if we just give people
the opportunity to write

    my $a = $b but C | D | E | F;

instead, or whatever our type set notation turns out to be.

Larry
Re: Chained buts optimizations?
On Tue, Nov 15, 2005 at 11:23:49AM -0800, Larry Wall wrote:
: On Tue, Nov 15, 2005 at 02:11:03PM -0500, Aaron Sherman wrote:
: : All of that is fine, as far as I'm concerned, as long as we give the
: : user the proviso that chained buts might be optimized down into a single
: : cloning operation or not at the compiler's whim, but it could be a nasty
: : shock if it's not documented, and it's a rather ugly amount of overhead
: : if we don't allow for the optimization.
:
: The situation will probably not arise frequently if we just give people
: the opportunity to write
:
:     my $a = $b but C | D | E | F;
:
: instead, or whatever our type set notation turns out to be.

If adding a but involves calling some code to initialize the but-iness
(I don't know if it does myself), that code might inspect or operate on
the item that is being modified in a way that would be changed if a
previous but had not yet been fully initialized. So, the initialization
code for each of the buts (if any) should be called in order. Reblessing
for each one would only matter if the subsequent but code used
introspection and varied its actions depending on the blessed state.

The choice between:

    my $a = $b but C | D | E | F;

and:

    my $a = $b but C but D but E but F;

might be used to control the short-cut initialization (which would have
to be an explicit definition rather than an optimization, since it could
have different meaning).

--
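John's ordering concern can be made concrete with a small Python sketch
(not Perl 6, and the per-role `init` hook is an illustrative invention,
not anything Perl 6 specifies): each but's initializer may inspect the
partially-built object, so running them in a different order gives a
different result.

```python
# Hedged sketch: role names and the init hook are illustrative only.

class Logged:
    @staticmethod
    def init(obj):
        obj.setdefault("tags", []).append("Logged")

class Timestamped:
    @staticmethod
    def init(obj):
        # This initializer inspects earlier but-iness: it behaves
        # differently depending on whether Logged already ran.
        tags = obj.setdefault("tags", [])
        tags.append("Timestamped(after Logged)" if "Logged" in tags
                    else "Timestamped")

def apply_buts(obj, *roles):
    for role in roles:          # initialization code called in order
        role.init(obj)
    return obj

a = apply_buts({}, Logged, Timestamped)
assert a["tags"] == ["Logged", "Timestamped(after Logged)"]
b = apply_buts({}, Timestamped, Logged)
assert b["tags"] == ["Timestamped", "Logged"]
```

If the compiler collapsed the chain into one batch step, any initializer
that depends on seeing the earlier roles' effects would observe a
different intermediate state, which is exactly why John suggests the two
spellings could carry different meanings.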
Re: context matters
On Tue, Nov 15, 2005 at 12:32:38PM -0600, Patrick R. Michaud wrote:
: On Tue, Nov 15, 2005 at 10:26:05AM -0800, jerry gay wrote:
: Thus, while PGE::Match currently defines a __get_pmc_keyed_int
: method, it doesn't yet define a __get_string_keyed_int method.
: So, a statement like
:
:     .local string res
:     .local pmc match
:     res = match[0]
:
: is defaulting to using the inherited op from the Hash class, and
: since there's not an entry at the 0 key in the hash (as opposed to
: the array) you get the null PMC.
:
: it seems to me it could inherit from Array as well, but it may not be
: a precise fit.
:
: Worse, I think the two might interact in strange and undesirous
: ways.

Inheritance is wrong here anyway. We need some kind of basic Tree node
object that *does* Hash, Array, and Item, but isn't any of them. Think
about how you'd want to represent XML, for instance:

    ~$obj       name of tag, probably
    +$obj       number of elements?
    +$obj[]     number of elements?
    +$obj{}     number of attributes?
    $obj[]      ordered child elements
    $obj{}      unordered attributes

But the scalar values don't match up with how Match objects work, so it
would likely have to be:

    ~$obj       representation of entire <tag>...</tag>
    +$obj       +~$obj (0 with warning?)

Another approach would be to say that we make Hash smart enough to
behave like an array or a scalar in context, and then we write

    ~%obj       name of tag, probably
    +%obj       number of attributes?
    +%obj[]     number of elements?
    %obj[]      elements
    %obj{}      attributes

But then hashes would have to store scalars and arrays as hidden keys,
and we still have an inconsistent scalar interface. Plus it smacks of
pseudo-hashery.

Yet another approach is to reinvent typeglobish objects (but without
confusing them with symbol table entries). But we've stolen the * sigil
since then. And it might be more readable to simply be able to declare
highlanderish variables such that

    my Node $obj;
    my @obj ::= $obj[];
    my %obj ::= $obj{};

and otherwise we just stick with $ sigil and semantics.

Basically, match objects are ordinary objects that merely *contain*
other types, while providing Str, Int, Num, Array and Hash roles. Of
course, we could give syntactic relief in just the declaration, on the
order of

    my ?obj;    # the '?' is negotiable, of course

that implies the creation of a highlander variable. Outside the
declaration you'd only be able to use one of the real sigils.
Interestingly, though, that kind of implies that ^obj as an rvalue would
give the type of $obj in that scope.

One interesting question is, if you said

    my ?obj := %random_hash;

whether it would try to emulate the $ and @ views or merely fail, or
something in between, like returning null lists and undefined values.
Presumably &obj() would likely fail unless ?obj contained a code object
of some sort. It would make sense to allow tests for exists &obj and
such. And then maybe we'd be talking about the ?/ variable rather than
the $/ variable. And we'd get @/ and %/, FWIW.

Of course, none of this highlander stuff buys you anything as soon as
you go down a level in the tree (unless you realias the child nodes). To
my mind the main benefit of declaring something like ?obj rather than
$obj is that you are documenting the expected polymorphism, and only
secondarily that you're claiming all the local obj namespaces.

[Followups to p6l.]

Larry
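Larry's idea of a node that *contains* array-ish and hash-ish views
rather than inheriting from either can be loosely modelled in Python
(not Perl 6); the Node class and its fields below are illustrative
assumptions, dispatching on the key type the way the sigil-specific
accessors would:

```python
# Hedged sketch: a tree Node that does string, array and hash duty
# without inheriting from list or dict.

class Node:
    def __init__(self, name, children=None, attrs=None):
        self.name = name                       # ~$obj: the tag name
        self.children = list(children or [])   # $obj[]: ordered elements
        self.attrs = dict(attrs or {})         # $obj{}: unordered attributes

    def __getitem__(self, key):
        # Integers index children ($obj[]); strings index attrs ($obj{}).
        if isinstance(key, int):
            return self.children[key]
        return self.attrs[key]

    def __len__(self):           # +$obj: "number of elements?"
        return len(self.children)

    def __str__(self):           # ~$obj: stringification gives the name
        return self.name

tag = Node("p", children=[Node("b")], attrs={"class": "intro"})
assert str(tag) == "p"
assert len(tag) == 1
assert tag["class"] == "intro"
assert str(tag[0]) == "b"
```

The point of the sketch is that one object answers to all three
interfaces while remaining a plain object, which is roughly what "does
Hash, Array, and Item, but isn't any of them" asks for.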
Re: Chained buts optimizations?
On Tue, Nov 15, 2005 at 03:43:59PM -0500, John Macdonald wrote:
: On Tue, Nov 15, 2005 at 11:23:49AM -0800, Larry Wall wrote:
: : On Tue, Nov 15, 2005 at 02:11:03PM -0500, Aaron Sherman wrote:
: : : All of that is fine, as far as I'm concerned, as long as we give the
: : : user the proviso that chained buts might be optimized down into a single
: : : cloning operation or not at the compiler's whim, but it could be a nasty
: : : shock if it's not documented, and it's a rather ugly amount of overhead
: : : if we don't allow for the optimization.
: :
: : The situation will probably not arise frequently if we just give people
: : the opportunity to write
: :
: :     my $a = $b but C | D | E | F;
: :
: : instead, or whatever our type set notation turns out to be.
:
: If adding a but involves calling some code to initialize the
: but-iness (I don't know if it does myself), that code might
: inspect or operate on the item that is being modified in a way
: that would be changed if a previous but had not yet been fully
: initialized. So, the initialization code for each of the buts
: (if any) should be called in order. Reblessing for each one
: would only matter if the subsequent but code used introspection
: and varied its actions depending on the blessed state.
:
: The choice between:
:
:     my $a = $b but C | D | E | F;
:
: and:
:
:     my $a = $b but C but D but E but F;
:
: might be used to control the short-cut initialization (which
: would have to be an explicit definition rather than an
: optimization since it could have different meaning).

Roles are *supposed* to be well behaved. If there's a difference between
those two notations, it would be that I'd expect the first to do the
normal compile-time collision checking for all the new roles at once,
while the nested form should do run-time mixins at each step that
potentially hide any previous methods of the same name.
You could have intermediate forms like:

    my $a = $b but C | D but E | F;

where any E or F methods hide any C or D methods, but it detects
collisions between E and F or between C and D.

Larry
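The two behaviours Larry distinguishes, collision checking for a flat
role set versus silent hiding for nested mixins, can be sketched in
Python (not Perl 6); the `compose` and `mixin` helpers below are
illustrative assumptions, not any real composition API:

```python
# Hedged sketch: flat composition errors on duplicate method names,
# nested mixin lets the later role's method hide the earlier one.

class RoleError(Exception):
    pass

def compose(*roles):
    """Flat composition (C | E): a duplicated method name is an error."""
    methods = {}
    for role in roles:
        for name, fn in vars(role).items():
            if name.startswith("_"):
                continue
            if name in methods:
                raise RoleError("method %r collides" % name)
            methods[name] = fn
    return methods

def mixin(base, role):
    """Nested mixin (but C but E): later methods silently hide earlier ones."""
    merged = dict(base)
    merged.update({n: f for n, f in vars(role).items()
                   if not n.startswith("_")})
    return merged

class C:
    def greet(self): return "C"

class E:
    def greet(self): return "E"

try:
    compose(C, E)            # C | E: collision detected at compose time
    collided = False
except RoleError:
    collided = True
assert collided

hidden = mixin(compose(C), E)   # but C but E: E.greet hides C.greet
assert hidden["greet"](None) == "E"
```

Larry's mixed form `but C | D but E | F` would then be `mixin(compose(C,
D), ...)` applied to `compose(E, F)`: collisions are caught within each
group, while the later group wins between groups.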
Re: This week's summary
On Nov 15, 2005, at 17:24, The Perl 6 Summarizer wrote:
> The Perl 6 Summary for the fortnight ending 2005-11-13
>
> string_bitwise_*
>   Leo, it seems to boil down to a choice between throwing an exception
>   or simply mashing everything together and marking the 'resulting bit
>   mess' as binary. Warnock applies.

I've today cleaned up the string_bitwise code a bit. These rules apply
now:

- usage of non-fixed_8 encoded strings in binary string ops throws an
  exception
- else the result string has charset binary, fixed_8 encoded

Thanks again for your concise summaries,
leo
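Leo's two rules can be paraphrased in Python (this is a loose model, not
Parrot source: `bytes` stands in for fixed_8/binary strings and `str`
for any other encoding, and `bitwise_and` is an illustrative name):

```python
# Hedged sketch of the two rules for binary string ops.

def bitwise_and(a, b):
    # Rule 1: non-fixed_8 (here: non-bytes) operands throw an exception.
    if not (isinstance(a, bytes) and isinstance(b, bytes)):
        raise TypeError("binary string ops require fixed_8 encoded strings")
    # Rule 2: otherwise the result is itself binary, fixed_8 encoded.
    return bytes(x & y for x, y in zip(a, b))

assert bitwise_and(b"\xff\x0f", b"\x0f\xff") == b"\x0f\x0f"

try:
    bitwise_and("abc", b"abc")
    raised = False
except TypeError:
    raised = True
assert raised
```

The exception path replaces the earlier "mash everything together and
mark the resulting bit mess as binary" option for mixed-encoding
operands.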
Re: Error Laziness?
On 11/16/05, Luke Palmer [EMAIL PROTECTED] wrote:

Here is some perplexing behavior:

    say "Foo";
    hello there;
    sub hello () { say "Bar"; }
    sub there () { say "Baz"; }

This prints:

    Foo
    *** No compatible subroutine found: hello at lazy.p6 line 2, column 1-12

I would expect it to print:

    Foo
    Baz
    *** No compatible subroutine found: hello at lazy.p6 line 2, column 1-12

Okay, this makes sense. Apparently Pugs supports is lazy (cool!). So you
don't know when to evaluate your arguments until after you've selected
the call.

There are two reasons I've posted to perl6-language this time. First of
all, is this acceptable behavior? Is it okay to die before the arguments
to an undefined sub are evaluated?

Second, consider this is lazy code:

    sub foo ($bar is lazy) {
        my $bref = \$bar;
        do_something($bref);
    }
    foo(42);

This will evaluate $bar, even if it is not used in do_something. In
fact, this will evaluate $bar even if the do_something call is omitted
altogether. This doesn't give you much control over the time of
evaluation, and presumably if you're saying is lazy, control is
precisely what you want.

I think we need more control. I think is lazy parameters should pass a
thunk that needs to be call()ed:

    sub foo ($bar is lazy) {
        say $bar;    # says something like Thunk(...)
        say $bar();  # evaluates parameter and prints it
        say $bar;    # still says something like Thunk(...)
        say $bar();  # doesn't evaluate again, just fetches
    }

Whaddaya think?

Luke
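Luke's proposal, a lazy parameter arriving as an explicit thunk that
evaluates once when called and merely fetches afterwards, can be
sketched in Python (not Perl 6; the `Thunk` class is an illustrative
model of the proposed semantics, not any existing API):

```python
# Hedged sketch: a memoizing thunk standing in for an "is lazy" argument.

class Thunk:
    _UNSET = object()

    def __init__(self, compute):
        self._compute = compute
        self._value = Thunk._UNSET

    def __call__(self):
        # First call evaluates the argument; later calls just fetch.
        if self._value is Thunk._UNSET:
            self._value = self._compute()
        return self._value

evaluations = []

def foo(bar):
    assert isinstance(bar, Thunk)   # passing it around doesn't evaluate
    assert evaluations == []        # nothing has run yet
    v1 = bar()                      # first call evaluates the parameter
    v2 = bar()                      # second call fetches, no re-evaluation
    return v1, v2

result = foo(Thunk(lambda: evaluations.append("ran") or 42))
assert result == (42, 42)
assert evaluations == ["ran"]
```

This gives the caller exactly the control Luke wants: the argument is
never evaluated until `bar()` is written, and never more than once.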