Re: [PATCH] Fixes logical ops in Parrot Scheme compiler
Gregor N. Purdy sent the following bits through the ether:

> I'd like to see the folks with other language implementations speak up
> again about their current status and desires to have their stuff in CVS

My JVM - Parrot stuff is going slowly, but parts of a Better Solution are
going up on CPAN. My previous attempts needed java to be installed locally,
but I'm now implementing a Perl Java Classfile parser:

  http://search.cpan.org/search?dist=Java-JVM-Classfile

It's only basic and development will be pretty slow, but I think I'll be
targeting Perl first so I can play with translation ideas and not have to
worry about static type inference... Errr, so not yet. But I'll be updating
the parrotcode.org examples rsn, honest...

Leon
--
Leon Brocard ............................. http://www.astray.com/
Nanoware ................................. http://www.nanoware.org/

... Gravity is a myth - the earth sucks
Re: Calling conventions -- easier way?
At 11:00 PM 10/19/2001 -0400, Bryan C. Warnock wrote:
>On Friday 19 October 2001 01:46 pm, Dan Sugalski wrote:
>> I'm currently leaning either towards returning values in registers
>> [PSIN][0-4] with the total number of each type in register I0 somehow
>
>Order determination of the return values.

That's an issue if we don't have declared return value lists. If we do, we
can do it implicitly--the first five string returns are in S0-S4, the first
four integers in I1-I4, the first five PMCs in P0-P4, and the first five
floats in N0-N4.

Without return value lists, I think we're going to have to either push 'em
on a stack, or return a single list PMC with the returns. The only problem
with a list is that you can only put PMCs in 'em, which won't work too well
for those cases when you don't need a full PMC.

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: Why is make test so slow?
At 07:12 PM 10/20/2001 -0400, Sam Tregar wrote:
>On 20 Oct 2001, Gregor N. Purdy wrote:
>> I want to libify everything to the point where Perl wrappers around the
>> libs allow you to pass the .pasm stuff as a string and get back a
>> packfile that you can pass on to the interpreter, without firing off
>> separate processes and writing files.
>
>Sounds like a good idea. It'll be necessary to have something like this
>to support string eval() anyway, right?

Something like that, yep. (Though string eval won't fire off an assembler)

What'd be really nice, though probably a bad idea for the test suite, was
if the test program could run multiple tests sequentially without having to
exit after each one. Like:

    create interpreter
    load bytecode for interpreter
    execute bytecode in interpreter
    destroy interpreter

over and over. Certainly be a good test of the GC system. :)

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: Revamping the build system
At 04:13 PM 10/20/2001 -0700, Robert wrote:
>On Thu, 2001-10-11 at 12:24, Dan Sugalski wrote:
>> No, we don't have to do it in C. We can do it in perl, we just can't
>> require perl for the initial build. The steps would be:
>>
>>   1) Build minimal perl 6 with default parameters using platform build tool
>>   2) Run configure with minimal perl 6
>>   3) Build full perl 6 with perl build tool and minimal perl
>>   4) Build full perl 6 distrib with full perl
>
>Did you mean to say perl6 here or parrot? If you meant perl6, then this
>system cannot be implemented for quite a while. (Note lack of actual
>language to write in..)

Perl 6. That is, after all, the language kit we're ultimately tasked with
shipping. :) By the time we're in alpha we'll have a working perl compiler,
so we're OK.

Basically we ship a Makefile and bare-bones config.h, enough to build
miniparrot. Miniparrot then reconfigures itself and builds full parrot,
which then goes and builds the world.

>Yes - this makes sense - but how does this affect what we want to do now?
>We don't want to write our buildsystem in parrot bytecode. For now, we can
>use perl 5. What are your ideas about requirements for the perl build
>tool, ignoring the basic stuff that make does (dependencies, etc.)

There's nothing really past what make does. The reason for having our own
is:

*) Make isn't everywhere (like windows)
*) Make on various platforms has different syntax (VMS, Windows, and Unix
   are all different)
*) Not speaking for anyone else, but I find make's gotten rather creaky
   around the edges--after 20+ years presumably we can make things a bit
   better
*) Having the full power of perl in the build tool should give us a big
   boost in utility. (Consider the difference between C macros and Perl
   source filters)
*) It'll be really unfamiliar to everyone, which'll stop folks from
   falling into old, non-portable habits.

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: [PATCH] Fixes logical ops in Parrot Scheme compiler
At 07:10 PM 10/20/2001 -0400, Sam Tregar wrote:
>PS: Can we get this into languages/scheme?

I'm OK with that.

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Languages in the core source tree?
Okay, we've now got minimal:

*) Parrot assembly
*) Perl
*) Python
*) JVM
*) Scheme
*) Jako
*) Ruby? (Do we? I can't remember for sure)

support for Parrot. This is a cool thing, but it brings up the questions:

1) Do we put them all in the parrot CVS tree?
2) Do we require them to meet the same levels of quality as the core
   interpreter?

For the first, I don't mind (I think it's kinda cool, actually) but I worry
about the possibility that things will get out of sync or fall unsupported.
I do *not* want us to feel that we can't ship, say, Parrot 0.09 because the
Scheme compiler's not working. (For reasons unrelated to parrot, at
least... :)

For the second, I really do feel that if it's in the source tree it should
be subject to the same requirements on patches and submissions that the
rest of Parrot is. (And we're working on getting together a set of
guidelines for that) That means no non-working code, good tests, and
sufficient documentation on things.

I'd be happy if the parrot kit could either ship with compiler modules and
runtime support for lots of non-perl languages or, at the very least, the
non-perl languages could each have a single install kit that could be
layered on top of parrot. (Of course, there's the potential for "Well, to
install that module requires installing the Ruby, Scheme, and INTERCAL
runtimes" coming up, but I'm OK with that...)

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: My first parrot install - make test
On Sat, Oct 20, 2001 at 10:12:55PM +0200, Mattia Barbon wrote:
>> Any volunteers to hack in distclean?
> What does it exactly do? Delete everything not in MANIFEST?

Yeah, but I did it as part of my PMC fiddling over this past weekend.

--
"By God I *KNOW* what this network is for, and you can't have it."
  - Russ Allbery, http://www.slacker.com/rant.html
Re: PMCs and how the opcode functions will work
On Mon, Oct 08, 2001 at 06:36:32PM -0400, Dan Sugalski wrote:
> num_type: 0, 1, 2, 3, 4, 5 for "same as you", native int, bigint,
> native float, bigfloat, object
>
>    P1->vtable_funcs[VTABLE_ADD + P2->num_type](P1, P2, P0);

I don't understand the "same as you" thing; num_type isn't a function, but
a part of the structure. How does P2->num_type then know that it's the same
or different from anything else?

--
The course of true anything never does run smooth.
		-- Samuel Butler
Re: PMCs and how the opcode functions will work
At 02:59 PM 10/20/2001 +0100, Simon Cozens wrote:
>On Mon, Oct 08, 2001 at 06:36:32PM -0400, Dan Sugalski wrote:
>> num_type: 0, 1, 2, 3, 4, 5 for "same as you", native int, bigint,
>> native float, bigfloat, object
>>
>>    P1->vtable_funcs[VTABLE_ADD + P2->num_type](P1, P2, P0);
>
>I don't understand the "same as you" thing; num_type isn't a function, but
>a part of the structure. How does P2->num_type then know that it's the
>same or different from anything else?

It doesn't, and in the general case--dispatch via opcodes--it won't matter
or be used. (Well, unless we check, but I don't know that the cost of the
check is worth it) It's in there as a short-cut optimization for use when
the interpreter *knows* that the second PMC is the same class as the first,
for example if it's working with temps, or there's complex funkiness going
on inside and there's sufficient information to determine that the two PMCs
are the same.

If it turns out to be not used, we'll yank it.

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: use parrot;
At 08:11 PM 10/20/2001 +0200, raptor wrote:
>hi,
>will it be possible to do this inside Perl program :
>
>   use parrot;
>   ...parrot code...
>   no parrot;
>
>OR
>
>   sub mysub is parrot {
>      parrot code ...
>   }

I suppose. I hadn't planned on inlining parrot assembly into any other
language. (The first person who suggests an asm() function *will* get
smacked... :) You'll certainly be able to use modules written purely in
parrot assembly.

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: PMCs and how the opcode functions will work
On Wed, Oct 10, 2001 at 11:27:24AM +0200, Paolo Molaro wrote:
> ... and to go a step further in sanity and maintainability, I'd suggest
> using a structure with properly typed function pointers instead of an
> array:
>
>     typedef void (*parrot_pmc_add) (PMC *dest, PMC *a, PMC *b);
>     typedef void (*parrot_pmc_dispose) (PMC *cookie);
>     ...

I've now changed the vtable structure to reflect this, but I'd like someone
to confirm that the variant forms of the ops can be addressed the way I
think they can. (ie. structure->base_element + 1 to get the thing after
base_element)

Simon
--
Old Japanese proverb: There are two kinds of fools -- those who never climb
Mt. Fuji, and those who climb it twice.
Re: use parrot;
On Sun, Oct 21, 2001 at 12:20:29PM -0400, Dan Sugalski wrote:
> I suppose. I hadn't planned on inlining parrot assembly into any other
> language. (The first person who suggests an asm() function *will* get
> smacked... :) You'll certainly be able to use modules written purely in
> parrot assembly.

1. B::Parrot
2. Parrot.xs
3. Providing opcodes for libperl functions and linking it in.

I haven't suggested asm(), so technically I'm safe. Right? :)

--
Rocco Caputo / [EMAIL PROTECTED] / poe.perl.org / poe.sourceforge.net
Re: use parrot;
At 03:41 PM 10/21/2001 -0400, Rocco Caputo wrote:
>On Sun, Oct 21, 2001 at 12:20:29PM -0400, Dan Sugalski wrote:
>> I suppose. I hadn't planned on inlining parrot assembly into any other
>> language. (The first person who suggests an asm() function *will* get
>> smacked... :) You'll certainly be able to use modules written purely in
>> parrot assembly.
>
>1. B::Parrot
>2. Parrot.xs
>3. Providing opcodes for libperl functions and linking it in.

Heck, something like:

    use SomeParrotModule;

would be fine. If there's a SomeParrotModule.pbc, we'll use it.

>I haven't suggested asm(), so technically I'm safe. Right? :)

Yep.

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: Have I given the big The Way Strings Should Work talk?
In message [EMAIL PROTECTED] Dan Sugalski [EMAIL PROTECTED] wrote:

> I've given it a few places, but I don't know that I've sent it to
> perl6-internals. If not, or if I should do it again, let me know. I want
> to make sure we're all on the same page here.

Not that I recall. I thought that was what strings.pod was...

Tom
--
Tom Hughes ([EMAIL PROTECTED])
http://www.compton.nu/
Re: PMCs and how the opcode functions will work
In message [EMAIL PROTECTED] Simon Cozens [EMAIL PROTECTED] wrote:

> I've now changed the vtable structure to reflect this, but I'd like
> someone to confirm that the variant forms of the ops can be addressed the
> way I think they can. (ie. structure->base_element + 1 to get the thing
> after base_element)

Legally speaking they can't, as ISO C says that you can't do pointer
calculations and comparisons across object boundaries, and separate members
of a structure are different objects. If you replace this:

    set_integer_method_t set_integer_1;
    set_integer_method_t set_integer_2;
    set_integer_method_t set_integer_3;
    set_integer_method_t set_integer_4;
    set_integer_method_t set_integer_5;

with this:

    set_integer_method_t set_integer[5];

then you would be able to, as an array is all one object.

Practically speaking I think it will work on every system that I can think
of at the moment, but who knows what weird things are out there...

Tom
--
Tom Hughes ([EMAIL PROTECTED])
http://www.compton.nu/
Re: [PATCH] Bugfix for push_generic_entry
In message [EMAIL PROTECTED] Jason Gloudon [EMAIL PROTECTED] wrote:

> The stacktest patch will fail on the current CVS source, due to a bug in
> push_generic_entry.

This looks good to me so I have committed it. Thanks for spotting it!

Tom
--
Tom Hughes ([EMAIL PROTECTED])
http://www.compton.nu/
Re: PMCs and how the opcode functions will work
On Sun, Oct 21, 2001 at 07:56:08PM +0100, Simon Cozens wrote:
>On Wed, Oct 10, 2001 at 11:27:24AM +0200, Paolo Molaro wrote:
>> ... and to go a step further in sanity and maintainability, I'd suggest
>> using a structure with properly typed function pointers instead of an
>> array:
>>
>>     typedef void (*parrot_pmc_add) (PMC *dest, PMC *a, PMC *b);
>>     typedef void (*parrot_pmc_dispose) (PMC *cookie);
>>     ...
>
>I've now changed the vtable structure to reflect this, but I'd like
>someone to confirm that the variant forms of the ops can be addressed the
>way I think they can. (ie. structure->base_element + 1 to get the thing
>after base_element)

If you mean vtable->add_1 + 1 gives you vtable->add_2, no. Pointer
arithmetic on a function pointer is not meaningful. You probably want to
use an array of function pointers for each method, like add[7], and
initialize each entry with the appropriate function pointer.

--
Jason
Work on PMCs
OK, I did a little (stress *little*) work on PMCs this weekend. Let me just
explain how I see PMCs as working, and then I'll explain what I've done.

PMCs are essentially objects on which methods are called. These objects
will usually come from pre-defined classes: Parrot will ship with a bunch
of default base classes which provide sensible (currently Perl-like, but
that's open to change if anyone else codes some non-Perl-like) behaviour.
There'll be an integer class, a scalar class, and so on. Of course, people
are more than welcome to swap in their own methods at any point - at the
moment, PMCs get a pointer to a vtable, but it might come to the point
where they'll need a private copy so they can modify it.

You call methods on PMCs either like this:

    (pmc1->vtable->is_same)(pmc1, pmc2)

or like this:

    (pmc->vtable->set_integer[2])(pmc, intval)

PMCs are created either by calling a bootstrapping pmc_new function, or by
calling a PMC's new method to get a new PMC with the same vtable. In the
former case, you pass in an index into Parrot's array of base classes.

So, what have I done so far? Firstly, I've moved the PMC structure out of
parrot.h and into pmc.h, and added an enum for the base classes. I've
decided that we ought to put classes in a subdirectory, so I've created
classes/, and have added a program genclass.pl which takes the name of a
class and spits out a C skeleton to help you implement it.

I've also added an (undocumented, yes, I'm a bad boy) utility function to
Parrot::Vtable, which enumerates all the vtable methods with their types
*including* multimethods, and I've updated its idea of the vtable structure
to include PACKAGE and that sort of thing. I've also started on pmc_new.

Implementing vtable methods should be relatively straightforward now.
Any questions? :)

--
He was a modest, good-humored boy. It was Oxford that made him insufferable.
Interesting experiment with lexical scoped VM
A while back I wondered if a higher-level VM might be useful for
implementing higher-level languages. I proposed a lexically scoped machine
without registers or stacks. The response wasn't encouraging.

A quick tour through the library turned up a few machine designs that
sounded very similar to what I was thinking. There's even a name for them:
storage-to-storage architectures. The one I found the most literature on
was CRISP. AT&T based the Hobbit chips on the CRISP design.

CRISP is a RISC architecture that does not have named registers. It maps
the top of the stack to a large register file and provides a few addressing
modes to access the stack. This hugely simplifies a compiler because it
doesn't have to worry about register conventions -- especially it doesn't
have to worry about saving/restoring register sets in function calls. The
effect is sort of like custom-fit register windows.

One thing that makes these particularly well suited for a software VM is
that they never copy data. A software register architecture needs to be
*very* smart about register usage, otherwise all it's doing is loading two
copies of everything into L2.

I became interested enough that I implemented a throw-away VM based on the
concept. It's a Perl 5 module that implements a lexically scoped,
storage-to-storage VM. Parts of the system ended up being more of a
framework -- there's a parser, assembler, byte-code generator and VM all
integrated together. It's very easy to hack on.

I'm calling the module Kakapo. The Kakapo is the heaviest parrot in the
world. It's critically endangered because the *flightless* bird uses a
stand-and-freeze defense to evade predators. As you can see I have high
hopes for this code. ;)

I realize I'm too slow to have any direct impact on Parrot, but maybe some
of the ideas will find a home.
Here are a couple factorial examples in Kakapo:

        ; code a stupid compiler might generate
        .begin
fact1:  arg     L0, 0
        cmp     L1, L0, 1
        brne    L1, else
        ret     1
else:   sub     L2, L0, 1
        jsr     L3, fact1, L2
        mul     L4, L0, L3
        ret     L4
        .end

        ; code a smart compiler might generate
        .begin
fact2:  arg     L0, 0
        sub     L1, L0, 1
        bre     L1, done
        jsr     L1, fact2, L1
        mul     L0, L0, L1
done:   ret     L0
        .end

The .begin and .end commands are assembly directives that introduce a
lexical scope. Every time a scope is entered (via a subroutine call or
branch) the VM compares the current lexical scope state to the one the
instruction requires. The VM will automatically sync up. This works almost
exactly like make-your-own register windows. Here's an example:

        .begin
        set     L0, 1
        .begin
        set     L0, L0@1
        .end
        .end

The inner scope has its own register set -- it only uses L0 so the set only
has one register in it. The L0@1 syntax asks for the L0 register in the
parent scope. (This isn't the calling scope like the way register windows
work on the Sparc -- this is lexical scoping. The arg op is used to
reference the caller's scope. It treats the argument list of the jsr op as
an export statement. The arg op imports from that list.)

Using scopes allows nested subroutines to be written very easily:

        .begin
tens:   arg     L0, 0
        set     L1, 10
        set     L2, 0
        jmp     test
next:   jsr     L2, add, L2
        sub     L0, L0, 1
test:   cmp     L3, L0, 0
        brg     L3, next
        ret     L2
        .begin
add:    arg     L0, 0
        add     L0, L0, L1@1
        ret     L0
        .end
        .end

Here tens is the address of a subroutine that takes one input argument and
returns 10x the value. It calls the add subroutine in a loop. add takes one
argument and adds L1@1 -- the L1 register in the parent lexical scope.

If you could write nested subroutines in Perl, the code would look about
like this:

        sub tens {
            my $n = $_[0];
            my $i = 10;
            my $r = 0;
            sub add { return $_[0] + $i }
            while ($n > 0) { $r = add($r); $n-- }
            return $r;
        }

Another cool thing about scopes is that they can be marked static. (This is
how global and module global scopes work.)
When a scope is static -- even if it's in the middle of a bunch of dynamic
scopes -- the VM will re-use a single instance of the scope. Here's a
simple example:

        .begin
        set     L0, 1
        .begin  static
        add     L0, L0, L0@1
        .end
        .end

The inner scope is static, so L0 is actually an accumulator that will keep
incrementing by the value of the parent's L0. This is pretty handy for
implementing class and function static variables.

Performance is slower than Parrot on some things, but faster on others. For
example, Kakapo is about 50% slower on tight loops like:

        ; Kakapo
loop1:  add     L0, L0, 1
        cmp     L1, L0, 1
        brl     L1, loop1

        # Parrot
REDO:   eq      I2, I4, DONE