At 07:13 PM 11/8/2001 -0500, Ken Fox wrote:
>Dan Sugalski wrote:
> > [native code regexps] There's a hugely good case for JITting.
>
>Yes, for JITing the regexp engine. That looks like a much easier
>problem to solve than JITing all of Parrot.

The problem there's no different than it is for the rest of the opcodes. Granted, 
doing a real JIT of the integer math routines is a lot easier than doing it 
for, say, the map opcode, because they generally decompose to a handful of 
machine instructions that are easy to generate with templates.
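
To give a rough idea of what I mean by templates, here's a minimal sketch 
(made-up opcode numbers and placeholder bytes standing in for real machine 
instructions, nothing out of the actual Parrot tree). Each simple op gets a 
canned chunk of native code that the JIT just copies into a buffer:

    /* Template-JIT sketch: every simple opcode maps to a canned chunk
     * of native code that gets memcpy'd into the output buffer.  The
     * byte sequences below are placeholders (NOPs), not real
     * encodings; operand patching and marking the buffer executable
     * are left out entirely. */
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        const unsigned char *code;   /* canned native code for the op */
        size_t               len;    /* how many bytes to copy        */
    } op_template;

    static const unsigned char add_i_tmpl[] = { 0x90, 0x90, 0x90 };
    static const unsigned char sub_i_tmpl[] = { 0x90, 0x90, 0x90 };

    static const op_template templates[] = {
        /* op 0 */ { add_i_tmpl, sizeof add_i_tmpl },
        /* op 1 */ { sub_i_tmpl, sizeof sub_i_tmpl },
    };

    /* Walk the bytecode and concatenate the template for each op. */
    size_t jit_compile(const int *bytecode, size_t n_ops,
                       unsigned char *buf)
    {
        size_t i, off = 0;

        for (i = 0; i < n_ops; i++) {
            const op_template *t = &templates[bytecode[i]];
            memcpy(buf + off, t->code, t->len);
            off += t->len;
        }
        return off;   /* total bytes of native code emitted */
    }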

> > If you think about it, the interpreter loop is essentially:
> >
> >    while (code) {
> >      code = (func_table[*code])();
> >    }
>
>That's *an* interpreter loop. ;)

Yes, I know.

Some day I'm going to have to write 
HOW_TO_RECOGNIZE_CODE_WHICH_IS_SIMPLIFIED_FOR_PURPOSES_OF_EXAMPLE.pod... :)
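
For the record, a less-simplified function-table dispatcher looks more like 
the sketch below. The names and signatures are illustrative, not the real 
Parrot API: each op gets the program counter and the interpreter handle, 
does its work, and returns the address of the next op.

    /* Function-table dispatch loop, slightly less simplified. */
    #include <stddef.h>

    typedef struct interp interp;    /* opaque interpreter state */
    typedef int *(*op_func)(int *pc, interp *i);

    static int *op_noop(int *pc, interp *i)
    {
        (void)i;
        return pc + 1;               /* one opcode word, no operands */
    }

    static int *op_end(int *pc, interp *i)
    {
        (void)pc; (void)i;
        return NULL;                 /* NULL drops us out of the loop */
    }

    static const op_func func_table[] = { op_noop, op_end };

    void run_ops(int *pc, interp *i)
    {
        while (pc)
            pc = (func_table[*pc])(pc, i);
    }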

> > you pay a lot of overhead there in looping (yes, even with the computed
> > goto, that computation's loop overhead), plus all the pipeline misses
> > from the indirect branching. (And the contention for D cache with your
> > real data)
>
>The gcc goto-label dispatcher cuts the overhead to a single indirect
>jump through a register.

Based on data in the D stream. Say "Hi" to Mr. Pipeline Flush there.
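
For anyone who hasn't seen it, the goto-label dispatcher Ken's describing 
is gcc's computed-goto extension, which looks roughly like this sketch 
(gcc-specific, names made up). The dispatch table holds label addresses, 
and each op ends with an indirect jump through it; the jump target comes 
out of the data stream, which is where the pipeline flush bites.

    /* Computed-goto dispatcher sketch (gcc labels-as-values). */
    void run_ops(const int *pc)
    {
        static void *dispatch[] = { &&op_noop, &&op_end };

        goto *dispatch[*pc];         /* prime the first dispatch */

    op_noop:
        pc++;                        /* do the op's work, advance */
        goto *dispatch[*pc];         /* indirect jump to the next op */

    op_end:
        return;
    }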

>If Perl code *requires* that everything goes through vtable PMC ops,
>then the cost of the vtable fetch, the method offset, the address
>fetch and the function call will completely dominate the dispatcher.

Oh, absolutely. Even with perl 5, the opcode overhead's not a hugely 
significant part of the cost of execution. Snipping out 3-5% isn't shabby, 
though, and we *will* need all the speed we can muster with regexes being 
part of the generic opcode stream.
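
To spell out that chain of fetches, here's a sketch of what a 
vtable-dispatched add does before any actual addition happens. The struct 
and function names are invented for illustration, not the real PMC layout:

    /* Vtable-dispatch sketch: every operand is a PMC whose behavior
     * lives behind a vtable, so one add is several dependent pointer
     * loads plus an indirect call before any real work gets done. */
    typedef struct pmc pmc;

    typedef struct {
        void (*add)(pmc *dest, pmc *left, pmc *right);
        /* ...dozens more entries in a real vtable... */
    } pmc_vtable;

    struct pmc {
        const pmc_vtable *vtable;    /* per-type behavior     */
        void             *data;      /* type-specific payload */
    };

    void do_add(pmc *dest, pmc *left, pmc *right)
    {
        /* vtable fetch, slot fetch, then the indirect call: that's
         * the cost that swamps the dispatch loop's few cycles. */
        left->vtable->add(dest, left, right);
    }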

> > Dynamic languages have potential overheads that can't be generally
> > factored out the way you can with static languages--nature of the beast,
> > and one of the things that makes dynamic languages nice.
>
>I know just enough to be dangerous. Lots of people have done work in
>this area -- and on languages like Self and Smalltalk which are as
>hard to optimize as Perl. Are we aiming high enough with our
>performance goals?

Our aim is higher than perl 5. No, it's not where a lot of folks want to 
aim, but we can get there using well-known techniques and proven theories. 
Perl is, above all, supposed to be practical. The core interpreter's not a 
place to get too experimental or esoteric. (That's what the cores written 
to prove I'm a chuckle-headed moron with no imagination are for :)

>I'll be happy with a clean re-design of Perl. I'd be *happier* with
>an implementation that only charges me for cool features when I use
>them.

There's a minimum charge you're going to have to pay for the privilege of 
dynamicity, or of running a language not built by an organization with 20 
full-time engineers dedicated to it. Yes, with sufficient cleverness we can 
identify those programs that are, for optimization purposes, FORTRAN in 
perl, but that sort of cleverness is in short supply, and given a choice 
I'd rather it go toward making method dispatch faster, or making the parser 
interface cleaner.

>(Oops. That's what Bjarne said and look where that took him... ;)

Yeah, that way lies madness. Or C++. (But I repeat myself)

                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
