At 09:11 PM 9/6/2001 +0200, Paolo Molaro wrote:
>On 09/06/01 Dan Sugalski wrote:
> > >The original mono interpreter (that didn't implement all the semantics
> > >required by IL code that slow down interpretation) ran about 4 times
> > >faster than perl/python on benchmarks dominated by branches, function
> > >calls, integer ops or fp ops.
> >
> > Right, but mono's not an interpreter, unless I'm misunderstanding. It's a
> > version of .NET, so it compiles its code before executing. And the IL it
> > compiles is darned close to x86 assembly, so the conversion's close to
> > trivial.
>
>Nope, if we had written a runtime, library, compiler and JIT engine in two
>months we'd be all on vacation now ;-)

Then I'm impressed. I expect you've done some things that I haven't yet. 
The implementation of the current interpreter is somewhat naive.

>no assembly whatsoever. And, no, IL is not close to x86 assembly:-)

I dunno about that. From reading the Microsoft docs on it, it doesn't look 
that far off. (OK, the x86 doesn't do objects, but the rest maps in pretty 
darned well)

Also, while I have numbers for Parrot, I do *not* have comparable numbers 
for Perl 5, since there isn't any equivalence there. By next week we'll 
have a basic interpreter you can build so we can see how it stacks up 
against Mono.

> > >Java is way faster than perl currently in many tasks:
> >
> > Only when JITed. In which case you're comparing apples to oranges. A
> > better comparison is against Java without JIT. (Yes, I know, Java *has*
> > a JIT, but for reasonable numbers at a technical level (and yes, I also
> > realize that generally speaking most folks don't care about that--they
> > just want to know which runs faster) you need to compare like things)
>
>It's not so much that java *has* a JIT, but that it *can* have it. My 
>point is,
>take it into consideration when designing parrot. There's no need to
>code it right from the start, that would be wrong, but allow for it in the 
>design.

Ah. That's a long-done deal. (You missed it by about a year... ;) TIL 
capabilities are on the list of things to remember, as is a straight-to-C 
translator for bytecode. Compilation to java bytecode, .NET, and tie-ins to 
native compilers (GCC, GEM, whatever) are as well.

That's all in the design. We're not implementing them to start, but the 
design allows for them.

> > >it will be difficult
> > >to beat it starting from a dynamic language like perl, we'll all pay
> > >the price to have a useful language like perl.
> >
> > Unfortunately (and you made reference to this in an older mail I haven't
> > answered yet) dynamic languages don't lend themselves to on-the-fly
> > compilation quite the way that static languages do. Heck, they don't tend
> > to lend themselves to compilation (certainly not optimization, and don't
> > get me started) period as much as static languages. That's OK, it just
> > means our technical challenges are similar to but not the same as for
> > Java/C/C++/C#/Whatever.
>
>Yep, but for many things there is an overlap. As for the dynamic language
>issue, I'd like the ActiveState people that worked on perl <-> .net
>integration to share their knowledge on the issues involved.

That one was easy. They embedded a perl interpreter into the .NET execution 
engine as foreign code and passed any perl to be executed straight to the 
perl interpreter.

> > >The speed of the above loop depends a lot on the actual implementation
> > >(the need to do a function call in the current parrot code would blow
> > >away any advantage gained skipping stack updates, for example).
> >
> > A stack interpreter would still have the function calls. If we were
> > compiling to machine code, we'd skip the function calls for both.
>
>Nope, take the hint: inline the code in a big switch and voila', no
>function call ;-)

Not everywhere. We've run tests, and we'll run more later, but... given the 
various ways of dispatch--big switch, computed goto, and function 
calls--the right way is platform dependent. Different architectures like 
different methods of calling, depending on the chip design and compiler. 
We're actually going to test for the right way at configure time and build 
the core interpreter loop accordingly.

Also, we can't do away with some of the function calls, since we've 
designed in the capability to have lexically scoped opcode functions, which 
means we have to dispatch to a function for all but the base opcodes.

> > numbers. (This is sounding familiar--at TPC Miguel tried to convince me
> > that .Net was the best back-end architecture to generate bytecode for)
>
>I know the ActiveState people did work on this area. Too bad their stuff
>is not accessible on Linux (some msi file format stuff).
>I don't know if .net is the best back-end arch, it's certainly going
>to be a common runtime to target since it's going to be fast and
>support GC, reflection etc. With time and input from the dynamic language
>people it may become a compelling platform to run perl/python on.

Doubt it. That'd require Microsoft's involvement, and I don't see that 
happening. Heck, most of what makes dynamic languages really useful is 
completely counter to what .NET (and java, for that matter) wants to do. 
Including runtime compilation from source and runtime redefinition of 
functions. (Not to mention things like per-object runtime changeable 
multimethod dispatch, which just makes my head hurt)

                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
