On 11 Feb 2003, Dirk Koopman wrote:

> On Tue, 2003-02-11 at 00:02, Shevek wrote:
> >
> > AIUI, the current Perl interpreter is written traditionally, whereas
> > the Parrot assembler interpreter is written to allow direct threaded
> > interpretation. This should reduce the bytecode overhead, but doesn't
> > give any of the advantages of a system like DyC.
>
> What does 'written traditionally' mean here?

Oh come *on*, read the comment: "Programmer cluefulness being equal".
A traditional interpreter core looks something like:

    while (1) {
        switch (*ip++) {
        case F_ADD:
            f_add();
            break;
        /* (etc) */
        }
    }

which has one range check and five (wet finger) jumps per instruction,
in four code localities. Compare this to direct threaded dispatch:

    #define NEXT() goto *insn_table[*ip++]

    f_add()
    {
        ...
        NEXT();
    }

which has as overhead no range checks and one jump, in one code
locality. (A complete, runnable sketch of both styles appears further
down.) I ask you to remember the comment "cluefulness being equal" for
the remainder of this mail.

> > > Having said that, I've heard of compilers being so smart they
> > > outpace hand-coded C but I imagine on the whole that's the edge
> > > case.
> >
> > Basically, you have more information at runtime than you have at
> > compile time. Your basic yardstick is compile-time optimised C. This
> > has to make certain assumptions about locality, branch taken/not
> > taken, and so on. At runtime, you get actual statistical information
> > about things like branch taken optimisations, so you can manage
> > instruction scheduling, which way to do the jumps, even on which
> > branches to spill registers, and get a considerable speedup with the
> > runtime information.
>
> And then again, you may not. You may also get a large overhead
> collecting this information, altering the code accordingly. You may
> also get race conditions and oscillation as you exercise different
> bits of code and recast it on the fly.

The case of DyC is probably sufficiently well studied and will answer
all these questions. Caveat reader, it's been a while since I read the
papers on it, but it was published in a well respected journal.

> > Bytecoded systems typically maintain a lot more semantic information
> > than simple compiled C or assembler. Therefore, after flash compiling
> > to instruction code, it's possible to annotate the semantic
> > information in the bytecode with the runtime statistics, and use this
> > to guide the optimiser in a re-compilation of the binary.
>
> Yes, maybe, but show me one of these systems that _consistently_
> produces faster code than someone who is "talented". I willingly agree
> that the code is physically produced faster - but it don't go as well.

This is again why I brought up DyC, a compiler which will compile your
standard C code into a more semantically rich form which does give a
speedup over a standard C compiler. It takes exactly the same source
code as input and uses the semantic richness available to it (and to
Java and hopefully Parrot) to produce faster code. I was answering a
purely technical question.

> And this is the nub of it. Basically there aren't enough "talented"
> people to go around. Therefore people try to throw ("talentedly"
> programmed) machines at the problem to try to get a better result than
> average. But that is all you will get: slightly better code a lot
> quicker.

Now I do not understand your point. Both man and machine must quite
clearly be capable. I was answering a question about speed comparisons
of bytecoded versus compiled languages simply and purely from the
language perspective, assuming (essentially) the same source code had
been compiled in two different compilers. I think your reply throughout
demonstrates a different (and valuable) interpretation of the question
from the human skills perspective, which I was not seeking to address.
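To make the dispatch comparison at the top of this mail concrete, here
is a minimal, self-contained sketch of both styles. The three-opcode
machine and every name in it (F_PUSH, F_ADD, F_HALT, run_switch,
run_threaded) are made up for illustration, and the threaded version
relies on GCC's labels-as-values extension, so it needs gcc or a
compatible compiler:

    #include <stdio.h>

    enum { F_PUSH, F_ADD, F_HALT };

    /* Toy program: push 2, push 3, add, halt. */
    static const int prog[] = { F_PUSH, 2, F_PUSH, 3, F_ADD, F_HALT };

    /* Traditional dispatch: one bounds check plus several jumps
     * (loop, switch, break) per instruction. */
    static int run_switch(const int *ip)
    {
        int stack[16], *sp = stack;
        for (;;) {
            switch (*ip++) {
            case F_PUSH: *sp++ = *ip++; break;
            case F_ADD:  sp--; sp[-1] += sp[0]; break;
            case F_HALT: return sp[-1];
            }
        }
    }

    /* Direct threaded dispatch: one indirect jump per instruction,
     * no bounds check.  GCC extensions: &&label, goto *expr. */
    static int run_threaded(const int *ip)
    {
        static const void *insn_table[] = { &&l_push, &&l_add, &&l_halt };
        int stack[16], *sp = stack;
    #define NEXT() goto *insn_table[*ip++]
        NEXT();
    l_push: *sp++ = *ip++; NEXT();
    l_add:  sp--; sp[-1] += sp[0]; NEXT();
    l_halt: return sp[-1];
    #undef NEXT
    }

    int main(void)
    {
        printf("switch:   %d\n", run_switch(prog));   /* prints 5 */
        printf("threaded: %d\n", run_threaded(prog)); /* prints 5 */
        return 0;
    }

The two loops do identical work inside the opcodes; the only difference
is the dispatch itself, which is exactly the overhead being argued
about.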
> > It is still worth noting that a Perl implementation of Joe's String
> > Munging System will very likely be faster than one written in C
> > simply because Perl's algorithms for string handling will be more
> > efficient than Joe's general purpose C algorithms, since more time
> > and expertise has been put into them.
>
> I can only say that I fundamentally disagree. A talented
> implementation will be faster in execution than perl. Period.
> Actually, I strongly suspect that even a pretty shoddy implementation
> will execute faster than perl.

Perl is the fundamental talented implementation. In fact, if you work
out the overheads for something which is heavy on string operations
(and which hence spends most of its time executing in the opcodes,
rather than in the interpreter overhead), you will find that the talent
that has gone into the Perl interpreter outweighs that which you or I
could put into our own libraries.

This applies equally well in cryptography. I am quite happy to write
some cryptographic applications in Perl because I know that the
majority of the program time (>90%) is spent in the bignum library, and
the glue language is irrelevant.

This is still a minor sidetrack to the original, strictly technical
question about the optimisation of languages.

S.

-- 
Shevek
I am the Borg.

sub AUTOLOAD{my$i=$AUTOLOAD;my$x=shift;$i=~s/^.*://;print"$x\n";eval
qq{*$AUTOLOAD=sub{my\$x=shift;return unless \$x%$i;&{$x}(\$x);};};}
foreach my $i (3..65535) { &{'2'}($i); }
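P.S. To put a number on "the glue language is irrelevant": a
back-of-envelope sketch of the Amdahl arithmetic. The 90% figure is the
assumption from above, not a measurement:

    #include <stdio.h>

    /* If a fraction f of the run time is spent inside the (C) bignum
     * library, speeding the glue language up by a factor s gives an
     * overall speedup of only 1 / (f + (1 - f) / s). */
    int main(void)
    {
        const double f = 0.90;  /* assumed fraction of time in the library */
        double s;
        for (s = 1; s <= 100; s *= 10) {
            double overall = 1.0 / (f + (1.0 - f) / s);
            printf("glue %4.0fx faster -> whole program %.2fx faster\n",
                   s, overall);
        }
        return 0;
    }

On that assumption, even an infinitely fast glue language caps the
whole program at about 1.11x, which is why the choice of glue barely
matters.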