Angelo Schneider writes:
 > Jochen Hoenicke wrote:
 > > http://www.informatik.uni-oldenburg.de/~delwi/japhar/ideas
 > > 
 >
 > Hi all,
 > 
 > Jochen seems to focus on method calls in JITing bytecode.
 > 
 > Why do you focus on that and not on overall byte code ->
 > native code jit compilation?

Because only that part needs to know the internals of the data
structures.  For the remaining instructions like iadd or aload I
don't need to know anything about how an object is laid out.

I have some nice ideas for generating a JIT that caches most values in
registers, so that load instructions aren't translated at all.  This
should give almost optimal code.  I want to write something that
executes Java at almost native speed.  Some of that may be just a
dream; maybe it will make the JIT so slow that it would be better not
to compile the methods at all, but I think it could be worthwhile.

A JIT clearly needs to interface with the JVM in some way, and this
interface consists of method calls, field accesses, instanceof,
checkcast, and the array operations.  This is why I focus on this part.

The JIT code on that page is more of a test of how one can use the data
structures to efficiently implement the JVM-dependent bytecode ops.  I
didn't mean to say that JITed code must look exactly this way.

 > Jochen, do you like to treat jit compiled methods like
 > native (JNI) methods? Isn't there an other way?

My JITed methods will not use the JNI interface; they will use the
internals directly.  The reason is speed.  I want the JITed code to be
as fast as possible, without going through several layers just to
call a virtual method or access a field.  It should be _faster_ than
well-written JNI.

Look at tya and you will see that it uses a lot of undocumented
internals to make virtual method calls fast.  It _does not_ generate
valid JNI code, and that is the main reason why it is so fast.

 > I would have expected that jitting is quite easy and 
 > gives in a first approach sufficient speed improvements if
 > you replace the bytecode by jumps into the JVM code.
 > 
 > So the interpreter would have to execute a sequence of
 > JSRs generated by the JIT compiler.

That sounds slow!  It means calling a method for every single bytecode
instruction, which can't be much faster than interpreting the code.  I
would even say it's slower than a good hand-optimized interpreter
written in assembler.

BTW: my suggestion should also give a speedup for interpreted code.
Currently, calling a method from JNI or from the interpreter involves
several checks, just to make sure that the method is loaded, isn't
native, and so on, before the interpreter for that method is started.

I know that this is another big change to the object layout that can
break many things, but it is really needed to get good performance.

  Jochen

PS: I sent two mails from my university account
([EMAIL PROTECTED]) over the last few days.  Neither
of them made it to the list.  Does anyone know why?
