Michael Hind wrote:
> Steve Blackburn wrote:
> 
> 
>>Hi Will,
>>
>>
>>>As a lurker on this list, I thought I'd share the following article:
>>>
>>>http://www.osnews.com/story.php?news_id=10729
>>>
>>>This might be a good article for those new to JVMs.  It's not as
>>>technical as the various white paper citations being bandied about but
>>>it's a nice introduction to just-in-time-compiling / optimization,
>>>Swing threading, and other JVM performance-related issues.
> 
> 
>>Yes.  I think this is a nice and high level overview.  These subjects
>>(almost exactly) are covered in greater depth in the various links I've
>>posted, most particularly in the talk by Dave Grove and others which I
>>posted most recently.  If any readers found Will's post interesting, I
>>*strongly* encourage you to read the following for more on the *how* and
>>*why*...
> 
> 
> I agree. 
> 
> Dave's talk and the tutorials Steve has referred to also debunk a popular 
> myth that, unfortunately, the osnews article above further promotes: that 
> the JIT kicks in the first time a method is executed.  This is NOT true in 
> any of today's high-performing JVMs, because the compilation "burp" would 
> be too noticeable in large applications, like Eclipse, that have thousands 
> of methods that execute infrequently.  (This myth was true in the early 
> Smalltalk, Self, and Java VMs, but that was over a decade ago.)
> 
> Instead, modern JVMs try to avoid these compilation burps by using a 
> strategy (selective optimization) that tries to ensure a method really is 
> important enough to invoke the JIT on.  This is often done by counting the 
> number of times a method is called, sometimes augmented with a loop count 
> for the method (in case the method is called infrequently, but a good 
> chunk of execution time is nevertheless spent in it).  When this cheap 
> profile indicates a method is important, the JIT is used to improve the 
> generated code.  Other systems use infrequent sampling (~100 samples/sec) 
> of the call stack to find candidates to be JITed.  Many such systems also 
> use multiple levels of JIT optimization (similar to a static compiler's 
> optimization levels) depending on how important the method appears at 
> runtime.  Jikes RVM feeds the sampling data into a simple cost/benefit 
> model to determine what level of the JIT (if any) to use.
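[The counting scheme Mike describes can be sketched in a few lines of Java. Everything here is invented for illustration — class names, the threshold, the loop-edge weight, and the hooks; a real VM does this bookkeeping in the interpreter/baseline fast path, not with a HashMap:]

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of counter-based selective optimization.
// Thresholds and method names are made up for illustration.
class SelectiveOptimizer {
    static final int HOTNESS_THRESHOLD = 1000; // invented threshold
    static final int LOOP_WEIGHT = 10;         // back edges count extra

    private final Map<String, Integer> hotness = new HashMap<>();
    private final Map<String, Boolean> optimized = new HashMap<>();

    // Called on every method entry by interpreted/baseline code.
    void onInvoke(String method) {
        bump(method, 1);
    }

    // Called on every loop back edge, so a long-running loop gets
    // noticed even in a method that is invoked only once.
    void onLoopBackEdge(String method) {
        bump(method, LOOP_WEIGHT);
    }

    private void bump(String method, int amount) {
        int count = hotness.merge(method, amount, Integer::sum);
        if (count >= HOTNESS_THRESHOLD
                && !optimized.getOrDefault(method, false)) {
            optimized.put(method, true);
            recompile(method);
        }
    }

    private void recompile(String method) {
        // A real VM would invoke the optimizing JIT here;
        // this sketch just records the decision.
        System.out.println("JIT-compiling " + method);
    }

    boolean isOptimized(String method) {
        return optimized.getOrDefault(method, false);
    }
}
```

[Note the loop-back-edge weighting: it is the "augmented with a loop count" part, covering methods that are called rarely but run long.]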
> 
> Using a selective compilation strategy typically results in only about 20% 
> of the executed methods being highly optimized.  However, when life is 
> good, that 20% accounts for a large fraction (>95%) of the execution time, 
> so this strategy has been shown to perform better than the 
> compile-on-first-execution approach.
> 
> So how are the other ~80% of the methods executed? 
> 
> They are either interpreted or compiled to native code with a very quick 
> compiler that produces poor-quality code (yes, you can think of it as a 
> dumb JIT).  Some systems (Sun HotSpot, IBM DK, IBM J9) use an interpreter; 
> other systems (JRockit, ORP, Jikes RVM) use a cheap compiler.
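[The sampling alternative Mike mentions can be sketched just as hypothetically. The ~100/sec timer is only simulated here by the caller, and the fixed threshold stands in for a real cost/benefit model:]

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of sampling-based candidate selection:
// a periodic timer records the method on top of the call stack,
// and methods seen in enough samples become JIT candidates.
class SamplingProfiler {
    static final int SAMPLE_THRESHOLD = 5; // invented stand-in for
                                           // a cost/benefit model

    private final Map<String, Integer> samples = new HashMap<>();

    // In a real VM a ~100/sec timer interrupt would call this with
    // the currently executing method; here the caller supplies it.
    void recordSample(String topOfStack) {
        samples.merge(topOfStack, 1, Integer::sum);
    }

    // Any method that shows up in enough samples is presumed to be
    // worth an optimizing compile.
    boolean shouldOptimize(String method) {
        return samples.getOrDefault(method, 0) >= SAMPLE_THRESHOLD;
    }
}
```

[The appeal of sampling is its near-zero overhead on methods that never get hot: cold methods pay nothing, whereas counters add a small cost to every invocation.]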

Mike,

how do the above strategies compare to the RISC-ification of native
instructions that happens in modern superscalar CPUs?

do you think there is something to learn there, or has it been learned already?

-- 
Stefano, honestly curious.
