I think the upside of the JRockit JVM is that it ships with very efficient
profiling tools (e.g., JRockit Mission Control), which are very useful for
performance tuning. For the Sun JVM, the profiling tools are either primitive
or very inefficient. Any luck with profiling Hadoop under the Sun JVM?
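For anyone who wants to try this: Hadoop can pass arbitrary JVM flags (such as
the Test A/B/C options below) to the child task JVMs, and with the Sun JVM it
can also attach the built-in hprof agent to a few tasks per job. A minimal
sketch, assuming the 0.19/0.20-era property names in hadoop-site.xml /
mapred-site.xml (the -Xmx value is just a placeholder):

  <!-- pass the JVM flags under test to every map/reduce task JVM -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m -server</value>
  </property>

  <!-- turn on hprof sampling for a handful of tasks per job -->
  <property>
    <name>mapred.task.profile</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.task.profile.maps</name>
    <value>0-2</value>
  </property>
  <property>
    <name>mapred.task.profile.reduces</name>
    <value>0-2</value>
  </property>
  <property>
    <name>mapred.task.profile.params</name>
    <value>-agentlib:hprof=cpu=samples,heap=sites,depth=6,force=n,thread=y,verbose=n,file=%s</value>
  </property>

If I remember right, the %s in the profile params is substituted per attempt
and the profile files are pulled back to the directory the job was submitted
from, one per profiled task. Profiling only the first few maps and reduces
keeps the hprof overhead off the rest of the job.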

Thanks,
-JQ

On Sat, May 9, 2009 at 12:59 AM, Steve Loughran <ste...@apache.org> wrote:

> Grace wrote:
>
>> Thanks, all, for your replies.
>>
>> I have run it several times with different Java options for the Map/Reduce
>> tasks, but there is not much difference.
>>
>> Here are examples of my test settings:
>> Test A: -Xmx1024m -server -XXlazyUnlocking -XlargePages
>> -XgcPrio:deterministic -XXallocPrefetch -XXallocRedoPrefetch
>> Test B: -Xmx1024m
>> Test C: -Xmx1024m -XXaggressive
>>
>> Are there any tricks or special settings for the JRockit VM on Hadoop?
>>
>> The Hadoop Quick Start guide says "Java(TM) 1.6.x, preferably from Sun".
>> Is there a known concern about JRockit performance?
>>
>>
> The main thing is that all the big clusters are (as far as I know) running
> Linux (probably RedHat) and Sun Java. That is where the performance and
> scale testing is done. If you are willing to spend time doing the
> experiments and tuning, then I'm sure we can update those guides to say
> "JRockit works, here are some options...".
>
> -steve
>
