Hi Jeremiah,

Thanks very much for the reply. It was great to hear that at least one
other person has had a similar problem. I have a few questions if you
don't mind:

1) Was your Java performance problem caused by the Java variant of bug
ID 6815915? It's not clear whether that bug affected your diagnosis
using DTrace or was itself the cause of the problem. That is, did
setting the environment variable DTRACE_DOF_INIT_DISABLE fix your
performance problems, or did it just enable you to get better results
from DTrace?

2) Running "export DTRACE_DOF_INIT_DISABLE=true" before running my
program in the shell doesn't seem to make any difference to the
execution time. Is this how you implemented the workaround in your JVM
startup scripts?
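
In case it matters, here's a sketch of the wrapper I tried (the launch
line is a placeholder for my actual app):

```shell
# Export the variable so the java child process inherits it;
# setting it without export only affects the current shell.
DTRACE_DOF_INIT_DISABLE=true
export DTRACE_DOF_INIT_DISABLE

# Sanity check that it actually reached the environment:
env | grep '^DTRACE_DOF_INIT_DISABLE'

# java -jar /path/to/app.jar   # placeholder launch line
```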

3) Excuse my ignorance, but what is "guds"?

Cheers,

Kevin.



> Message: 2
> Date: Tue, 30 Mar 2010 08:10:19 PDT
> From: Jeremiah Campbell <jeremiah.campb...@jpmchase.com>
> To: dtrace-discuss@opensolaris.org
> Subject: Re: [dtrace-discuss] Analyzing java class loading with dtrace
> Message-ID: <441700064.151269961849235.javamail.tweb...@sf-app1>
> Content-Type: text/plain; charset=UTF-8
>
> I'm curious if you're seeing a problem similar to what I experienced 
> recently. We had just migrated some JVMs onto a brand new Solaris 10 
> T5220 and saw horrendous performance. We opened a case with Sun and 
> eventually got to a kernel engineer, who discovered we were running into a 
> variant of bug ID 6815915, which is specifically a C++/libc bug. However, 
> there is a Java variant of it that was affecting us (I never did see a 
> specific bug ID for the Java variant).
>
> The issue was discovered by collecting at least a level 2 guds and 
> examining the lockstat-H.out file. We were seeing a bunch of mutex locks 
> bubbling up out of the DTrace helper provider (HotSpot). The work-around 
> was to disable the DTrace helper functionality (it can be turned back on 
> via the command line if needed) before starting the application, by setting 
> an environment variable in your JVM startup scripts (the Tomcat profile, 
> for instance, if your JVMs run under Tomcat). Here's the specific quote 
> from the Sun engineer:
>
> "The system is under heavy lock contention on dtrace_meta_lock because all 
> java processes are busy in register or destroy dtrace helper functions. Java 
> should not always load DTrace probes. This is likely a Java variation of CR 
> 6815915 which is filed for the C++ runtime libray.
>
> The workaround is to run the application with the environment variable 
> DTRACE_DOF_INIT_DISABLE set. "
>
> The output below makes me wonder if you're getting some mutex locks out of 
> DTrace as well. You might try finding this with guds. I'm no DTrace guru, 
> so I couldn't tell you how to pull the same information directly out of 
> DTrace as what we found in guds.
>> Elapsed Times:
>>         SYSCALL          TIME (ns)
>>      ...
>> lwp_mutex_timedlock         1541179345
>>
>> CPU Times
>>         SYSCALL          TIME (ns)
>>      ...
>> lwp_mutex_timedlock           30490184
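
For what it's worth, I'd guess a DTrace one-liner along these lines could
pull the same per-syscall elapsed times directly, without guds. This is
only a sketch I haven't verified on our box:

```
# Untested sketch: sum elapsed time per syscall,
# similar to the guds table quoted above.
dtrace -n '
  syscall:::entry  { self->ts = timestamp; }
  syscall:::return /self->ts/ {
    @elapsed[probefunc] = sum(timestamp - self->ts);
    self->ts = 0;
  }'
```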
_______________________________________________
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org
