Hello,

I wanted to update this thread.
It looks like the periodic spikes could be related to the environment, most
probably a scheduled task on the database or storage side. I came to this
conclusion because I was able to see the same spikes in other applications
running on JBoss.

Thanks
Vinay

On Thu, Mar 1, 2012 at 10:37 AM, Vinay Pothnis <[email protected]>
wrote:
> Sure!
>
> The tests with Jetty 8.1.1 were not fruitful. I still have the same
> periodic spikes.
> See the attached CPU utilization graph.
>
> Unfortunately, the tests with BoneCP would take more time for integration
> and testing.
>
> In the meantime, any other ideas would be greatly appreciated!
>
> Thanks!
> Vinay
>
>
> On Wed, Feb 29, 2012 at 3:32 PM, Jeff Andrews <[email protected]>
> wrote:
>>
>> Hi Vinay,
>>
>> I'd be interested in your results with 8.1.1 and also with BoneCP. Would
>> you keep me in mind and inform me of your results?
>>
>> Thanks,
>>
>> Jeff
>>
>> Sent from my iPad
>>
>> On Feb 29, 2012, at 4:44 PM, Vinay Pothnis <[email protected]>
>> wrote:
>>
>> Yes - I am using DBCP for the connection pool.
>> BoneCP looks quite interesting! Definitely worth checking out - I will do
>> that after I try 8.1.1.
>>
>> Thanks!
>> Vinay
>>
>>
>> On Wed, Feb 29, 2012 at 1:38 PM, Joakim Erdfelt <[email protected]>
>> wrote:
>>>
>>> Also, are your Hibernate -> JDBC connections to the DB served by a
>>> connection pool?
>>>
>>> We've had many positive reports on using BoneCP (over c3p0) for that
>>> purpose.
>>>
>>> --
>>> Joakim Erdfelt
>>> [email protected]
>>>
>>> http://webtide.com | http://intalio.com
>>> (the people behind jetty and cometd)
>>>
>>>
>>>
>>> On Wed, Feb 29, 2012 at 2:30 PM, Vinay Pothnis <[email protected]>
>>> wrote:
>>>>
>>>> Thanks for the response, Joakim!
>>>> I will give that a try.
>>>>
>>>> -Thanks
>>>> Vinay
>>>>
>>>>
>>>> On Wed, Feb 29, 2012 at 1:24 PM, Joakim Erdfelt <[email protected]>
>>>> wrote:
>>>>>
>>>>> Could you try 8.1.1?
>>>>> It's had some significant updates with regard to NIO.
>>>>>
>>>>>
>>>>>
>>>>> http://repo1.maven.org/maven2/org/eclipse/jetty/jetty-distribution/8.1.1.v20120215/
>>>>>
>>>>> Usually behavior like you describe would prompt me to ask about your
>>>>> GC setup, but I can see that's already configured and tweaked quite
>>>>> well.
>>>>>
>>>>> --
>>>>> Joakim Erdfelt
>>>>> [email protected]
>>>>>
>>>>> http://webtide.com | http://intalio.com
>>>>> (the people behind jetty and cometd)
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Feb 29, 2012 at 2:17 PM, Vinay Pothnis
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I am seeing a periodic CPU spike when the Jetty server is handling
>>>>>> requests. The spike occurs regularly every 5 minutes and uses up
>>>>>> 90-100% CPU for a short duration.
>>>>>> The CPU utilization falls back to normal after that short spike. This
>>>>>> happens only when the server is receiving requests.
>>>>>>
>>>>>> I have taken thread dumps during several spikes and have not been
>>>>>> able to conclude anything concrete. In the dumps I observed the
>>>>>> following:
>>>>>>
>>>>>> 1. There were hundreds of threads in BLOCKED state, all waiting for a
>>>>>> lock held by a single thread.
>>>>>> 2. The thread holding that lock was itself reported BLOCKED, but it
>>>>>> was not waiting for any other lock. Example shown below.
>>>>>>
>>>>>> "qtp732533575-307956" prio=10 tid=0x00002aad980c4800 nid=0x3e1f
>>>>>> waiting for monitor entry [0x0000000067f69000]
>>>>>>    java.lang.Thread.State: BLOCKED (on object monitor)
>>>>>> at org.hibernate.util.SoftLimitMRUCache.get(SoftLimitMRUCache.java:74)
>>>>>> - locked <0x00002aabefafa198> (a org.hibernate.util.SoftLimitMRUCache)
>>>>>> at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:88)
>>>>>> at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156)
>>>>>> at org.hibernate.impl.AbstractSessionImpl.getNamedQuery(AbstractSessionImpl.java:82)
>>>>>> at org.hibernate.impl.SessionImpl.getNamedQuery(SessionImpl.java:1287)
>>>>>> at sun.reflect.GeneratedMethodAccessor202.invoke(Unknown Source)
>>>>>>
>>>>>> 3. The actual lock that the threads were waiting on varied; it was
>>>>>> not the same across the different 5-minute spikes.
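
[Editor's note: the BLOCKED-threads-behind-a-BLOCKED-owner pattern described above can be inspected programmatically with the standard java.lang.management.ThreadMXBean API. The sketch below is not from the application in this thread; it fabricates a small lock-contention scenario (thread names are made up) and then lists each BLOCKED thread together with the monitor it wants and that monitor's owner, which is essentially what the quoted dump shows.]

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class BlockedScan {
    // Stand-in for a shared monitor such as the SoftLimitMRUCache instance.
    static final Object CACHE_LOCK = new Object();

    // Print every thread currently BLOCKED on a monitor, along with the
    // lock it is waiting for and the name of the thread holding it.
    static int printBlocked() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        int blocked = 0;
        for (ThreadInfo ti : mx.dumpAllThreads(false, false)) {
            if (ti != null && ti.getThreadState() == Thread.State.BLOCKED) {
                blocked++;
                System.out.printf("%s BLOCKED on %s held by %s%n",
                        ti.getThreadName(), ti.getLockName(), ti.getLockOwnerName());
            }
        }
        return blocked;
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch lockHeld = new CountDownLatch(1);
        // One thread grabs the lock and sits on it for a while.
        Thread owner = new Thread(() -> {
            synchronized (CACHE_LOCK) {
                lockHeld.countDown();
                try { Thread.sleep(5000); } catch (InterruptedException ignored) {}
            }
        }, "lock-owner");
        owner.start();
        lockHeld.await();

        // A second thread then blocks trying to enter the same monitor.
        Thread waiter = new Thread(() -> { synchronized (CACHE_LOCK) {} }, "qtp-waiter");
        waiter.start();
        while (waiter.getState() != Thread.State.BLOCKED) Thread.sleep(10);

        int blocked = printBlocked();
        if (blocked < 1) throw new AssertionError("expected at least one BLOCKED thread");

        owner.interrupt();  // release the lock early
        owner.join();
        waiter.join();
    }
}
```

Running the same scan (or `jstack`) repeatedly across one spike and diffing the owner names is one way to confirm whether it is always the same kind of lock that is contended.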
>>>>>>
>>>>>> Environment Details:
>>>>>> * Embedded Jetty Version 8.0.4
>>>>>> * Java 1.6.0_17
>>>>>> * Red Hat Enterprise Linux Server release 5.2 (Tikanga)
>>>>>>
>>>>>> JVM Parameters:
>>>>>> -server -Xmx11g -Xms11g -XX:MaxPermSize=256m -XX:+UseParNewGC
>>>>>> -XX:+UseConcMarkSweepGC -XX:NewSize=5g -XX:MaxNewSize=5g
>>>>>> -XX:SurvivorRatio=6 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
>>>>>> -Dsun.rmi.dgc.client.gcInterval=3600000
>>>>>> -Dsun.rmi.dgc.server.gcInterval=3600000
>>>>>>
>>>>>> I have also attached the CPU usage pattern. Any pointers would be
>>>>>> greatly appreciated.
>>>>>>
>>>>>> Thanks!
>>>>>> Vinay
>>>>>>
>>>>>> _______________________________________________
>>>>>> jetty-users mailing list
>>>>>> [email protected]
>>>>>> https://dev.eclipse.org/mailman/listinfo/jetty-users
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>