RE: Rogue https threads

2008-12-05 Thread Robert J Morman
Thank you.  I was just getting that response from our vendor as well.

Bob 

-Original Message-
From: Filip Hanik - Dev Lists [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 04, 2008 4:59 PM
To: Tomcat Users List
Subject: Re: Rogue https threads

a simple upgrade to 6.0.18 would most likely solve your problem

Filip
Robert J Morman wrote:
 Good afternoon.
  
 We have a problem with our production tomcat server in that the CPU 
 will climb after a restart from 1-2% to 100%.  The rate of climb 
 corresponds to the amount of traffic we receive (the more we have, the
 faster it climbs).  I noticed a couple days ago (by using Lambda
 Probe), that we are getting 'rogue' https threads.  These are threads 
 that are stuck in the Service stage for a particular request.  I 
 notice that as these threads become stuck persistently-serviced, the 
 CPU seems to jump about 6% at a time (for each thread).  Once we hit 
 99-100% CPU, we have about
 15 of these and we are required to restart tomcat (as it's not 
 responding with much priority).  Lambda Probe notes that there are no
 Current Busy threads, even though it shows these as being Serviced.
  
 Is there a way to get these to time out?
  
 Specifics:
 Tomcat 6.0.16
 Java 1_5_16
  
 Conf connection snippets:
  
 <Connector port="80" protocol="HTTP/1.1"
            connectionTimeout="2"
            enableLookups="false"
            maxThreads="100"
            minSpareThreads="5"
            maxSpareThreads="40"
            redirectPort="443" />

 <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
            connectionTimeout="6"
            port="443" minSpareThreads="5" maxSpareThreads="15"
            enableLookups="false" disableUploadTimeout="false"
            acceptCount="100" maxThreads="100"
            scheme="https" secure="true" SSLEnabled="true"
            keystoreFile="e:\apache\tomcat6\.keystore"
            keystorePass="changeit"
            clientAuth="false" sslProtocol="TLS" />
  
 Session timeout is set to 30 minutes in web.xml.
  
 Bob Morman
 EMCSA, MCSA
 Enterprise Systems Manager
 ASM International Headquarters
 http://www.asminternational.org
 440/338-5151 x5478
  
 The No. 1 reference on metals casting is back with new ways to improve
energy efficiency, productivity and product performance!  Read free
sample articles from the all-new ASM Handbook, Volume 15: Casting, and
take advantage of special pre-publication prices before Dec. 15.  For
more on ASM Handbooks:
http://asmcommunity.asminternational.org/portal/site/www/MatInformation/
Handbooks/.

   


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Rogue https threads

2008-12-04 Thread Robert J Morman
Good afternoon.
 
We have a problem with our production tomcat server in that the CPU will
climb after a restart from 1-2% to 100%.  The rate of climb corresponds
to the amount of traffic we receive (the more we have, the faster it
climbs).  I noticed a couple days ago (by using Lambda Probe), that we
are getting 'rogue' https threads.  These are threads that are stuck in
the Service stage for a particular request.  I notice that as these
threads become stuck persistently-serviced, the CPU seems to jump about
6% at a time (for each thread).  Once we hit 99-100% CPU, we have about
15 of these and we are required to restart tomcat (as it's not
responding with much priority).  Lambda Probe notes that there are no
Current Busy threads, even though it shows these as being Serviced.
 
Is there a way to get these to time out?
 
Specifics:
Tomcat 6.0.16
Java 1_5_16
 
Conf connection snippets:
 
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="2"
           enableLookups="false"
           maxThreads="100"
           minSpareThreads="5"
           maxSpareThreads="40"
           redirectPort="443" />

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="6"
           port="443" minSpareThreads="5" maxSpareThreads="15"
           enableLookups="false" disableUploadTimeout="false"
           acceptCount="100" maxThreads="100"
           scheme="https" secure="true" SSLEnabled="true"
           keystoreFile="e:\apache\tomcat6\.keystore"
           keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />
 
Session timeout is set to 30 minutes in web.xml.
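
One way to see where those stuck workers are pinned (a sketch of my own, not from the thread; `jstack <pid>` from a JDK 6 install, or Ctrl+Break on the Tomcat console window, gives the same view from outside the process) is a programmatic dump of every live thread's stack:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ThreadDump {
    // One formatted line per frame, grouped by thread name and state.
    static List<String> dumpAll() {
        List<String> out = new ArrayList<String>();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            out.add(e.getKey().getName() + " (" + e.getKey().getState() + ")");
            for (StackTraceElement frame : e.getValue()) {
                out.add("    at " + frame);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A stuck HTTPS worker would show up as an http-443 thread pinned
        // in the same frames in dump after dump.
        for (String line : dumpAll()) {
            System.out.println(line);
        }
    }
}
```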
 
Bob Morman
EMCSA, MCSA
Enterprise Systems Manager
ASM International Headquarters
http://www.asminternational.org
440/338-5151 x5478
 


Questions regarding MaxPermGen

2008-10-28 Thread Robert J Morman
Good afternoon.  We run a portal solution on top of Tomcat 6.0.16 (and
Java 1_5_16).  We are running out of PermGen space for several instances
of tomcat, which I believe could be some bad code we've received from
our development team.
 
To test a theory, I'd like to expand the size of our PermGen memory
space from the 64m default to 128m (and possibly higher).  I know by
increasing this setting, I could just be delaying the inevitable.  I am
running Lambda Probe (LP), allowing me to see the various memory
allocations and I see (as well in the logs), PermGen cap out and then
continuous GC occurring.  
 
I have a 2-part question.
 
1. In Tomcat6w.exe, I set one java_opt to include -XX:MaxPermGen=128m,
but the tomcat service then does not start up.  When I change it to be
-DXX:MaxPermGen=128m (like a lot of our other settings), it starts up
just fine.  Yet.. Lambda probe still shows that I only have a 64m cap on
PermGen.  Am I setting this properly?  (everything I read says yes, but
I'm not getting confirmation via LP).
 
2.  How can I verify PermGen max size has been increased other than
using Lambda probe (to keep LP honest).
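
One JMX-based cross-check (a sketch of my own, class name mine) is to list each memory pool's cap directly from the running JVM; on a Java 5 HotSpot JVM one of the pools is named "Perm Gen", and its max should reflect the flag if it was accepted:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.util.ArrayList;
import java.util.List;

public class MemPools {
    // One "name=maxBytes" entry per memory pool in this JVM
    // (max is -1 when the pool has no configured cap).
    static List<String> poolMaxes() {
        List<String> out = new ArrayList<String>();
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            out.add(pool.getName() + "=" + pool.getUsage().getMax());
        }
        return out;
    }

    public static void main(String[] args) {
        // On a Java 5 HotSpot JVM one line reads "Perm Gen=..."; the number
        // should match the PermGen cap the JVM actually enforces.
        for (String line : poolMaxes()) {
            System.out.println(line);
        }
    }
}
```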
 
Below are all the java_opts I currently have set.  Some are specific to
the portal environment we run.
 
-Dcatalina.home=E:\apache\Tomcat6
-Dcatalina.base=E:\apache\Tomcat6
-Djava.endorsed.dirs=E:\apache\Tomcat6\common\endorsed
-Djava.io.tmpdir=E:\apache\Tomcat6\temp
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=E:\apache\Tomcat6\conf\logging.properties
-Dcom.vignette.workingDir=E:\Vignette\cds\inst-vgninst\vcm-vgninst\cdsvcs\stage-prod\cds-prod\as\conf
-Dcom.vignette.portal.installdir.path=C:\Vignette\Portal
-Dcom.sun.management.jmxremote
-DXX:MaxPermGen=256m
 
For perspective, I am an administrator, not a developer.  
 
Bob Morman
EMCSA, MCSA
Enterprise Systems Manager
ASM International Headquarters
http://www.asminternational.org
 
It's the microelectronics FA event of the year in North America's Best Big 
City, as Portland welcomes the International Symposium for Testing and Failure 
Analysis (ISTFA), sponsored by the Electronic Device Failure Analysis Society.  
Dates are Nov. 2-6, 2008.  For details: www.istfa.org.

Please consider the environment before printing this e-mail.


RE: Questions regarding MaxPermGen

2008-10-28 Thread Robert J Morman
Thank you (Charles/Rainer) for your help.  I had mistyped the option
name, but also found that I had it set too high.  It wasn't taking 256m
and would start if I set it lower.  I understand '-D' now as well.

Now to see if we have some bad leaks or we just needed more breathing
room.

Bob

-Original Message-
From: Rainer Jung [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 28, 2008 2:52 PM
To: Tomcat Users List
Subject: Re: Questions regarding MaxPermGen

Hi,

Robert J Morman schrieb:
 Good afternoon.  We run a portal solution on top of Tomcat 6.0.16 (and
 Java 1_5_16).  We are running out of PermGen space for several
 instances of tomcat, which I believe could be some bad code we've 
 received from our development team.
  
 To test a theory, I'd like to expand the size of our PermGen memory 
 space from the 64m default to 128m (and possibly higher).  I know by 
 increasing this setting, I could just be delaying the inevitable.  I 
 am running Lambda Probe (LP), allowing me to see the various memory 
 allocations and I see (as well in the logs), PermGen cap out and then 
 continuous GC occurring.
  
 I have a 2-part question.
  
 1. In Tomcat6w.exe, I set one java_opt to include -XX:MaxPermGen=128m,
MaxPermGen is wrong, MaxPermSize is right.

Have fun with

http://blogs.sun.com/watt/resource/jvm-options-list.html

 but the tomcat service then does not start up.  When I change it to be
 -DXX:MaxPermGen=128m (like a lot of our other settings), it starts up
-D sets system properties (an analogy of environment variables in the
platform-independent Java world), but there will be no code interested
in the system property you used, so it simply won't change anything.
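
To illustrate the difference (a toy example of my own, names mine): -D merely records a string that code can read back through System.getProperty, while memory caps are -XX JVM flags the runtime itself interprets:

```java
public class SysPropDemo {
    public static void main(String[] args) {
        // Launched as:  java "-DXX:MaxPermGen=128m" SysPropDemo
        // -D only defines a string property; no JVM code reads this one,
        // so it has no effect on memory limits.
        System.out.println(System.getProperty("XX:MaxPermGen"));

        // A real memory cap is a JVM flag, not a property:
        //   java -XX:MaxPermSize=128m ...
        // For comparison, the heap cap the JVM actually enforces:
        System.out.println(Runtime.getRuntime().maxMemory());
    }
}
```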

 just fine.  Yet Lambda Probe still shows that I only have a 64m cap
 on PermGen.  Am I setting this properly?  (everything I read says yes,
 but I'm not getting confirmation via LP).

Since -D... doesn't change memory settings, you will still run with 64MB
max.

Regards,

Rainer

-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 
