RE: DO NOT REPLY [Bug 5181] - HttpConnector [8080] No processor available, rejecting this connection

2002-03-20 Thread GOMEZ Henri

excellent technical analysis.

should be present in the tomcat faq

-
Henri Gomez
EMAIL : [EMAIL PROTECTED]
PGP KEY : 697ECEDD
PGP Fingerprint : 9DF8 1EA8 ED53 2F39 DC9B 904A 364F 80E6



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 20, 2002 4:26 AM
To: [EMAIL PROTECTED]
Subject: DO NOT REPLY [Bug 5181] - HttpConnector [8080] No processor
available, rejecting this connection


DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=5181.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=5181

HttpConnector [8080] No processor available, rejecting this connection





--- Additional Comments From [EMAIL PROTECTED] 2002-03-20 03:26 ---
I have not run into this problem using the Tomcat HTTP connector, but I have seen similar problems when using mod_jk to connect to Tomcat via AJP on a server under heavy load.

In my case, after a lot of detective work, I determined that Tomcat itself was not the problem.

There are a lot of things that can affect Tomcat's ability to handle a request, regardless of whether requests come from its own HTTP connector or from Apache via AJP.

You may have already looked at one or more of the following issues; I will include everything just for completeness.

The first thing I found is that JVM garbage collection can have a significant intermittent effect on Tomcat.  When GC occurs, processing by Tomcat freezes, yet the OS will continue to accept requests on the port.  When GC has completed, Tomcat will try to handle all pending requests.  If the GC took a significant amount of time, this can cause a cascading effect where Tomcat runs out of processors to handle requests.  I made the mistake of setting the JVM -Xmx too large.  The JVM ended up using more memory than the OS would keep in physical memory, and when a Full GC occurred, collecting objects swapped out to disk caused GC to take a significant amount of time: in my case, 70 seconds.  Decreasing -Xmx to make sure the JVM heap was always resident in physical memory fixed the problem.
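
For reference, with the -verbose:gc option described below, a pause like that shows up directly in the GC log as a Full GC entry whose last field is the pause time in seconds.  A rough sketch of classic HotSpot output, with made-up numbers:

  [GC 32768K->11021K(261888K), 0.0253124 secs]
  [Full GC 248172K->139805K(261888K), 70.1132840 secs]

Any Full GC time in the same ballpark as your connector or database timeouts is a strong hint that the heap is being swapped or is simply too large.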

JVM Memory Usage and Garbage Collection
---------------------------------------

It is very important to tune the JVM startup options for GC and JVM memory usage for a production server; a combined example of these options follows the list below.

1. Make sure you are running Tomcat with a JVM that supports HotSpot
   -server; I use 1.3.1_02.

2. Use incremental GC, the -Xincgc java startup option.

3. Try running Tomcat with the -verbose:gc java argument so you can collect
   data on GC.

4. Make sure the OS is keeping the entire JVM heap in physical memory and not
   swapping it out to disk.  Reduce -Xmx if this is a problem.

5. Try setting -Xms and -Xmx to the same size.

6. Search the fine web for articles on JVM GC and JVM performance tuning.
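
As a rough sketch of how these options fit together (assuming Tomcat is started through catalina.sh and a Sun HotSpot 1.3 JVM; the heap sizes are placeholders and must be small enough that the heap stays resident in physical memory):

  # Illustrative only: -server selects the HotSpot server VM, -Xincgc
  # turns on incremental GC, -verbose:gc logs every collection, and equal
  # -Xms/-Xmx values fix the heap size up front.
  JAVA_OPTS="-server -Xincgc -verbose:gc -Xms256m -Xmx256m"
  export JAVA_OPTS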

After researching and testing all of the above, I significantly reduced the maximum time for GCs.  99% of my GCs now run in < 0.05 sec; of the remaining, most run in < 1 sec; no more than 5-10 times a day do I see a GC > 1 sec, and they never exceed 5 sec.

Database access by applications
-------------------------------

If your application uses a database, make sure you set its connection timeout to a value greater than the maximum GC time you see.  Otherwise you will start seeing database connection failures.  I set my database connection timeouts to 10 seconds.
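
How the timeout is actually set depends on your driver or connection pool.  As one minimal sketch using only the standard JDBC API (the URL and credentials are placeholders, and not every driver honors the login timeout):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.SQLException;

  public class DbAccess {

      public static Connection getConnection() throws SQLException {
          // Wait at most 10 seconds for the database: comfortably above the
          // worst GC pause, but short enough that stalled requests do not
          // hold Tomcat processors indefinitely.
          DriverManager.setLoginTimeout(10);
          // Placeholder URL and credentials.
          return DriverManager.getConnection(
                  "jdbc:mysql://localhost/mydb", "user", "password");
      }
  }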

A problem with your database, or frequently reaching the maximum number of connections you allow in a db connection pool, can cause the type of problems you see.  If the db connections fail, or your connection pool is exhausted, each servlet waiting for a connection (remember, I recommended 10 seconds) will eat up an HTTP or AJP processor for 10 seconds.  This can cause a cascading effect where you see a lot of processors used by Tomcat.

Check your web applications for thread locking problems or long delays
-----------------------------------------------------------------------

Tomcat can't do anything useful by itself; it's the applications you install that provide the content.  There could very well be thread locking problems or other bugs which cause delays in a servlet handling a request.  This can cause Tomcat to appear to fail due to runaway use of processors.
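
As a purely hypothetical illustration of the kind of bug that has this effect (this is not code from any real application), a servlet that funnels every request through one coarse lock around a slow operation serializes all processing, and each waiting request holds an HTTP or AJP processor the whole time:

  import java.io.IOException;
  import javax.servlet.ServletException;
  import javax.servlet.http.HttpServlet;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;

  public class ReportServlet extends HttpServlet {

      private final Object lock = new Object();

      protected void doGet(HttpServletRequest req, HttpServletResponse resp)
              throws ServletException, IOException {
          // Anti-pattern: every concurrent request queues here while one
          // request runs the slow operation, tying up a processor per caller.
          synchronized (lock) {
              resp.getWriter().println(buildSlowReport());
          }
      }

      private String buildSlowReport() {
          // Stand-in for a slow computation, remote call or long query.
          try {
              Thread.sleep(5000);
          } catch (InterruptedException ignored) {
          }
          return "report";
      }
  }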

Increase maxProcessors
----------------------

Increase your maxProcessors to handle intermittent cascading of requests due to GC, etc.  I set my maxProcessors to 2X the maximum concurrent requests I see under heavy load.
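
For the Tomcat 4 HTTP connector, maxProcessors is an attribute of the Connector element in server.xml; a sketch with purely illustrative values:

  <!-- Values are illustrative, not a recommendation. -->
  <Connector className="org.apache.catalina.connector.http.HttpConnector"
             port="8080" minProcessors="5" maxProcessors="150"
             acceptCount="10" connectionTimeout="60000"/>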

Proposition for a change to Processors to help debug these problems
--------------------------------------------------------------------

Adding code to Processors so that they dump a stack trace for each existing thread when the pool of processors is exhausted could provide valuable information for debugging these problems.
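
A minimal sketch of what such a dump could look like (this is not Tomcat code, and it relies on Thread.getAllStackTraces(), which only exists on Java 5 and later JVMs; on the 1.3 VMs of the time a kill -3 / SIGQUIT thread dump is the closest equivalent):

  import java.util.Map;

  public final class ProcessorDebug {

      // Hypothetical helper, called when no processor is available.
      public static void dumpAllThreads() {
          System.err.println("Processor pool exhausted; dumping all threads:");
          for (Map.Entry<Thread, StackTraceElement[]> entry
                  : Thread.getAllStackTraces().entrySet()) {
              System.err.println("Thread: " + entry.getKey().getName());
              for (StackTraceElement frame : entry.getValue()) {
                  System.err.println("    at " + frame);
              }
          }
      }

      private ProcessorDebug() {
      }
  }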

Re: DO NOT REPLY [Bug 5181] - HttpConnector [8080] No processor available, rejecting this connection

2002-03-20 Thread Remy Maucherat

> excellent technical analysis.
>
> should be present in the tomcat faq

Yes. AFAIK, there's no FAQ, though :-(

Remy


--
To unsubscribe, e-mail:   mailto:[EMAIL PROTECTED]
For additional commands, e-mail: mailto:[EMAIL PROTECTED]




Re: DO NOT REPLY [Bug 5181] - HttpConnector [8080] No processor available, rejecting this connection

2002-03-20 Thread Punky Tse

How about creating a new doc titled Tuning/Troubleshooting and adding it to the Tomcat docs?

Punky

- Original Message -
From: GOMEZ Henri [EMAIL PROTECTED]
To: Tomcat Developers List [EMAIL PROTECTED]
Sent: Thursday, March 21, 2002 4:14 AM
Subject: RE: DO NOT REPLY [Bug 5181] - HttpConnector [8080] No processor
available, rejecting this connection


excellent technical analysis.

should be present in the tomcat faq


RE: DO NOT REPLY [Bug 5181] - HttpConnector [8080] No processor available, rejecting this connection

2002-03-20 Thread GOMEZ Henri

good idea, and adding the technical analysis of Glenn is mandatory ;)




-Original Message-
From: Punky Tse [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 21, 2002 3:57 AM
To: Tomcat Developers List
Subject: Re: DO NOT REPLY [Bug 5181] - HttpConnector [8080] No processor available, rejecting this connection


How about creating a new doc titled Tuning/Troubleshooting and adding it to the Tomcat docs?

Punky
