Thanks for your reply Chris.  My responses are inline.

> > Shaun,
> > 
> > On 7/8/2009 1:35 PM, Shaun Qualheim wrote:
> >>    *Tomcat 4.1.27-LE
> > 
> > You might consider upgrading at some point. 4.1 is getting ready to be
> > retired, and the 3 (yes 3!) versions since then all have significant
> > performance improvements that may help your situation.

Yeah, it's on the back burner and scheduled for December.  Unfortunately, it was 
out of my control until recently :).

> > 
> >> Apache is listening on ports 80 and 81 using http. Port 80
> >> immediately rewrites everything to an https:// url. That https:// url
> >> goes to the load balancer on 443 and is passed to the appliance on port 81.
> > 
> > Wait... what?
> > 
> > client -- http:80 --> httpd -- https:443 --> lb -- http:81 --> Tomcat?

Basically.  We could clear things up a bit, though, by just going to 
https:// right away:

client -- https:443 --> lb -- http:81 --> apache -- mod_jk --> 1 of 3 tomcats.
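For reference, the port-80 rewrite we'd be dropping looks roughly like this (a sketch, not our exact vhost -- the hostname is made up):

```apache
# Rough sketch of the port-80 vhost that forces everything to https.
# Hostname and flags are illustrative, not our actual config.
<VirtualHost *:80>
    ServerName example.ourapp.com
    RewriteEngine On
    RewriteRule ^/(.*)$ https://example.ourapp.com/$1 [R=301,L]
</VirtualHost>
```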


> > 
> > You said you have 3 TC instances. Is that 1 instance on each of 3
> > separate physical servers, so port 81 is used on them all?

Apache listens on port 81 (http) on each physical server, and mod_jk load-
balances across the 3 Tomcats on that server.  Each of the 3 physical servers 
runs 3 Tomcat instances.
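For what it's worth, the mod_jk side of each box looks roughly like this (a sketch with made-up worker names and ports, not our literal workers.properties; 7019 matches the example connector below):

```properties
# Sketch of a per-server workers.properties: one lb worker over 3 Tomcats.
# Worker names and ports are illustrative, not our actual values.
worker.list=loadbalancer

worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=7019

worker.tomcat2.type=ajp13
worker.tomcat2.host=localhost
worker.tomcat2.port=7020

worker.tomcat3.type=ajp13
worker.tomcat3.host=localhost
worker.tomcat3.port=7021

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcat1,tomcat2,tomcat3
```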

> > Why bother
> > with SSL after the request is within your network? 

We don't -- at least as soon as it hits the load balancer.

> > Also, why have httpd
> > forward all traffic to a load-balancer instead of just doing the
> > load-balancing itself?

How do you mean?  (Sorry, I can think of a few things that you might mean here, 
but I don't want to assume.)

> > 
> > Maybe I have misunderstood your setup, but it seems overly complicated.
> > 

Could be.  I'd appreciate any suggestions on how to make it simpler.

> >> When we use our application with all http (port 80 doesn't
> >> rewrite), the system works fine when we run a load test. We can push
> >> 2100 concurrent users out of each server.
> > 
> > Okay.
> > 
> >> However... using the same setup beyond the load balancer, we are
> >> only able to get to about 2,500 concurrent users across the three
> >> servers (about 800 per server) before we start seeing very long
> >> delays (1-2 minutes where we should be seeing a few seconds) on
> >> miscellaneous functions throughout the application.
> > 
> > Does the load-balancer have any kind of traffic shaping or
> > bandwidth/connection limiting configured?

No.  

> > 
> > What do your <Connector> elements look like in Tomcat's server.xml?
> > 

Here's an example of the connector.  I can provide more if it would help?

    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
               port="7019" minProcessors="256" maxProcessors="1024"
               enableLookups="false" redirectPort="7443"
               acceptCount="10" debug="0" connectionTimeout="120000"
               useURIValidationHack="false" scheme="https" proxyPort="443"
               protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>
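For my own notes, my reading of the attributes that seem relevant here, from the Tomcat 4.1 connector docs (corrections welcome -- we haven't verified these under load):

```xml
<!-- Same connector, annotated with my understanding of the knobs:
     minProcessors    : request-processing threads created at startup (256)
     maxProcessors    : hard cap on concurrent request threads (1024)
     acceptCount      : backlog queue for incoming connections once all
                        processors are busy; only 10 here, which could
                        look like a wall under heavy load
     connectionTimeout: 120s before an idle connection is dropped -->
<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
           port="7019" minProcessors="256" maxProcessors="1024"
           enableLookups="false" redirectPort="7443"
           acceptCount="10" debug="0" connectionTimeout="120000"
           useURIValidationHack="false" scheme="https" proxyPort="443"
           protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>
```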


> >> We're pretty perplexed on why the sudden slowdown happens at about
> >> 800 users per server. It works fine when we're http only. We don't see
> >> anything that stands out in the apache or catalina logs that would seem
> >> to be concerning (broken pipes, abnormal timeouts, etc.) I would greatly
> >> appreciate any help anyone can offer us.
> > 
> > So, does it look like you are hitting a wall (like there aren't enough
> > connections allowed) or does the application/server start to experience
> > an actual slowdown (like high CPU load, lots of paging, etc.)?
> > 

If I had to guess, I'd say the former.  When we're monitoring system resources 
using nmon during the load tests, everything else seems fine.
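One thing I plan to try on the next run: tallying TCP connection states on the ports involved, to see whether we're queueing rather than actually slowing down.  A rough one-liner (ports are from the example connector above and would need adjusting per box):

```shell
# Tally connection states on the Apache (81) and example AJP (7019) ports
# during the load test; a pile of SYN_RECV or TIME_WAIT entries would
# point at a connection wall rather than a slow app.
netstat -an | awk '$4 ~ /:(81|7019)$/ {print $6}' | sort | uniq -c | sort -rn
```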

Thank you much for your help :)

Shaun

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
