We are running a cluster of three Apache servers and two Tomcat servers,
connected via AJP, with Oracle on the back end.

The cluster has been performing very well, but a recent load spike is
causing the Tomcat servers to swap heavily despite the JVM memory
limits we set.

What exactly does the -Xmx option limit? Threads? Defined services?
Instances of Tomcat?

I would have thought it would limit the entire Tomcat instance, but we
have been far exceeding the 768MB limit we set.
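
For reference, here is roughly how the limit is being set (a sketch of
the usual JAVA_OPTS mechanism in catalina.sh; the -Xms value below is a
placeholder, and our actual flags may differ slightly):

    # Environment picked up by catalina.sh at startup.
    # -Xms sets the initial Java heap size; -Xmx caps the maximum heap.
    JAVA_OPTS="-Xms256m -Xmx768m"
    export JAVA_OPTS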

We're connecting to Oracle on the back end via the JDBC thin client.
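
For what it's worth, the connection setup amounts to something like the
following (a sketch only; the host, SID, and credentials are
placeholders, and the real application's wiring differs):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OracleThinCheck {
        public static void main(String[] args) throws Exception {
            // Load the Oracle thin driver and open a connection.
            // Thin URL form: jdbc:oracle:thin:@host:port:SID
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost.example.com:1521:ORCL",
                    "appuser", "secret");
            System.out.println("connected: " + !conn.isClosed());
            conn.close();
        }
    }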
When the site starts swapping, Oracle query performance goes downhill
fast. A non-database page takes about 1 second to load normally, versus
about 5 seconds while swapping. A database-intensive page, on the other
hand, takes about 5 seconds normally, versus 40-50 seconds once
swapping begins. That number climbs quickly until the request exceeds
the 5-minute maximum execution time and Tomcat cuts it off.

The servers are essentially identically configured 1.8GHz Pentium
machines with 1GB of RAM each. The Connector element from server.xml is:

    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
               port="8009" minProcessors="5" maxProcessors="100"
               enableLookups="true" redirectPort="8443"
               acceptCount="100" debug="0" connectionTimeout="-1"
               protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler" />

We're running Apache 2.0.47 with mod_jk 1 and AJP 1.3. The
workers.properties file is set up for nonweighted balancing.
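
For completeness, the balancing section of workers.properties looks
roughly like this (a sketch; the worker names and host names are
placeholders):

    # Apache hands all requests to the load-balancer worker
    worker.list=loadbalancer
    worker.loadbalancer.type=lb
    worker.loadbalancer.balanced_workers=tomcat1,tomcat2

    # Identical lbfactor values = nonweighted balancing
    worker.tomcat1.type=ajp13
    worker.tomcat1.host=tomcat1.example.com
    worker.tomcat1.port=8009
    worker.tomcat1.lbfactor=1

    worker.tomcat2.type=ajp13
    worker.tomcat2.host=tomcat2.example.com
    worker.tomcat2.port=8009
    worker.tomcat2.lbfactor=1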

Are there any options for tuning Tomcat to reduce its memory footprint
and let it queue more requests? We were initially running with a higher
maxProcessors, but I turned it down hoping to relieve the congestion. I
also tried turning it up, thinking the accept queue might be the
problem, but that made things worse (the variants I tried are sketched
below).
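
Concretely, the only knobs I have been varying are these two Connector
attributes; the earlier values below are from memory, so treat them as
rough placeholders:

    <!-- Before the spike (approximate) -->
    maxProcessors="150" acceptCount="100"

    <!-- Turned down to relieve congestion (current config, as above) -->
    maxProcessors="100" acceptCount="100"

    <!-- Turned up to test whether the accept queue was the bottleneck
         (approximate); this made things worse -->
    maxProcessors="200" acceptCount="200"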

Thanks,

Cris


