Re: Production server tuning

2003-07-31 Thread David Rees
Bill Barker wrote:
Antonio Fiol Bonnín [EMAIL PROTECTED] wrote:
However, I am worried about what you say about Apache 2.0.x and the
'worker' MPM. Could you please tell me about the real-world 
inconveniences of having 3/4 Apache 1.3.X with 2/3 tomcats behind?
The mod_jk loadbalancer doesn't work well with pre-fork (including
Apache 1.3.x on *nix systems).  Since you're not using the mod_jk
loadbalancer, it shouldn't matter if you are using 1.3.x or 2.0.x.
I'm curious, what are the issues with loadbalancing in mod_jk with a 
pre-forking Apache?

-Dave



Re: Production server tuning

2003-07-31 Thread Bill Barker

David Rees [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Bill Barker wrote:
  Antonio Fiol Bonnín [EMAIL PROTECTED] wrote:
  However, I am worried about what you say about Apache 2.0.x and the
  'worker' MPM. Could you please tell me about the real-world
  inconveniences of having 3/4 Apache 1.3.X with 2/3 tomcats behind?
 
  The mod_jk loadbalancer doesn't work well with pre-fork (including
  Apache 1.3.x on *nix systems).  Since you're not using the mod_jk
  loadbalancer, it shouldn't matter if you are using 1.3.x or 2.0.x.

 I'm curious, what are the issues with loadbalancing in mod_jk with a
 pre-forking Apache?


Basically it comes down to the fact that the children don't talk to one
another, so each one has its own idea of the relative loads.  This usually
results in a distribution (for the two-Tomcat case) somewhere between 70-30
and 80-20 (although people on this list have reported even more skewed
distributions).  It should get even more skewed as you increase the number
of Tomcats.

mod_jk2 already has the scoreboard (aka shm) in place to allow for the
children to coordinate this, but at the moment isn't using it for
loadbalancing (and so, is just as broken as mod_jk).  I can't add much more
except that patches are always welcome ;-)
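
For reference, a minimal two-Tomcat balancer setup in worker.properties looks
roughly like this (the worker names, hosts and lbfactors below are only
placeholders for illustration, not anyone's actual config):

  # one lb worker fronting two ajp13 workers
  worker.list=loadbalancer
  worker.tomcat1.type=ajp13
  worker.tomcat1.host=tomcat1.example.com
  worker.tomcat1.port=8009
  worker.tomcat1.lbfactor=1
  worker.tomcat2.type=ajp13
  worker.tomcat2.host=tomcat2.example.com
  worker.tomcat2.port=8009
  worker.tomcat2.lbfactor=1
  worker.loadbalancer.type=lb
  worker.loadbalancer.balanced_workers=tomcat1,tomcat2

With a pre-forking Apache, every httpd child loads its own copy of these
workers, so the per-worker load counters live in each child's private memory
and are never reconciled -- hence the skew described above.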

 -Dave







Re: Production server tuning

2003-07-31 Thread David Rees
Bill Barker wrote:
I'm curious, what are the issues with loadbalancing in mod_jk with a
pre-forking Apache?
Basically it comes down to the fact that the children don't talk to one
another, so each one has its own idea of the relative loads.  This usually
results in a distribution (for the two-Tomcat case) somewhere between 70-30
and 80-20 (although people on this list have reported even more skewed
distributions).  It should get even more skewed as you increase the number
of Tomcats.
mod_jk2 already has the scoreboard (aka shm) in place to allow for the
children to coordinate this, but at the moment isn't using it for
loadbalancing (and so, is just as broken as mod_jk).  I can't add much more
except that patches are always welcome ;-)
Thanks, that explains the situation perfectly.  Just to be clear: if you 
are using Apache 2 with the threaded 'worker' MPM, do the threads spawned 
from the same child process share the loadbalancing information correctly at 
this time?

-Dave



Re: Production server tuning

2003-07-30 Thread Antonio Fiol Bonnín
Kwok Peng Tuck wrote:

I'm no expert in load balancing and stuff like that, but shouldn't you 
load balance tomcat as well ? 


If I could do everything I wanted to...

No. I am trying, but I cannot do that yet. The budget is not ready and 
purchase lead times are far too long.

Thanks anyway.

Antonio





Antonio Fiol Bonnín wrote:

Hello,

We have already gone live, and we actually spend too much time dead. 
I hope some of you can help me a bit about the problems we have.

Architecture:
3 Apache web servers (1.3.23) behind a replicated load balancer in DMZ
1 Tomcat server (4.1.9) behind firewall, in secure zone.
1 Firewall in between.
Some facts I observed:
- Under high load, the server sometimes hangs from the user's point of 
view (connection not refused, but nothing comes out of them).
- Under low load, I netstat and I still see lots of ESTABLISHED 
connections between the web servers and the Tomcat server.

For the first case, I reckon I might have found the cause:
Apache MaxClients is set to 200, and Tomcat maxProcessors was set to 
something like 150. Taking into account that there are 3 Apaches, 
that means 200 x 3 = 600 clients -- Tomcat chokes. Just raised 
maxProcessors to 601 ;-)

For the second one, I have really no clue:
Apache MaxSpareServers is set to 10. I see more than 30 ESTABLISHED 
connections even with extremely low load.

Could someone point me to either
- a solution (or part thereof, or hints, or ...)
- a good tomcat tuning resource
?
I hope I can find a solution for this soon... The Directors are 
starting to think that buying WebLogic is the solution to our 
nightmares. They think they only need to throw money at the problem. 
Please help me show them they are wrong before they spend the money.

Thank you very much.

Antonio Fiol










Re: Production server tuning

2003-07-30 Thread Antonio Fiol Bonnín
john d. haro wrote:

What is the load on your web servers? 

Very low.


Could you repurpose a web server and
load balance the app server instead? 

The web servers are light machines (uniprocessor, low memory, ...), while the 
app server is a heavyweight (two nice fast processors, 4 GB RAM, ...). The 
web servers are Linux/Intel; the app server is Solaris/SPARC.

Sadly I don't know anything about load balancing Tomcat and I'm usually
doing BEA WL or WAS setups.  I'm here to learn, people... don't flame me.
No flames. Thank you for your .02

[...] don't you
performance test before going 'live'?
We went live even before we were sure that everything was functionally 
correct. We have been very lucky that it was. Now we're live-optimizing ;-(

 That is how we 'tune' our systems...
run some serious load tests on your setup in a mirrored QA or staging
environment prior to go-live.  Then watch closely, use a code profiler or
similar tools to see where the bottlenecks are.  Play with the database and
app server configurations...
We will try to do that before our next update.

Sometimes turning up things like connections will actually seriously degrade
performance for a myriad of other reasons.
Could you please elaborate a bit more on that?

Apache JMeter running on several machines is a good load tester.

Why did you say several machines? Do you mean JMeter may be the 
bottleneck if run single-instance?

 If you
have the $$ and the project is important enough... we have had great results
from the Compuware suite of products, TrueTime etc.
I don't think so.

Good luck

Thank you.

Antonio Fiol




Re: Production server tuning

2003-07-30 Thread Antonio Fiol Bonnín
Bill Barker wrote:

In theory, I'd go with Kwok's recommendation:  one Apache with its own
load-balancer, and 3 Tomcats instead of 3 Apaches.  However, in the
real-world, this would require you to upgrade to Apache 2.0.x with the
'worker' MPM.
We cannot repurpose machines. Three web servers (a fourth one is in the 
way, don't ask why) are needed for other projects, as the Apache servers are 
shared. And technically, I don't think it is viable either (see previous 
post).

However, I am worried about what you say about Apache 2.0.x and the 
'worker' MPM. Could you please tell me about the real-world 
inconveniences of having 3/4 Apache 1.3.X with 2/3 tomcats behind?


Yes, for your current config, you need to have your maxProcessors somewhere
near 600 to handle peak load.  For part two, go to each of your Apache
machines and run:
 $ ps -ef | grep httpd | wc -l
Done that. It varies a lot, depending on the time of day. But we set MaxClients 
to 200 knowing what we were doing. We used to have 100 and it was not 
enough. Raised to 150 and still not enough. It was during a peak period, 
but I don't think we should lower it back.



Add the numbers together, and subtract three (one for each of the Apache
'controller' processes).  If the system has been running for awhile, this
should be about the same as the number of connections to your Tomcat server
on 8009, since mod_jk holds the connection open (by default) for the
lifetime of the Apache child.
The problem is that the connection is kept open even if unused, isn't it?

I mean: If I do not connect to my web-app, does it start the connections?

 The threads that are waiting for Apache to
talk to them are blocked pending input, so aren't affecting Tomcat's
performance in any way.
Except maybe memory requirement??


Since you are using 4.1.9, I'm assuming that you are using the AjpConnector
(instead of the newer CoyoteConnector).
I think it is CoyoteConnector, but I'd have to check to be sure.

We'll be moving to 4.1.26 as soon as we have time to test our app on it. 
Stuck on 4.1.9 because of client cert auth problem.


With the AjpConnector, you can set the attribute
'connectionTimeout=xx-ms' to have Tomcat drop the connection to Apache
after xx milliseconds have gone by without traffic.
Does that apply to CoyoteConnector? Is it really useful?

For tuning, I like OptimizeIt (but it costs).

It helped me once upon a time. But I'm in a different company now.

 I'm sure that other people
will offer their opinions.
Yes, I heard of JProbe. Never tested it. Any insights? How does it compare 
to OptimizeIt (as of 3 years ago)?

Thank you very much for your answer, Bill. I think it was really useful.

Antonio Fiol




Re: Production server tuning

2003-07-30 Thread Bill Barker

Antonio Fiol Bonnín [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]
 Bill Barker wrote:
 
  In theory, I'd go with Kwok's recommendation:  one Apache with its own
 load-balancer, and 3 Tomcats instead of 3 Apaches.  However, in the
 real-world, this would require you to upgrade to Apache 2.0.x with the
 'worker' MPM.
 
 
  We cannot repurpose machines. Three web servers (a fourth one is in the 
  way, don't ask why) are needed for other projects, as the Apache servers are 
  shared. And technically, I don't think it is viable either (see previous 
 post).
 
 However, I am worried about what you say about Apache 2.0.x and the 
 'worker' MPM. Could you please tell me about the real-world 
 inconveniences of having 3/4 Apache 1.3.X with 2/3 tomcats behind?
 

The mod_jk loadbalancer doesn't work well with pre-fork (including Apache 1.3.x on 
*nix systems).  Since you're not using the mod_jk loadbalancer, it shouldn't matter if 
you are using 1.3.x or 2.0.x.

 
 Yes, for your current config, you need to have your maxProcessors somewhere
 near 600 to handle peak load.  For part two, go to each of your Apache
 machines and run:
   $ ps -ef | grep httpd | wc -l
 
 
  Done that. It varies a lot, depending on the time of day. But we set MaxClients 
 to 200 knowing what we were doing. We used to have 100 and it was not 
 enough. Raised to 150 and still not enough. It was during a peak period, 
 but I don't think we should lower it back.
 
 
 
 Add the numbers together, and subtract three (one for each of the Apache
 'controller' processes).  If the system has been running for awhile, this
 should be about the same as the number of connections to your Tomcat server
 on 8009, since mod_jk holds the connection open (by default) for the
 lifetime of the Apache child.
 
 The problem is that the connection is kept open even if unused, isn't it?
 
 I mean: If I do not connect to my web-app, does it start the connections?

It will (by default) open the connection the first time that an Apache child gets a 
request for your web-app.  After that the connection stays open for the lifetime of 
the Apache child.

You can change the default by setting the connectionTimeout attribute on the Connector 
to a positive value (in milliseconds): e.g. connectionTimeout="60000".  This will cause 
the Tomcat thread to drop the connection to Apache if it doesn't receive another 
request in the specified time (in the example, 1 minute), and terminate.  
Generally, this hurts performance.  However, on one of my Linux 7.x boxes it improved 
the stability by reducing the average total thread count in Tomcat.
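
To make that concrete, the attribute goes on the AJP Connector element in
server.xml, roughly like this (the class name is quoted from memory, and the
600/60000 figures are just the numbers discussed in this thread -- check the
Connector already in your own server.xml before copying):

  <!-- connectionTimeout is in milliseconds: 60000 = drop an idle
       Apache connection, and its thread, after one minute -->
  <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
             port="8009" minProcessors="5" maxProcessors="600"
             acceptCount="10" connectionTimeout="60000"/>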

 
   The threads that are waiting for Apache to
 talk to them are blocked pending input, so aren't affecting Tomcat's
 performance in any way.
 
 Except maybe memory requirement??
 
 
 Since you are using 4.1.9, I'm assuming that you are using the AjpConnector
 (instead of the newer CoyoteConnector).
 
 I think it is CoyoteConnector, but I'd have to check to be sure.
 
 We'll be moving to 4.1.26 as soon as we have time to test our app on it. 
 Stuck on 4.1.9 because of client cert auth problem.
 
 
  With the AjpConnector, you can set the attribute
 'connectionTimeout=xx-ms' to have Tomcat drop the connection to Apache
 after xx milliseconds have gone by without traffic.
 
 Does that apply to CoyoteConnector? Is it really useful?
 
 For tuning, I like OptimizeIt (but it costs).
 
 It helped me once upon a time. But I'm in a different company now.
 
   I'm sure that other people
  will offer their opinions.
 
  Yes, I heard of JProbe. Never tested it. Any insights? How does it compare 
  to OptimizeIt (as of 3 years ago)?
 
 
 Thank you very much for your answer, Bill. I think it was really useful.
 
 
 Antonio Fiol
 



Re: Production server tuning

2003-07-29 Thread Bill Barker
In theory, I'd go with Kwok's recommendation:  one Apache with its own
load-balancer, and 3 Tomcats instead of 3 Apaches.  However, in the
real-world, this would require you to upgrade to Apache 2.0.x with the
'worker' MPM.

Yes, for your current config, you need to have your maxProcessors somewhere
near 600 to handle peak load.  For part two, go to each of your Apache
machines and run:
  $ ps -ef | grep httpd | wc -l
Add the numbers together, and subtract three (one for each of the Apache
'controller' processes).  If the system has been running for awhile, this
should be about the same as the number of connections to your Tomcat server
on 8009, since mod_jk holds the connection open (by default) for the
lifetime of the Apache child.  The threads that are waiting for Apache to
talk to them are blocked pending input, so aren't affecting Tomcat's
performance in any way.
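
As a quick cross-check on the Tomcat box, something along these lines should
come back with roughly the same number (adjust the pattern to your netstat's
output format):

  $ netstat -an | grep 8009 | grep ESTABLISHED | wc -l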

Since you are using 4.1.9, I'm assuming that you are using the AjpConnector
(instead of the newer CoyoteConnector).  If you are using the WarpConnector,
you are on your own ;-).  With the AjpConnector, you can set the attribute
'connectionTimeout=xx-ms' to have Tomcat drop the connection to Apache
after xx milliseconds have gone by without traffic.

For tuning, I like OptimizeIt (but it costs).  I'm sure that other people
will offer their opinions.

Antonio Fiol Bonnín [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Hello,

 We have already gone live, and we actually spend too much time dead. I
 hope some of you can help me a bit about the problems we have.

 Architecture:
 3 Apache web servers (1.3.23) behind a replicated load balancer in DMZ
 1 Tomcat server (4.1.9) behind firewall, in secure zone.
 1 Firewall in between.

 Some facts I observed:
 - Under high load, the server sometimes hangs from the user's point of view
 (connection not refused, but nothing comes out of them).
 - Under low load, I netstat and I still see lots of ESTABLISHED
 connections between the web servers and the Tomcat server.

 For the first case, I reckon I might have found the cause:
 Apache MaxClients is set to 200, and Tomcat maxProcessors was set to
 something like 150. Taking into account that there are 3 Apaches, that
 means 200 x 3 = 600 clients -- Tomcat chokes. Just raised maxProcessors
 to 601 ;-)

 For the second one, I have really no clue:
 Apache MaxSpareServers is set to 10. I see more than 30 ESTABLISHED
 connections even with extremely low load.

 Could someone point me to either
 - a solution (or part thereof, or hints, or ...)
 - a good tomcat tuning resource
 ?

 I hope I can find a solution for this soon... The Directors are starting
 to think that buying WebLogic is the solution to our nightmares. They
 think they only need to throw money at the problem. Please help me show
 them they are wrong before they spend the money.

 Thank you very much.

 Antonio Fiol








RE: Production server tuning

2003-07-29 Thread john d. haro
What is the load on your web servers?  Could you repurpose a web server and
load balance the app server instead?  Most of the systems I set up replicate
application servers and, come to think of it, I almost never get enough load
on a single Apache HTTP server to need more than one box. (I'm speaking
about load and NOT failover).

Sadly I don’t know anything about load balancing Tomcat and I'm usually
doing BEA WL or WAS setups.  I'm here to learn, people... don't flame me.

Here is another couple of .02 for you - hindsight is 20/20, but don't you
performance test before going 'live'?  That is how we 'tune' our systems...
run some serious load tests on your setup in a mirrored QA or staging
environment prior to go-live.  Then watch closely, use a code profiler or
similar tools to see where the bottlenecks are.  Play with the database and
app server configurations... 

Sometimes turning up things like connections will actually seriously degrade
performance for a myriad of other reasons.

Apache JMeter running on several machines is a good load tester.  If you
have the $$ and the project is important enough... we have had great results
from the Compuware suite of products, TrueTime etc.

Good luck

John Haro

-Original Message-
From: Kwok Peng Tuck [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 29, 2003 12:47 AM
To: Tomcat Users List
Subject: Re: Production server tuning

I'm no expert in load balancing and stuff like that, but shouldn't you 
load balance tomcat as well ?

Antonio Fiol Bonnín wrote:

 Hello,

 We have already gone live, and we actually spend too much time dead. I 
 hope some of you can help me a bit about the problems we have.

 Architecture:
 3 Apache web servers (1.3.23) behind a replicated load balancer in DMZ
 1 Tomcat server (4.1.9) behind firewall, in secure zone.
 1 Firewall in between.

 Some facts I observed:
 - Under high load, the server sometimes hangs from the user's point of 
 view (connection not refused, but nothing comes out of them).
 - Under low load, I netstat and I still see lots of ESTABLISHED 
 connections between the web servers and the Tomcat server.

 For the first case, I reckon I might have found the cause:
 Apache MaxClients is set to 200, and Tomcat maxProcessors was set to 
 something like 150. Taking into account that there are 3 Apaches, that 
 means 200 x 3 = 600 clients -- Tomcat chokes. Just raised 
 maxProcessors to 601 ;-)

 For the second one, I have really no clue:
 Apache MaxSpareServers is set to 10. I see more than 30 ESTABLISHED 
 connections even with extremely low load.

 Could someone point me to either
 - a solution (or part thereof, or hints, or ...)
 - a good tomcat tuning resource
 ?

 I hope I can find a solution for this soon... The Directors are 
 starting to think that buying WebLogic is the solution to our 
 nightmares. They think they only need to throw money at the problem. 
 Please help me show them they are wrong before they spend the money.

 Thank you very much.

 Antonio Fiol










Re: Production server tuning

2003-07-29 Thread srevilak
 From: Antonio Fiol Bonnín fiol.bonnin () terra ! es
 Subject: Production server tuning

 For the first case, I reckon I might have found the cause:
 Apache MaxClients is set to 200, and Tomcat maxProcessors was set to
 something like 150. Taking into account that there are 3 Apaches, that
 means 200 x 3 = 600 clients -- Tomcat chokes. Just raised maxProcessors
 to 601 ;-)

In terms of setting MaxClients, look at the size of each web server
process (via ps), and compare that with the amount of available memory
on the machine.  You can set MaxClients to as high a number as will fit
in physical RAM.

You don't want to set MaxClients too high -- if your machine starts
paging heavily, that will really slow things down.
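
A rough way to do that sizing on a Linux front end (the figures below are
only an example, not a recommendation):

  $ ps -o pid,rss,args -C httpd | head
  # RSS is in KB.  If each child sits around 8 MB resident and you can
  # spare roughly 1 GB of RAM for Apache, then 1024 / 8 = 128 is a sane
  # ceiling for MaxClients on that box.

Since the children share a lot of pages, per-child RSS overstates the true
incremental cost, so this estimate errs on the safe side.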


 For the second one, I have really no clue:
 Apache MaxSpareServers is set to 10. I see more than 30 ESTABLISHED
 connections even with extremely low load.

This is a case where mod_status is really helpful.  Add this to your
apache configuration.

   ExtendedStatus On

   <Location /server-status>
   SetHandler server-status
   Order deny,allow
   Deny from all
   Allow from localhost
   </Location>

With that in place,

  lynx -dump http://localhost/server-status?auto
  # omit ?auto for more verbose details

Look at the scoreboard section of the output.  You should see a series
of the letters K, W, R.  K is a server in keepalive state.  W is a
server that is writing a response.  R is a server that is reading a
request.

Start collecting the output, say, at 10 minute intervals.  Once you've
got a representative set of data, start looking through it:

  - Are there a lot of servers in KeepAlive state?  If so, try
decreasing KeepAliveTimeout.

  - Is the number of busy server processes near MaxClients?  If so,
consider raising MaxClients (if the machine has enough physical
ram to handle it).  Otherwise, consider adding another web server
to your front end.

  - If the bottleneck is on the backend, adding another application
server is a completely reasonable thing to do.
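
To make the first two items above concrete, the relevant httpd.conf
directives are simply the following (the values shown are only illustrative
starting points, not recommendations):

  # default KeepAliveTimeout is 15 seconds; shorter frees up children sooner
  KeepAlive On
  KeepAliveTimeout 5
  # MaxClients is bounded by how many httpd children fit in physical RAM
  MaxClients 200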

Other things to look at:

  - monitor memory usage (vmstat, or some such).  If your machines are
paging, add ram.  Do this on web and application servers.

  - monitor load (aka uptime).  If the loads are consistently high,
your best bet is probably to throw more hardware at the problem :)

  - Are you logging response times?  (Apache's %T, or something at the
application level in your servlets/JSP pages; see the LogFormat example
after this list).  Is there a particular servlet or JSP page that's
taking excessively long?

  - Are you doing reverse DNS lookups (HostnameLookups On)?  If so,
turn them off.
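
For the response-time item above, plain Apache logging can do it; for
example (the log nickname and path are made up for illustration):

  # %T = time taken to serve the request, in seconds
  LogFormat "%h %l %u %t \"%r\" %>s %b %T" timed
  CustomLog /var/log/httpd/access_timed.log timed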

As far as Apache tuning,

  http://httpd.apache.org/docs/misc/perf-tuning.html

is a good start.  I don't have a good Tomcat reference, though.

hth.

-- 
Steve




Re: Production server tuning

2003-07-29 Thread John Turner
This is good info.  Thanks for posting!!

John

[EMAIL PROTECTED] wrote:

From: Antonio Fiol Bonnín fiol.bonnin () terra ! es
Subject: Production server tuning


For the first case, I reckon I might have found the cause:
Apache MaxClients is set to 200, and Tomcat maxProcessors was set to
something like 150. Taking into account that there are 3 Apaches, that
means 200 x 3 = 600 clients -- Tomcat chokes. Just raised maxProcessors
to 601 ;-)


In terms of setting MaxClients, look at the size of each web server
process (via ps), and compare that with the amount of available memory
on the machine.  You can set MaxClients to as high a number as will fit
in physical RAM.
You don't want to set MaxClients too high -- if your machine starts
paging heavily, that will really slow things down.


For the second one, I have really no clue:
Apache MaxSpareServers is set to 10. I see more than 30 ESTABLISHED
connections even with extremely low load.


This is a case where mod_status is really helpful.  Add this to your
apache configuration.
   ExtendedStatus On

   <Location /server-status>
   SetHandler server-status
   Order deny,allow
   Deny from all
   Allow from localhost
   </Location>
With that in place,

  lynx -dump http://localhost/server-status?auto
  # omit ?auto for more verbose details
Look at the scoreboard section of the output.  You should see a series
of the letters K, W, R.  K is a server in keepalive state.  W is a
server that is writing a response.  R is a server that is reading a
request.
Start collecting the output, say, at 10 minute intervals.  Once you've
got a representative set of data, start looking through it:
  - Are there a lot of servers in KeepAlive state?  If so, try
decreasing KeepAliveTimeout.
  - Is the number of busy server processes near MaxClients?  If so,
consider raising MaxClients (if the machine has enough physical
ram to handle it).  Otherwise, consider adding another web server
to your front end.
  - If the bottleneck is on the backend, adding another application
server is a completely reasonable thing to do.
Other things to look at:

  - monitor memory usage (vmstat, or some such).  If your machines are
paging, add ram.  Do this on web and application servers.
  - monitor load (aka uptime).  If the loads are consistently high,
your best bet is probably to throw more hardware at the problem :)
  - Are you logging response times?  (Apache's %T, or something at the
application level in your servlets/JSP pages).  Is there a
particular servlet or JSP page that's taking excessively long?
  - Are you doing reverse DNS lookups (HostnameLookups On)?  If so,
turn them off.
As far as Apache tuning,

  http://httpd.apache.org/docs/misc/perf-tuning.html

is a good start.  I don't have a good Tomcat reference, though.

hth.





Production server tuning

2003-07-28 Thread Antonio Fiol Bonnín
Hello,

We have already gone live, and we actually spend too much time dead. I 
hope some of you can help me a bit about the problems we have.

Architecture:
3 Apache web servers (1.3.23) behind a replicated load balancer in DMZ
1 Tomcat server (4.1.9) behind firewall, in secure zone.
1 Firewall in between.
Some facts I observed:
- Under high load, the server sometimes hangs from the user's point of view 
(connection not refused, but nothing comes out of them).
- Under low load, I netstat and I still see lots of ESTABLISHED 
connections between the web servers and the Tomcat server.

For the first case, I reckon I might have found the cause:
Apache MaxClients is set to 200, and Tomcat maxProcessors was set to 
something like 150. Taking into account that there are 3 Apaches, that 
means 200 x 3 = 600 clients -- Tomcat chokes. Just raised maxProcessors 
to 601 ;-)

For the second one, I have really no clue:
Apache MaxSpareServers is set to 10. I see more than 30 ESTABLISHED 
connections even with extremely low load.

Could someone point me to either
- a solution (or part thereof, or hints, or ...)
- a good tomcat tuning resource
?
I hope I can find a solution for this soon... The Directors are starting 
to think that buying WebLogic is the solution to our nightmares. They 
think they only need to throw money at the problem. Please help me show 
them they are wrong before they spend the money.

Thank you very much.

Antonio Fiol




Re: Production server tuning

2003-07-28 Thread Kwok Peng Tuck
I'm no expert in load balancing and stuff like that, but shouldn't you 
load balance tomcat as well ?

Antonio Fiol Bonnín wrote:

Hello,

We have already gone live, and we actually spend too much time dead. I 
hope some of you can help me a bit about the problems we have.

Architecture:
3 Apache web servers (1.3.23) behind a replicated load balancer in DMZ
1 Tomcat server (4.1.9) behind firewall, in secure zone.
1 Firewall in between.
Some facts I observed:
- Under high load, the server sometimes hangs from the user's point of 
view (connection not refused, but nothing comes out of them).
- Under low load, I netstat and I still see lots of ESTABLISHED 
connections between the web servers and the Tomcat server.

For the first case, I reckon I might have found the cause:
Apache MaxClients is set to 200, and Tomcat maxProcessors was set to 
something like 150. Taking into account that there are 3 Apaches, that 
means 200 x 3 = 600 clients -- Tomcat chokes. Just raised 
maxProcessors to 601 ;-)

For the second one, I have really no clue:
Apache MaxSpareServers is set to 10. I see more than 30 ESTABLISHED 
connections even with extremely low load.

Could someone point me to either
- a solution (or part thereof, or hints, or ...)
- a good tomcat tuning resource
?
I hope I can find a solution for this soon... The Directors are 
starting to think that buying WebLogic is the solution to our 
nightmares. They think they only need to throw money at the problem. 
Please help me show them they are wrong before they spend the money.

Thank you very much.

Antonio Fiol

