Hi André,

> > 
> > The reason why I am not suspecting mysql was that the
> mysql
> > log does indicate that it is getting all the requests
> and
> > it is servicing them. As I have stated before, some of
> the
> > users though are not getting images.
> > 
> Can you explain this a bit ? When you say that some users
> are not getting images, what happens then ? Isn't there some
> error message in an Apache logfile ?

It is as if there were no image available: you see the image
placeholder in the browser, without the image itself.

> I also presume (maybe wrongly) that these are not real
> users with real browsers. What are you using as a client to
> test this, and does it leave a trace of why it is not
> getting an image ?

These are real users using either FF or IE browsers.

> Then some basic calculations :
> 
> 70 users X 50 images in a page = 3,500 requests to Apache.
> Also, as a minimum, 70 simultaneous TCP connections to
> Apache, assuming your Apache can handle as many.
> 
> 70 users X 50 images X 2KB/image = 7,000 KB = +- 7,000,000
> bytes, or roughly 70,000,000 bits on the wire (counting about
> 10 bits per byte as a rough allowance for protocol overhead).
> On a local network able to carry 100Mbit/s, say at 50%
> efficiency, this would take about 1.5 seconds.
> So this should not be a case where you overwhelm the
> network bandwidth, or are my calculations above off the
> mark for some reason ?

You're not off the mark, but one problem scenario we are
considering is rapid traversal of pages by the users. Each
page has links to jump to any one of the 50 pages, and we
have noticed that eager users sometimes click repeatedly,
generating several page requests in quick succession. This
behavior could load the server with far more requests than
we are calculating.
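
To put a rough (purely hypothetical) number on that: if each of
the 70 users re-clicks, say, three extra times during such a
burst, that is 70 users x 4 page loads x 50 images = 14,000
image requests, about four times the 3,500 in your calculation
above, on top of the page requests themselves. At 2KB per image
that is still only around 28 MB, so the pressure would seem to
fall on connections and server processes rather than on raw
bandwidth.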

> 
> Some additional questions about your Apache server
> configuration (and sorry if I missed some in an earlier
> response) :
> 
> - which MPM version are you using ?
> and can you copy here the settings for that MPM ?
> You can see which MPM is used by entering :
> .../apache2ctl -l  (L lowercase)
> (It will list a "prefork.c" or a "worker.c" or something).

We are using the prefork MPM.

> The corresponding settings from your apache2.conf (or
> httpd.conf) are usually easy to find, under a comment like
> this one :
> ## Server-Pool Size Regulation (MPM specific)

Default values, as we are not setting these explicitly.
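
For what it's worth, I believe the prefork section shipped in
conf/extra/httpd-mpm.conf looks roughly like the following
(quoting from memory, so please treat the exact values as
unverified):

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>

That include is commented out by default in the httpd.conf that
ships with the source, as far as I know, in which case the
compiled-in prefork defaults apply instead (I believe MaxClients
and ServerLimit then end up around 256). Either way, MaxClients
seems to be the number worth pinning down, since it caps how
many clients Apache will talk to at once.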

> - what are the values used for the following parameters :
>     - KeepAlive
>     - KeepAliveTimeout
>     - MaxKeepAliveRequests
>     - TimeOut

Again default values:

KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
Timeout 300
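
In case it helps the discussion, one adjustment we are thinking
of trying (just a sketch, nothing we have measured yet) is to
shorten the keep-alive window so a process is released sooner
once a browser has finished pulling its images:

KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100

The trade-off, as I understand it, is that too short a timeout
makes browsers open new connections mid-page, so we would want
to test a value rather than guess one.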

> What I am trying to figure out above, is how many
> processes/threads on the Apache side you really have
> available to process the client requests.
> 
> This is because of the following generic kind of
> hypothetical scenario :
> 
> - imagine your Apache is configured so that it can have at
> most 50 processes or threads simultaneously to handle
> requests.
> - the first 50 clients connect, get their home page, which
> contains links to in-line images
> - because they are using KeepAlive connections, these 50
> clients do not release their TCP connection to the server,
> but use the same one to start sending their requests for
> images
> - on the server side, the given process which sent a given
> client its homepage is also keeping the connection
> open, so it is "stuck" with this client, and cannot serve
> another client's request.
> - as long as this client keeps up with sending more
> requests for images, it will keep this server process locked
> up for himself. That is, as long as it never exceeds the
> KeepAliveTimeOut or MaxKeepAliveRequests.
> Since each client has 50-odd images to get, this can take a
> while, particularly since the browser also has to do some
> work to process and display these images.
> - now comes client #51.  Because all server-side
> processes are tied up, his connection request is not
> answered right away.  Instead, it goes into the TCP
> wait queue for port #80.  That is in general not a
> problem, since the browser will wait several minutes before
> giving up.
> - But this queue has a limited size.  If more than a
> certain number of connection requests pile up there without
> being acknowledged, at some point the next connection
> request will be refused.
> The browser experiencing a "connection refused" for an
> inline image, will just display a broken image symbol
> instead, and try for the next one.

The above is a likely scenario, but our experience was slightly
different and (at least to me) counter-intuitive. When we made
the site available to users coming in from the Internet, the
same 70 users had a 98% success rate and little trouble.

However, in a LAN environment we had trouble serving the same
70 users. We seem to hit a wall at maybe around 50.
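
On the "queue has a limited size" point above: if I follow the
mechanism correctly, the relevant limits would be Apache's
ListenBacklog directive (default 511, I believe) and, on Linux,
the kernel's net.core.somaxconn value (which if memory serves
defaults to 128 on many kernels), whichever is smaller. Purely
as a sketch of what we might check or raise, with a hypothetical
value:

ListenBacklog 1024

One thought on the LAN result: requests from LAN clients arrive
almost simultaneously, while Internet clients are naturally
spread out by their slower links, so perhaps the LAN is simply
pushing the concurrency (and the backlog) harder. That is
speculation on my part.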

> 
> Of course, things will not be as tidy as outlined above,
> and there will be clients above the 50th getting their
> homepage and some of their images, but then some of the
> first 50 may be unable to make a new connection to obtain
> some other images, etc..
> 
> The point is, the lower the number of real server-side
> processes available, and the higher the
> KeepAliveTimeout, the more likely you are to get into the
> above kind of scenario.
> One reason is that, when one particular client is done with
> his requests, the connection will nevertheless stay alive
> with its server-side correspondent, during the number of
> seconds specified in the KeepAliveTimeout, without achieving
> anything useful anymore.
> I see for example that in the 2.2 documentation, this
> timeout is indicated as having a default of 5 seconds, which
> seems more or less reasonable for usual cases.  But in
> the standard configuration that the Linux Debian package
> installed on one of my servers, it is set at 15 seconds,
> which in your case would really be detrimental.

Our Apache is locally compiled, and we use the default
configuration file (and its parameter values) that comes with
the source.

> The thing is also, that I can still not imagine that Apache
> would be overwhelmed with 3500 requests totalling 7 MB of
> content, so there must be something rather flagrant amiss.

There are two issues we need to address (and maybe more that
I don't understand): (1) the number of image reads off the disk
and (2) the number of HTTP requests needed to serve those
images. We are actively looking for some way to reduce the
number of image requests, but in the short term we may not have
the option to change that page layout.
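
One thing we are looking at for the rapid page traversal case
(just a sketch, assuming mod_expires is compiled in and the
image URLs are stable) is letting the browsers cache the images,
so that revisiting a page does not re-request all 50 of them:

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/gif  "access plus 1 day"
    ExpiresByType image/jpeg "access plus 1 day"
    ExpiresByType image/png  "access plus 1 day"
</IfModule>

That would not help a first visit, but it should cut down the
repeat requests from users bouncing between the same pages.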

You seem to think that the image reads off the disk may not
be an issue here. If that is the case, then maybe we need to
look at hardware and some tweaking of the configuration
parameters to improve our numbers.

Thanks




