> There are several things you can do to improve the state of things.
> The first and foremost is to add caching in front of the server, using
> an accelerator proxy. (i.e. squid running in accelerator mode.) In
> this way, you have a program which receives the user's request, checks
> to see if it's a request that it already has a response for, checks
> whether that response is still valid, and then checks to see whether
> or not it's permitted to respond on the server's behalf...almost
> entirely without bothering the main web server. This process is far,
> far, far faster than having the request hit the serving application's
> main code.
>


I was under the impression that Apache is coded sensibly enough to handle
incoming requests at least as well as Squid would. Agree with everything
else, though.
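For reference, Squid's accelerator (reverse-proxy) mode is only a few lines
of squid.conf. This is just a sketch; the hostname, ports, and the origin
server sitting on 127.0.0.1:8080 are placeholders, not anything from the
original post:

```
# Listen on port 80 as an accelerator for the site
http_port 80 accel defaultsite=www.example.com

# Forward cache misses to the real web server (the "origin")
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=origin

# Only accelerate requests for our own site
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access origin allow our_site
```

With something like this in place, cacheable responses are served straight
from Squid and only misses ever touch Apache.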

The OP should look into what's required on the back end to process those 6
requests: it superficially appears that a very small number of requests is
generating a huge amount of work, which would make the site easy to DoS.