2009/11/18 Jason Garber <[email protected]>:
> Hi Daan,
> Just a note on performance.  I'm running a WSGI 3 application with a page
> which includes dynamic HTML generation and a couple PostgreSQL queries per
> request, etc...
> With 1 process and 5 threads, the server hosting it will sustain 800-1000
> requests per second, using:  ab -n 10000 -c 30

In hindsight, the default of 1 process and 15 threads is indeed more
than what well designed sites would need, albeit that it does provide
a good buffer. In mod_wsgi 2.X that was traded off against slightly
higher thread memory usage because of the way threads were used.
mod_wsgi 3.0 uses a new scheme whereby it always attempts to use the
most recently used thread for a new request. This means that unless
actually needed, the extra threads in the pool will never call into
Python and so will not incur any additional per-thread memory overhead
that Python may impose.
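
To make that concrete, in daemon mode the pool size is set explicitly
on the WSGIDaemonProcess directive. This is just an illustrative
sketch; the group name and script path are placeholders, not a
recommendation:

```apache
# Illustrative only: one daemon process with a small thread pool,
# along the lines of the numbers Jason quoted.
WSGIDaemonProcess mysite processes=1 threads=5
WSGIProcessGroup mysite
WSGIScriptAlias / /path/to/app.wsgi
```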

As mentioned before, how many threads you actually use can depend a
lot on what percentage of long running requests you have.
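
As a rough way of reasoning about it, Little's law says the number of
concurrently busy threads is about the arrival rate times the average
request duration. A hedged sketch, where the function and all the
numbers are hypothetical and just show the arithmetic:

```python
import math

# Back-of-envelope thread sizing via Little's law:
# busy threads ~= requests/sec * average request duration.
def threads_needed(requests_per_sec, avg_duration_sec, headroom=2.0):
    """Estimate threads needed, padded by a safety multiplier."""
    return math.ceil(requests_per_sec * avg_duration_sec * headroom)

# Fast pages barely need any threads even at decent traffic...
print(threads_needed(50, 0.020))  # -> 2
# ...while a modest rate of slow requests dominates the pool.
print(threads_needed(5, 2.0))     # -> 20
```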

> http://<site-running-on-localhost>/path/to/page
> Keep in mind that latency could affect those numbers significantly in the
> real world, but still, you can handle a LOT of traffic with few
> processes/threads if your application is well written.

Such latency issues around slow clients can be largely eliminated
through use of an nginx front end proxy to Apache. Although nginx will
also handle static files better than Apache, leaving Apache/mod_wsgi
to handle just the dynamic requests, that isn't even the benefit I am
talking about in this case.

Specifically, nginx helps with latency and slow clients because for
POST requests it will buffer up the request content, so long as it is
not over a default of 1MB (I think), and only when it has the whole of
the request headers and content will it pass the request on to
Apache/mod_wsgi. This means that a client which is slow to deliver up
its request doesn't tie up Apache/mod_wsgi.

Similarly, on the response side, the implicit buffering within the
sockets from the daemon process back through the Apache worker process
and through to nginx allows Apache/mod_wsgi to release the request and
the connection more quickly, with nginx then doing the potentially
slow job of dribbling the response back to the slow client.
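
A minimal sketch of such a front end, with the upstream address and
all sizes being placeholders rather than recommendations, might look
like:

```nginx
# Illustrative only: nginx as a buffering front end to Apache/mod_wsgi.
server {
    listen 80;
    server_name example.com;

    location / {
        # Request bodies are buffered before being passed upstream;
        # client_max_body_size caps them (the nginx default is 1m).
        client_max_body_size 1m;

        # Responses are buffered too, so Apache/mod_wsgi can finish
        # quickly while nginx dribbles bytes out to slow clients.
        proxy_buffering on;

        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;
    }
}
```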

The effect of the two is that Apache/mod_wsgi is involved for as
little time as possible and thus can better utilise more limited
resources. You therefore don't need to configure as many
processes/threads to handle the same load, as nginx will take on the
burden of the slow clients and, being asynchronous, does a better job
of handling the many connections in a starting up or responding state.

In the OP's setup though, since they don't have control of their
Apache, there is even less likelihood they can get an nginx front end
proxy for it going. :-)

Graham

> On Tue, Nov 17, 2009 at 11:09 AM, Daan Davidsz <[email protected]>
> wrote:
>>
>> Thank you very much, this seems like the configuration I was looking
>> for. I've tweaked it a bit and e-mailed it to the administrators.
>>
>> Daan
>>
>> --
>>
>> You received this message because you are subscribed to the Google Groups
>> "modwsgi" group.
>> To post to this group, send email to [email protected].
>> To unsubscribe from this group, send email to
>> [email protected].
>> For more options, visit this group at
>> http://groups.google.com/group/modwsgi?hl=.
>>
>>
>
