Thanks Graham.

Really appreciate your detailed explanation.

On Apr 18, 11:51 pm, Graham Dumpleton <graham.dumple...@gmail.com>
wrote:
> 2009/4/18 Andy <selforgani...@gmail.com>:
>
>
>
> > You made a good point that the C code in Apache will be running in
> > addition to the Python code. My concern though is the amount of
> > concurrency allowed within Python.
>
> > Say I set up a Django site with mod_wsgi in daemon mode using 1
> > process & 30 threads. If a Python thread makes a database call or a
> > memcached call and blocks, can any other python threads run? Or would
> > they all block because they'll all be waiting for the GIL which the
> > thread that made the DB call never released?
>
> Yes, other threads should be able to run. This is because any C
> extension module is meant to release the GIL before calling a
> potentially blocking operation, or even a potentially time consuming
> operation in a foreign language where no access is made to Python
> data structures. If C extension modules do not do this, then they
> are breaking the conventions regarding the GIL and could be argued
> to be buggy.
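The behaviour described above can be seen with plain Python threads. In this sketch, `time.sleep()` stands in for a blocking database or memcached call made by a well-behaved C extension: it releases the GIL while waiting, so ten threads overlap instead of running one after another.

```python
import threading
import time

def blocking_call():
    # time.sleep() releases the GIL while blocked, just as a
    # well-behaved C extension does around a blocking database
    # or socket operation.
    time.sleep(0.5)

start = time.time()
threads = [threading.Thread(target=blocking_call) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# If the GIL were held across the blocking call, ten threads would
# take roughly 5 seconds in total; because it is released, they
# overlap and finish in roughly 0.5 seconds.
print(f"10 blocking calls completed in {elapsed:.2f}s")
```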
>
> > I guess what I'm really trying to understand is - is a multithreaded
> > Django daemon process really possible given the Python GIL and all
> > those blocking DB calls?
>
> Yes it is.
>
> > By the way, does mod_wsgi work with event MPM? If yes, any particular
> > reasons you recommended using worker MPM instead of event MPM?
>
> The event MPM is only an experimental MPM in Apache 2.2, so I haven't
> tried it. It has been promoted to a supported MPM for Apache 2.4, so
> I will need to look at it at some point. I believe, though, that I
> have seen people say they were already using it with mod_wsgi. So,
> even though it hasn't been tested, it supposedly works.
>
> Graham
>
> > Thanks.
>
> > On Apr 17, 7:41 am, Graham Dumpleton <graham.dumple...@gmail.com>
> > wrote:
> >> 2009/4/17 Andy <selforgani...@gmail.com>:
>
> >> > A few questions about mod_wsgi:
>
> >> > - I've seen blog posts recommending using worker MPM with mod_wsgi.
> >> > How does Python's GIL affect this?
> >> > Let's say I have a quad core machine. Do I need to make sure Apache
> >> > spawns at least 4 processes to take advantage of all 4 cores (assuming
> >> > I'm using the mod_wsgi embedded mode)?
> >> > For example, if I set:
> >> > MaxClients          100
> >> > ThreadsPerChild      50
> >> > In this case a max of only 2 processes are created - does it mean my
> >> > Django app wouldn't be able to take advantage of the 3rd and 4th cores
> >> > because of Python's GIL?
> >> > Like wise, if using daemon mode, do I need to set
> >> > WSGIDaemonProcess example.com processes=4
> >> > to use all 4 cores?
>
> >> I wouldn't be overly concerned about the GIL and try to guess what
> >> configuration may be better than another. I have talked about this a
> >> bit before in:
>
> >>  http://blog.dscpl.com.au/2007/07/web-hosting-landscape-and-modwsgi.html
>
> >> In that I said:
>
> >> """In addition to the low overhead, there are also other positive
> >> benefits deriving from how Apache works when using this mode. The
> >> first is that Apache uses multiple child processes to handle requests.
> >> As a result, any contention for the Python GIL within the context of a
> >> single process is not an issue, as each process will be independent.
> >> Thus there is no impediment when using multi processor systems.
>
> >> That said, the GIL is not as big a deal as some people make out, even
> >> when using Apache with only one multi-threaded child process for
> >> accepting requests. This is because the code which handles accepting
> >> of requests, determines which Apache handler should process the
> >> request, along with the code for reading the request content and
> >> writing out the response content, is all written in C and is in no way
> >> linked to Python. As a consequence there are large sections of code
> >> where the GIL is not being held. On top of that, the same web server
> >> may also be serving up static files where again the GIL doesn't even
> >> come into the picture. So, more than enough opportunity for making
> >> good use of those multiple processors."""
>
> >> I was actually talking about embedded mode there, but it is just as
> >> pertinent to daemon mode. In particular, in daemon mode the Apache
> >> server child processes are still doing work at the same time as they
> >> are doing the proxying of the request to the daemon mode process. So,
> >> you are going to have multiple processes trying to do stuff anyway.
> >> There isn't much point trying to match number of daemon processes to
> >> number of cores purely based on concerns about the GIL.
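Putting the two modes together, a starting configuration for worker MPM with mod_wsgi daemon mode might look like the sketch below. The directives are standard Apache 2.2 / mod_wsgi directives, but the numbers and paths are placeholders to be tuned against realistic traffic, not recommendations.

```apache
# Worker MPM: a moderate pool of threaded child processes.
<IfModule mpm_worker_module>
    StartServers          2
    MaxClients          150
    ThreadsPerChild      25
</IfModule>

# Daemon mode: choose processes/threads for expected demand,
# not to match the number of CPU cores.
WSGIDaemonProcess example.com processes=2 threads=15
WSGIProcessGroup example.com
WSGIScriptAlias / /var/www/example/app.wsgi
```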
>
> >> This doesn't mean you shouldn't try and tune the Apache MPM settings
> >> and daemon mode settings to see what works best, but to do that you
> >> really need to have your actual application running and be hitting it
> >> with realistic traffic patterns. That is, no point just using 'ab' at
> >> maximum throttle against a single URL as in practice your site is
> >> never going to be pushed to the maximum. If you are running out of
> >> grunt even for typical traffic, then you need to upgrade your system
> >> to give it more headroom to deal with real spikes in traffic.
>
> >> > - Between embedded mode & daemon mode, which uses less memory?
>
> >> Worker MPM with daemon mode. See the dangers of using prefork and
> >> especially embedded mode in:
>
> >>  http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usa...
>
> >> You sacrifice Apache's ability to create additional processes to
> >> handle demand, but frankly for fat Python web applications that is
> >> arguably a stupid feature as it compounds problems. Namely, just when
> >> you start to get a spike in requests, Apache tries to create more of
> >> your fat applications, which just load the machine even more and slow
> >> things down. In the worst case the slowing down makes Apache think
> >> it needs even more processes, and it can spiral out of control and
> >> choke your whole server. So, for fat Python web applications you are
> >> better off using multithreading and making sure you have enough
> >> processes/threads to handle expected demand to begin with.
>
> >> > Assuming worker MPM is used.
> >> > I'll be using either shared hosting or VPS, so reducing memory use is
> >> > very important.
>
> >> > - With mod_python, it's recommended to put a reverse proxy (eg. nginx)
> >> > in front of the fat Apache & serve static content from the reverse
> >> > proxy, does the same recommendation still apply to mod_wsgi?
>
> >> Yes. And turn keep alive off on Apache to ensure connections are
> >> released straight away. Keep alive is generally only effective for
> >> static media files, which nginx would then be handling. By disabling
> >> keep alive you get better utilisation of available connections and
> >> lower memory usage in Apache server child processes.
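Concretely, with nginx in front handling static files and client keep alive, the backend Apache only needs a single directive for this (a sketch; the nginx side is not shown):

```apache
# nginx in front serves static media and holds client keep alive
# connections, so release backend Apache connections immediately
# after each proxied request.
KeepAlive Off
```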
>
> >> > If daemon mode is used, will the front end Apache process act as an
> >> > effective reverse proxy?
>
> >> Apache is effectively acting as an internal proxy for the daemon
> >> processes. Even so, you are still better off pushing static media to
> >> an nginx in front of that. The overhead of the extra internal hop
> >> within Apache to get to the daemon processes is so small you would
> >> never see it within the context of typical request times for Python
> >> web applications.
>
> >> > - What about mod_wsgi for nginx - how does that compared to Apache's
> >> > mod_wsgi? Would it be less memory intensive?
>
> >> I can't really comment on that except to say that it doesn't matter
> >> what WSGI hosting mechanism you use, be it Apache, nginx or a pure
> >> Python web server such as Paste serve or the CherryPy WSGI server. For
> >> each process running your Python web application, each system is still
> >> going to use about the same amount of memory. This is because the
> >> underlying Python interpreter memory usage should always be the same
> >> and your Python web application is also going to always use about the
> >> same amount of memory as well. Any small differences that there may be
> >> would relate to how the web server aspect of the system uses memory
> >> differently, but the differences aren't generally going to be
> >> significant as long as you set the servers up properly. No particular
> >> system provides some magic bullet that somehow nullifies how much
> >> memory your actual Python web application uses and that is where the
> >> bulk of your memory will be used.
>
> >> Graham
>
>
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"modwsgi" group.
To post to this group, send email to modwsgi@googlegroups.com
To unsubscribe from this group, send email to 
modwsgi+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/modwsgi?hl=en
-~----------~----~----~----~------~----~------~--~---