Brilliant; thank you for your prompt, thorough and helpful response Graham.

2009/9/17 Graham Dumpleton <[email protected]>

>
> 2009/9/17 Simon Meers <[email protected]>:
> > I have a dozen or so Django-powered sites running in a modwsgi setup
> > using two WSGIProcessGroups (depending on the timezone required --
> > UTC/local). This is working very well -- thank you :)
> >
> > I discovered an issue yesterday however, when I began doing some image
> > processing using PIL -- one of the processes jumped from using 30MB to
> > 150MB of RAM. It does not appear to be a memory leak as such, and does
> > not increase much beyond this with repetition, but just loading the
> > relevant libraries, etc. This would be fine except that I have so far
> > existed very happily running all these sites from within my modest
> > 80MB (RAM) WebFaction account, and hogging this much RAM for too long
> > will cause my processes to be killed (and rightly so).
> >
> > This morning I discovered the handy maximum-requests and
> > inactivity-timeout WSGIDaemonProcess options, which do a pretty good
> > job of ensuring that the processes get restarted regularly enough to
> > avoid being killed.
> >
> > However I'm wondering if there is a better way of handling this;
> > rather than waiting for inactivity or a certain number of requests, it
> > would be nice to be able to cleanly restart the process when the
> > resource usage breached a predefined threshold.
>
> Although UNIX environments have setrlimit() and can specify
> RLIMIT_DATA/RLIMIT_AS/RLIMIT_RSS to control the memory available,
> there is no way of having a signal sent to the process when some soft
> limit is exceeded so that the process could be restarted, as one can
> with RLIMIT_CPU. What is therefore required is some sort of separate
> background task, either a thread or an external process, that can
> somehow determine when the memory limit is reached and generate the
> signal. The problem is that there is no portable way of finding out
> how much memory is used from within a process using system C APIs,
> which is why I don't provide such an inbuilt feature.
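The background-task idea described above can at least be sketched in-process on a specific platform. A minimal sketch, assuming a Unix host (note ru_maxrss is reported in kilobytes on Linux but bytes on macOS) — the 128MB limit and 5-second interval are illustrative values, not recommendations:

```python
# In-process memory watchdog sketch: a daemon thread checks peak RSS and
# sends SIGTERM to its own process once a threshold is crossed. mod_wsgi
# restarts a daemon process cleanly on SIGTERM, so this forces a recycle.
import os
import resource
import signal
import threading
import time

LIMIT_KB = 128 * 1024   # illustrative: restart once peak RSS exceeds ~128MB
CHECK_INTERVAL = 5.0    # illustrative: seconds between checks

def peak_rss_kb():
    """Return peak resident set size (kilobytes, Linux semantics)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def _watchdog():
    while True:
        time.sleep(CHECK_INTERVAL)
        if peak_rss_kb() > LIMIT_KB:
            # SIGTERM lets mod_wsgi shut this daemon down and respawn it.
            os.kill(os.getpid(), signal.SIGTERM)
            return

def start_watchdog():
    # Daemon thread, so it never blocks process shutdown.
    t = threading.Thread(target=_watchdog, daemon=True)
    t.start()
    return t
```

One caveat with this approach: ru_maxrss is a high-water mark, so a one-off spike will keep triggering restarts until usage genuinely drops.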
>
> > I could not find any such option in the documentation; I don't think
> > that stack-size is quite what I'm after? I could run a cron job to
> > monitor the RAM usage and issue apache stop/start commands, but this
> > seems to lead to a brief period of "Service Temporarily Unavailable"
> > status during the process.
>
> You are getting "Service Temporarily Unavailable" because you are
> stopping/starting the whole of Apache.
>
> If you are using 'ps' to work out how much memory is used, then ensure
> you are using the display-name option to WSGIDaemonProcess with the
> argument '%{GROUP}'. This will result in the 'ps' process list showing:
>
>  (wsgi:groupname)
>
> where groupname is the name given as the first argument to
> WSGIDaemonProcess.
>
> You can then look for any WSGI daemon process and, if it is over your
> defined limit, send it a SIGTERM signal using the kill command. That
> will cause the WSGI daemon process to be restarted without affecting
> the main Apache processes. You wouldn't then expect to see "Service
> Temporarily Unavailable", as the main Apache processes will have the
> effect of queuing up requests until the WSGI daemon process is ready
> to handle them again.
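That ps/kill approach could be scripted, for example from cron. A sketch assuming display-name='%{GROUP}' is in effect so the daemons appear as "(wsgi:groupname)" in ps output; the group name "bloat" and 100MB limit are illustrative:

```python
# External watcher sketch: parse `ps` output for mod_wsgi daemon
# processes of a given group and SIGTERM any whose RSS exceeds a limit.
# SIGTERM restarts just that daemon; Apache keeps queuing requests.
import os
import signal
import subprocess

LIMIT_KB = 100 * 1024  # illustrative: kill daemons over ~100MB RSS

def find_bloated(ps_output, group="bloat", limit_kb=LIMIT_KB):
    """Given `ps -eo pid,rss,command` output, return PIDs of WSGI
    daemon processes for `group` whose RSS (KB) exceeds limit_kb."""
    pids = []
    for line in ps_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split(None, 2)
        if len(fields) < 3:
            continue
        pid, rss, command = fields
        if "(wsgi:%s)" % group in command and int(rss) > limit_kb:
            pids.append(int(pid))
    return pids

def restart_bloated(group="bloat"):
    ps = subprocess.run(["ps", "-eo", "pid,rss,command"],
                        capture_output=True, text=True, check=True)
    for pid in find_bloated(ps.stdout, group):
        os.kill(pid, signal.SIGTERM)
```

Run under the same user as the daemon processes (or root) so the kill is permitted.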
>
> > The image processing tasks are performed occasionally by
> > administrators, not during general use of the site.
> >
> > Any suggestions?
>
> Apart from the above, I can only suggest having another
> WSGIDaemonProcess group, delegating to it just the URLs which are only
> occasionally used and which cause the transient memory usage spike,
> and then setting more aggressive maximum-requests and
> inactivity-timeout options on just those URLs. For example:
>
>  WSGIDaemonProcess main
>  WSGIDaemonProcess bloat maximum-requests=100 inactivity-timeout=20
>
>  WSGIScriptAlias / /some/path/application.wsgi
>  WSGIProcessGroup main
>
>  <Location /memory/hungry/url>
>  WSGIProcessGroup bloat
>  </Location>
>
> In other words, partition a single Django instance across multiple
> daemon process groups based on subsets of URLs.
>
> > Also, is there a way of seeing a memory usage breakdown to pinpoint
> > problem areas?
>
> Not really.
>
> Graham
>
> >
>
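On that last point, one very coarse substitute for a real memory breakdown is counting live objects by type with Python's gc module; it won't show bytes per area, but it can hint at what dominates after a spike. A sketch (all names illustrative):

```python
# Coarse in-process snapshot: tally live gc-tracked objects by type.
# Comparing two snapshots taken before and after a suspect operation
# can point at which object kinds are accumulating.
import gc
from collections import Counter

def object_census(top=10):
    """Return the `top` most common live object types as (name, count),
    sorted by descending count."""
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(top)
```

Only gc-tracked containers are counted, so raw buffers (like PIL image data) won't show up directly, only the objects holding them.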

