On Nov 25, 6:58 pm, David Sissitka <[EMAIL PROTECTED]> wrote:
> Hello,
>
> > then why do you recommend that apache be restarted at 17, 37 and 57 minutes? 
> > In fact *you* yourself install that cronjob on all django sites?
>
> The cron job you're referring to:
>
> 1) Exists because if you have a spike in CPU or memory usage that
> affects others on the same server, your processes will be killed. If
> we kill your processes, the cron job ensures that your website won't
> be down for too long.
> 2) Only restarts your instance of Apache if it's down.
>
> > debug is false everywhere, apache is tweaked as per your recommendations 
> > and static media is served through the system wide apache. My client had to 
> > upgrade to 80 MB from 40 MB for a site that is 90% admin and has at the 
> > most, three users at a time.
>
> A memory leak isn't the only reason that your instance of Apache
> could be using more than 40 MB of memory; off the top of my head I
> can think of nearly half a dozen reasons, and I'm just WebFaction's
> Django monkey. With that said, the memory-usage-related Apache/Django
> tips on our blog are by no means a one-size-fits-all solution for
> decreasing Apache's memory usage.

You may want to read my response to a prior question about memory
usage:

  
http://groups.google.com.au/group/django-users/browse_frm/thread/7670886c04ac082d

In short, if you are running Django under Apache's worker MPM, you can
be subject to sudden memory increases when you get concurrent requests
against resources which chew up a lot of transient memory. Receiving a
lot of concurrent POST requests with large content can be one trigger
if the framework reads all of that POST content into memory at the
same time.
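One preventative measure an application can take itself is to refuse
oversized bodies before anything is buffered. A minimal WSGI middleware
sketch (the `limit_body_size` name and the 10 MB cap are illustrative,
not from Django or any framework) that rejects requests whose declared
Content-Length exceeds a cap:

```python
MAX_BODY = 10 * 1024 * 1024  # 10 MB cap; tune to your memory budget

def limit_body_size(app, max_body=MAX_BODY):
    """Wrap a WSGI app so oversized request bodies are rejected
    up front instead of being read into memory."""
    def middleware(environ, start_response):
        try:
            length = int(environ.get('CONTENT_LENGTH') or 0)
        except ValueError:
            length = 0
        if length > max_body:
            start_response('413 Request Entity Too Large',
                           [('Content-Type', 'text/plain')])
            return [b'Request body too large\n']
        return app(environ, start_response)
    return middleware
```

This only checks the declared Content-Length, so a chunked or lying
client would still need the framework itself to enforce a limit while
reading the stream.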

The problem is that more often than not people don't understand the
dynamics of how the underlying hosting system actually works and the
implication of using multithreading. Thus, they tend to blame the
hosting system, rather than taking preventative measures in their own
application to avoid it.

I believe that what is going to have to start happening is that the
web frameworks themselves will have to build in protections of some
kind to counter such problems. For example, in a multithreaded server
the framework could temporarily delay a POST request when some
specified number of requests are already running, or when the POST
content size for existing requests has exceeded some set value.
Similarly, where users know that specific URL handlers will consume a
lot of transient memory, they need to be given a way of restricting
the number of concurrent requests allowed against that resource, or
such a group of resources, at one time.
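The sort of per-resource restriction described above could be sketched
as a counting semaphore around the expensive handlers. This is a
hypothetical illustration (the `ConcurrencyThrottle` class is not part
of Django or any framework), assuming a multithreaded server where
handlers run in worker threads:

```python
import threading

class ConcurrencyThrottle:
    """Hypothetical sketch: cap how many threads may run a
    memory-hungry handler (or group of handlers) at once."""

    def __init__(self, limit, timeout=5.0):
        self._sem = threading.BoundedSemaphore(limit)
        self._timeout = timeout

    def run(self, handler, *args, **kwargs):
        # Wait for a free slot; give up after the timeout rather
        # than letting blocked requests pile up indefinitely.
        if not self._sem.acquire(timeout=self._timeout):
            raise RuntimeError('too many concurrent requests')
        try:
            return handler(*args, **kwargs)
        finally:
            self._sem.release()
```

A framework-level version would map a group of URLs to one throttle
instance and translate the failure into a 503 response.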

As long as people want to run big Python web applications in
memory-constrained web hosting configurations, where Apache's worker
MPM with a small number of processes is the only viable option and
prefork MPM isn't, this is going to continue to be an issue. The only
solutions are to have the web applications themselves throttle
concurrent requests, or at least to temper the issue with hosting
solutions which allow more flexibility in defining how often web
application processes are recycled in order to reclaim memory, or
which can themselves monitor memory usage and force process restarts
when threshold values are exceeded.
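The last option could be approximated inside the application process
itself. Everything in this sketch is illustrative (the function names,
the 80 MB threshold, and the choice of SIGTERM are assumptions, and a
supervisor such as the hosting cron job is assumed to restart the
process); it also assumes Linux, where `ru_maxrss` is reported in
kilobytes:

```python
import os
import resource
import signal
import threading
import time

def rss_megabytes():
    # Peak resident set size of this process. ru_maxrss is in
    # kilobytes on Linux (bytes on macOS); Linux is assumed here.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

def start_memory_watchdog(threshold_mb=80, interval=30.0):
    # Daemon thread that asks the process to exit once peak memory
    # crosses the threshold, so a supervisor can restart it fresh.
    def loop():
        while True:
            if rss_megabytes() > threshold_mb:
                os.kill(os.getpid(), signal.SIGTERM)
                return
            time.sleep(interval)
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread
```

Doing this at the hosting layer is still preferable, since an
in-process watchdog can itself be starved once memory is exhausted.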

Graham
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To post to this group, send email to django-users@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/django-users?hl=en
-~----------~----~----~----~------~----~------~--~---
