On Wed, 2008-01-30 at 11:03 -0600, James Bennett wrote:
> On Jan 30, 2008 8:57 AM, Mark Green <[EMAIL PROTECTED]> wrote:
> > Well, "Build for failure". Temporary overload can happen at any
> > time and I'd expect django to behave exceptionally bad in that
> > case as it is.
> 
> Running out of resources is never a good thing for any system.

Obviously.

> > Disclaimer: I haven't actually tested this behaviour but I've seen it
> > in JDBC apps before we added pooling and don't know why django should
> > be different. These apps would basically "chop off" (i.e. return errors
> > for) the excess percentile of requests. Naturally the affected users
> > would use their "reload"-button and there we have a nice death spiral...
> 
> And if it just slows down you don't think they'll do the same thing?

Ahem, there's a huge difference between being confronted with
a spinner/progress bar and being confronted with an error page.
The former says "Please wait", the latter says "Try again".

> > Not really. My desire is to make each individual django instance
> > play well when things get crowded. Making them aware of each other,
> > or even making all database clients aware of each other, sounds
> > like an interesting project but is not what I'm after here.
> 
> But in order to know that things are "crowded", each one has to know
> what all the others are doing. And any non-Django application using
> the same database *also* has to include its own copy of all that
> configuration.

I guess I still haven't made it clear what I mean.
I expect *each individual instance* of Django to behave gracefully
when it can't get through to the database. No group knowledge is
needed for a plain old connection retry or (better) connection pooling.
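
Just to illustrate what I mean by "plain old connection retry": this is
only a rough sketch using psycopg2 directly, not actual Django code, and
the retry count/delay numbers are made up:

    import time
    import psycopg2  # assuming a PostgreSQL backend

    def connect_with_retry(dsn, attempts=3, delay=0.5):
        # Retry a failed connect a few times instead of erroring out
        # the request on the first refused / "too many clients" connect.
        for i in range(attempts):
            try:
                return psycopg2.connect(dsn)
            except psycopg2.OperationalError:
                if i == attempts - 1:
                    raise  # still overloaded, give up for real
                time.sleep(delay * (i + 1))  # crude linear backoff

A real pool would obviously be better than reconnecting every time,
but even something this dumb would take the edge off a short spike.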

> > Well, there is a point where a single instance of the external
> > utility doesn't cut it anymore. The only way to go seems to be
> > one pgpool instance per django instance (for performance and
> > to avoid the single point of failure).
> 
> Again: you're repeatedly changing the topic from connection pooling to
> failover. When you decide you want to talk about one or the other for
> more than a few sentences at a time, let me know.

Erm, I have not mentioned failover a single time. I'm just trying
to point out where your "let pgpool handle it" strategy seems to
fall down.

> > Maybe I'm blowing all this out of proportion
> 
> Almost certainly.
> 
> > but I wonder
> > if any of the high-traffic, multi-server django sites ever
> > ran into it?
> 
> Not really. If you're hitting the max on your DB you have more
> immediate problems than whether your users see an error page or an
> eternal "Loading..." bar.

I'm not talking about maxing out the DB constantly. I'm talking about
brushing against the limits during peak hours, which is something I'm
pretty sure almost every bigger site has experienced at least once
(cf. "growing pains").

During these peak hours there's a huge difference between users randomly
getting an error page and users randomly having to wait a little longer.


-mark


