Re: all on one server release, ballpark?

2007-06-26 Thread Graham Dumpleton

On Jun 26, 11:00 pm, "Nimrod A. Abing" <[EMAIL PROTECTED]> wrote:
> On 6/26/07, Malcolm Tredinnick <[EMAIL PROTECTED]> wrote:
>
> > I don't want to sound discouraging, but if the answer is at all critical
> > to your operation, you can't trust any numbers you get here. They will
> > not have the same usage patterns as yours. Benchmark, benchmark,
> > benchmark is the only way.
>
> FWIW, I use this:
>
> http://www.joedog.org/JoeDog/Siege
>
> to do stress and load testing as well as benchmarking.
>
> In most of my cases, even a modest machine (1GB memory, dual-core
> Intel [EMAIL PROTECTED]) can handle up to 1,000 hits per minute. By hits I
> mean hits to the Apache server running mod_python with the Django app. The
> first choking point you're likely to come up against is bandwidth limits.
> But as Malcolm said above, your use case may not be the same as
> everyone else's and you should benchmark your own app to determine
> what your server can handle.

As soon as you add any degree of database access I would expect that
request rate to drop off somewhat. It is usually the database that is
the bottleneck, not bandwidth limitations. This is part of the reason
for using memcached, since it can alleviate the database load to a
degree.
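
For example, if your expensive pages can tolerate being slightly stale,
Django's per-view cache plus the low-level cache API is a quick win. A
rough sketch only (the Article model and the cache key are made up for
illustration, and it assumes the memcached backend is already configured
in settings.py):

    # views.py -- illustrative only; tune the timeouts for your site
    from django.views.decorators.cache import cache_page
    from django.core.cache import cache
    from django.http import HttpResponse
    from myapp.models import Article  # hypothetical model

    @cache_page(60 * 5)  # cache the whole rendered response for 5 minutes
    def article_list(request):
        # Also cache the expensive query itself so other views can reuse
        # it without going back to the database.
        articles = cache.get('article_list')
        if articles is None:
            articles = list(Article.objects.all()[:50])
            cache.set('article_list', articles, 60 * 5)
        return HttpResponse(', '.join(a.title for a in articles))

Either approach keeps repeat requests away from the database, which is
where the real contention tends to show up.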

One other factor to consider is whether the prefork or worker MPM is
being used. If you can validate that your application works okay under
the worker MPM (i.e. that it is thread safe), then the fact that fewer
Apache child processes are used with this MPM means you can reduce your
overall memory consumption. This may matter if your application is quite
fat and your box is becoming memory constrained even with moderate
levels of traffic. If prefork is used and a large burst of traffic
causes Apache to start creating additional child processes on a system
which is already memory constrained, physical memory can be exhausted,
processes start being swapped out, and everything bogs down.
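
If you do go down the worker path, something along these lines in
httpd.conf is a reasonable starting point (the figures are purely
illustrative; tune them against your own memory budget and your thread
safety testing):

    # Illustrative worker MPM settings only -- not a recommendation
    <IfModule mpm_worker_module>
        StartServers          2
        ServerLimit           4
        ThreadsPerChild      25
        MaxClients          100   # ServerLimit x ThreadsPerChild
        MinSpareThreads      25
        MaxSpareThreads      75
        MaxRequestsPerChild 1000
    </IfModule>

Four processes of 25 threads each will usually consume far less memory
than 100 prefork children, each loading the application in its own
process.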

PS. Yes Malcolm, I know I have not yet created a ticket about the
documentation and worker vs prefork. Been so busy of late; I have a few
dozen things to catch up on in my inbox. :-)

Graham





Re: all on one server release, ballpark?

2007-06-26 Thread Nimrod A. Abing

On 6/26/07, Malcolm Tredinnick <[EMAIL PROTECTED]> wrote:
> I don't want to sound discouraging, but if the answer is at all critical
> to your operation, you can't trust any numbers you get here. They will
> not have the same usage patterns as yours. Benchmark, benchmark,
> benchmark is the only way.

FWIW, I use this:

http://www.joedog.org/JoeDog/Siege

to do stress and load testing as well as benchmarking.
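
A typical run for me looks something like this (flags from memory, so
check the siege man page for your version; the URL is obviously just a
placeholder):

    # 25 concurrent simulated users hammering the site for one minute
    siege -c 25 -t 1M http://www.example.com/

That gives you the transaction rate, response times and failure counts
in one shot.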

In most of my cases, even a modest machine (1GB memory, dual-core
Intel [EMAIL PROTECTED]) can handle up to 1,000 hits per minute. By hits I mean
hits to the Apache server running mod_python with the Django app. The
first choking point you're likely to come up against is bandwidth limits.
But as Malcolm said above, your use case may not be the same as
everyone else's and you should benchmark your own app to determine
what your server can handle.

HTH.
-- 
_nimrod_a_abing_

http://abing.gotdns.com/
http://www.preownedcar.com/




Re: all on one server release, ballpark?

2007-06-26 Thread Malcolm Tredinnick

On Tue, 2007-06-26 at 13:40 +0200, Bram - Smartelectronix wrote:
> hey guys,
> 
> 
> due to some unfortunate events we will have to release our current 
> django-based site on one server instead of two. We were planning to put 
> postgres on another machine, but that's not possible for now.
> 
> The machine is a dual opteron 1.8GHz (dualcore) with 4GB of RAM. Running 
> postgres, django in mod_python + Apache2, lighttpd for media (on a 
> different IP, but on the same machine) *and* memcached for anonymous users!
> 
> Can anyone give me some kind of ballpark of what we will be able to
> handle with this? As in requests/second or whatever you might think of. I
> don't mind errors of 80%, it's just to have some idea ;-)

Seriously, these questions are almost impossible to answer. Things that
will cause variance in the answer: How much computation is done in your
views? How much data is retrieved in each request? How big is the
database (will it all fit into the buffer cache? how long do typical
queries take)? How much data will be memcached in typical use (this
depends on the number of different pages and the caching time)? What
happens if you don't use memcached, or vastly change what is cached?
What's the typical number of requests to the media server? What sort of
HTTP caching benefits are you going to get for your media (lots of
repeat requests for the same data)?

I don't want to sound discouraging, but if the answer is at all critical
to your operation, you can't trust any numbers you get here. They will
not have the same usage patterns as yours. Benchmark, benchmark,
benchmark is the only way.

Regards,
Malcolm

