hi Graham:
I'm using nginx + FastCGI, and my startup command is below:
python manage.py runfcgi host=127.0.0.1 port=3033 method=prefork
pidfile=/var/run/fcgi.pid minspare=5 maxspare=30 maxchildren=60
maxrequests=200
P.S. I've invited you on gtalk; hope for your response.
On Thu, Mar 31, 2011 at 6:
It will fork 30 processes, and every process will cost nearly the same amount
of memory (60M-70M).
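To check that per-process figure directly, a minimal sketch could read RSS from /proc (Linux only; how you collect the worker pids is up to you and is assumed here, not taken from the thread):

```python
import os

def rss_kb(pid):
    """Return resident set size in kB from /proc/<pid>/status (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # /proc reports the value in kB
    return 0

def total_rss_kb(pids):
    """Sum RSS over a list of worker pids (e.g. the prefork children)."""
    return sum(rss_kb(pid) for pid in pids)
```

Note that RSS includes memory shared between the prefork children (e.g. the interpreter and C extensions), so summing it overstates the true total somewhat.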
On Thu, Mar 31, 2011 at 5:55 AM, Diederik van der Boor wrote:
> On Wednesday, 30 March 2011 at 17:48:38, 付毅 wrote:
> > hi Xavier Ordoquy
> > So, you mean 60M per python process is a normal status?
hi Xavier Ordoquy:
So, you mean 60M per python process is a normal status? I have really
encountered this kind of website before.
On Wed, Mar 30, 2011 at 11:35 PM, Xavier Ordoquy wrote:
>
> On 30 March 2011 at 15:41, 付毅 wrote:
>
> > I don't think 100 instances means 100 CPUs
hi Bennett:
I attached my ps output file; just look at the RSS column, that is the real
memory usage for every instance.
On Wed, Mar 30, 2011 at 9:51 PM, James Bennett wrote:
> On Wed, Mar 30, 2011 at 8:41 AM, 付毅 wrote:
> > I don't think 100 instances means 100 CPUs. I just want to use http_load
when I want to use it, just dynamically load
them as needed? Any suggestions?
On Wed, Mar 30, 2011 at 9:41 PM, 付毅 wrote:
> I don't think 100 instances means 100 CPUs. I just want to use http_load
> to put pressure on my web server. If I make 100 concurrent requests per
> second
I don't think 100 instances means 100 CPUs. I just want to use http_load
to put pressure on my web server. If I make 100 concurrent requests per
second with 100 concurrent connections, I will end up with 100 python process
instances when I deploy my web project in a prefork environment. Does anyone
else agree with me?
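As a back-of-the-envelope check using the figures quoted in this thread (and noting that maxchildren=60 in the runfcgi command would cap the worker count below 100 anyway):

```python
# Rough peak-memory estimate for the prefork setup discussed in this thread.
per_process_mb = 65      # observed RSS per worker, ~60-70 MB
max_children = 60        # maxchildren=60 from the runfcgi command
concurrent = 100         # http_load concurrency being tested

workers_at_peak = min(concurrent, max_children)  # prefork stops at maxchildren
peak_mb = workers_at_peak * per_process_mb
print(workers_at_peak, peak_mb)  # 60 workers, 3900 MB at full load
```

So under this load the box needs roughly 4 GB just for the workers, which is why the per-process RSS matters so much here.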
No, I don't use any caching, and I've tried removing the app modules which I
suspected were costing memory. I found that the biggest memory eater is the
app module that has a very large models.py.
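One way to confirm that a large models.py dominates the footprint is to measure RSS growth around its import. A rough sketch ("myapp.models" is a hypothetical module name; ru_maxrss is peak RSS, so this is only an upper-bound estimate):

```python
import importlib
import resource

def import_cost_kb(module_name):
    """Approximate memory cost of importing a module.

    Uses ru_maxrss (peak RSS; kB on Linux, bytes on macOS), so the result
    is a rough upper bound, not an exact per-module figure, and it is 0
    if the module was already imported.
    """
    before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    importlib.import_module(module_name)
    after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return after - before

# Hypothetical usage in a fresh interpreter:
# print(import_cost_kb("myapp.models"))
```

Running this in a fresh interpreter for each suspect app module gives a quick ranking of which import costs the most.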
On Wed, Mar 30, 2011 at 8:32 PM, moham...@efazati.org wrote:
> do you have any caching?
>
> On 03/30/2011 04:33 PM, bir