On Thursday, March 31, 2011 4:57:20 PM UTC+11, bird sky wrote:
> hi Graham,
> I'm using nginx + FastCGI. My startup command is below:
>
> python manage.py runfcgi host=127.0.0.1 port=3033 method=prefork
> pidfile=/var/run/fcgi.pid minspare=5 maxspare=30 maxchildren=60
> maxrequests=200
>
> P.S. I've invited you on Gtalk; I hope to hear from you.
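For anyone reading along, the nginx side that pairs with this runfcgi command would look roughly like the sketch below (standard nginx FastCGI directives; the address matches the command above, everything else is a placeholder):

```nginx
location / {
    # hand every request to the flup FastCGI server started by runfcgi
    fastcgi_pass 127.0.0.1:3033;
    include fastcgi_params;
}
```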
hi Diederik,
I think you can see from my attached file that each Python process costs
60-70MB. That is because I start flup with the command below:
python manage.py runfcgi host=127.0.0.1 port=3033 method=prefork
pidfile=/var/run/fcgi.pid minspare=5 maxspare=30 maxchildren=60
Why are you using prefork MPM and running Django embedded that way?
Prefork MPM may be fine for PHP, but it is a poor solution for fat Python
web applications unless you are prepared to give it the necessary memory
resources and configure Apache specifically for that single Python web
application.
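What Graham is hinting at can be sketched with mod_wsgi's daemon mode, which decouples the Django process count from however many workers prefork spawns (the path and the process/thread numbers below are placeholders, not a recommendation):

```apache
# A fixed pool of dedicated Django daemon processes, independent of
# Apache's prefork worker count; Apache workers merely proxy to it.
WSGIDaemonProcess mysite processes=4 threads=15
WSGIProcessGroup mysite
WSGIScriptAlias / /path/to/mysite/wsgi.py
```

With this, prefork can spawn hundreds of cheap Apache workers while only the four daemon processes carry the 60MB Python footprint.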
On Wednesday, 30 March 2011 17:48:38, 付毅 wrote:
> hi Xavier Ordoquy,
> So you mean 60MB per Python process is normal? I have never
> encountered this with a website before
Please note this is for the entire website, not per Apache instance.
There is one Django process (via mod_wsgi, or standalone)
I think 60MB is fine. For us, with nginx in front of gunicorn, we get
many simultaneous connections per process.
-Adam
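Adam's setup can be sketched as below (the module path and worker count are placeholders; `--workers` and `--bind` are gunicorn's standard flags):

```shell
# nginx proxies plain HTTP to gunicorn instead of speaking FastCGI;
# a handful of workers serves many connections while nginx buffers.
gunicorn myproject.wsgi:application --workers 3 --bind 127.0.0.1:8000
```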
--
You received this message because you are subscribed to the Google Groups
"Django users" group.
To post to this group, send email to django-users@googlegroups.com.
Hi,
Sounds correct to me.
Regards,
Xavier.
On 30 March 2011 at 17:48, 付毅 wrote:
> hi Xavier Ordoquy,
> So you mean 60MB per Python process is normal? I have never encountered
> this with a website before
hi Xavier Ordoquy,
So you mean 60MB per Python process is normal? I have never
encountered this with a website before.
On Wed, Mar 30, 2011 at 11:35 PM, Xavier Ordoquy wrote:
hi Bennett,
I've attached my ps output file. Just look at the RSS column; that is the
real memory usage for each instance.
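For reference, the RSS column can be summed mechanically; a minimal sketch that assumes `ps aux`-style output (RSS in KB as the sixth column; the sample rows below are fabricated stand-ins for the attached dump):

```python
def total_rss_kb(ps_output):
    """Sum the RSS column (KB) of `ps aux`-style output, skipping the header."""
    total = 0
    for line in ps_output.strip().splitlines()[1:]:
        fields = line.split()
        total += int(fields[5])  # RSS is the sixth column of `ps aux`
    return total

# Two fabricated worker rows standing in for the attached ps dump:
sample = """\
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
www 101 0.0 1.5 200000 61440 ? S 10:00 0:01 python manage.py runfcgi
www 102 0.0 1.6 200000 63488 ? S 10:00 0:01 python manage.py runfcgi"""

print(total_rss_kb(sample))  # 124928 (KB), about 61 MB per worker
```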
On Wed, Mar 30, 2011 at 9:51 PM, James Bennett wrote:
I've found that one of my app modules has very large models: there are 207
models in models.py. When I add this app to INSTALLED_APPS, the process
instance memory increases by nearly 10MB.
I wonder: can I move these model classes out of my models.py file and into
another .py file? And when
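On splitting models.py: Django only requires that the classes be importable from the app's `models` module, so a package works. A sketch (module and class names below are hypothetical; on Django of that era, a model moved into a sub-module may also need an explicit `app_label` in its `Meta`):

```python
# myapp/models/__init__.py
# Re-export everything so `from myapp.models import Invoice` still works
# and Django still discovers all the models. Names are hypothetical.
from myapp.models.billing import Invoice, Payment
from myapp.models.catalog import Product, Category
```

Note that splitting the file alone will not save memory: all 207 classes are still imported at startup. The roughly 10MB only goes away if the rarely used models move into a separate app that is left out of INSTALLED_APPS.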
I don't think 100 instances means 100 CPUs. I just want to use http_load
to stress-test my web server. If I make 100 concurrent requests per
second over 100 concurrent connections, I will get 100 Python process
instances when I deploy my web project in a prefork environment. Any
others agree w
Hi,
> Because if an instance costs 60MB of memory, when I deploy my project in
> a prefork web server with 100 instances, it will cost 6GB of memory. I
> don't think this is a normal state.
I hardly see the need for 100 instances.
Could you elaborate on that need?
Imagine those 100 instances are processing
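The arithmetic behind the 6GB worry, as a sketch (numbers taken from the thread; the point of the replies is that a small bounded pool replaces the need for 100 processes):

```python
per_process_mb = 60        # RSS per Django process, from the ps output
workers = 100              # the hypothetical prefork instance count

print(workers * per_process_mb / 1024)   # 5.859375 -> the ~6 GB figure

# With a bounded pool (flup's maxchildren, or a few gunicorn workers),
# the worst case shrinks dramatically while nginx queues the rest:
pool = 10
print(pool * per_process_mb)             # 600 (MB)
```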
No, I don't use any caching, and I've tried removing the app module I
suspected was costing memory. I found that the biggest memory eater is the
app module with a very large models.py.
On Wed, Mar 30, 2011 at 8:32 PM, moham...@efazati.org wrote:
> do you have any caching?
Do you have any caching?
On 03/30/2011 04:33 PM, bird sky wrote:
Hello everybody,
I have a problem with my Django project: it has more than 60 app
modules, and some models are very large, with more than 30 fields. When I
start up my project, whether under the development server, FastCGI (flup),
or mod_wsgi, I find it costs at least 60MB of memory per instance.