Dear Ashraf,
1. Get yourself proper hardware; that alone would probably solve 80% of your
problems.
2. And your comment "So please help to scale the application for
concurrent users." is very impolite.
Nobody has to help you, and people in the open source world don't provide
support for commercial vendors unless they are PAID for it.
If you want to get it running NOW, then look for a company that will work out
your problems, and make a deal with them.
Otherwise, you are old enough to work out the know-how yourself.
Here is a good starting point:
http://nginx.org/en/docs/http/load_balancing.html
https://uwsgi-docs.readthedocs.io/en/latest/Broodlord.html
https://uwsgi-docs.readthedocs.io/en/latest/Fastrouter.html
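As a rough illustration of the first link, a minimal nginx load-balancing
setup could look like the following sketch (the upstream name and backend
addresses are placeholders, not taken from your setup):

upstream flask_backends {
    # two uWSGI instances speaking the uwsgi protocol on these addresses
    server 10.0.0.1:8001;
    server 10.0.0.2:8001;
}

server {
    listen 80;
    location / {
        include uwsgi_params;
        uwsgi_pass flask_backends;
    }
}

With two or more uWSGI instances listening on those addresses, nginx
distributes requests round-robin between them by default.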
If you have worked things out yourself, still can't get it running, and ask
GENTLY what you have done wrong, then you will get help.
best, Tamer
On 01.03.19 07:27, Ashraf Mohamed wrote:
Hi,
I have a Flask application running behind an nginx server, and I am
unable to serve the application to more than 20 concurrent users;
beyond that it breaks.
Error:
app: 0|req: 1/35] x.x.x.x () {44 vars in 5149 bytes} [Thu Feb 7
14:01:42 2019] GET /url/edit/7e08e5c4-11cf-485b-9b05-823fd4006a60 =>
generated 0 bytes in 69000 msecs (HTTP/2.0 200) 4 headers in 0 bytes
(1 switches on core 0)
OS version:
Ubuntu 16.04 (AWS)
CPU:
2 cores, 4 GB RAM
Web server:
nginx version: nginx/1.15.0
App architecture:
I have 2 applications running on different servers: app 1 (used for the
frontend) and app 2 (used for REST API calls). Both are Flask
applications.
app 1 uWSGI config:
[uwsgi]
module = wsgi
master = true
processes = 3
socket = app.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = test.log
buffer-size = 7765535
worker-reload-mercy = 240
thunder-lock = true
async = 10
ugreen
listen = 950
enable-threads = true
app 1 nginx config:
user root;
worker_processes 5;

events {
    worker_connections 4000;
}

http {
    server {
        limit_req zone=mylimit burst=20 nodelay;
        limit_req_status 444;
        listen 80 backlog=1000;
        listen [::]:80;
        server_name domain name;

        location /static {
            alias /home/ubuntu/flaskapp/app/static;
        }

        location / {
            include uwsgi_params;
            uwsgi_read_timeout 120;
            client_max_body_size 1000M;
            uwsgi_pass unix:///home/ubuntu/flaskapp/app.sock;
        }
    }
}
app 2 uWSGI config:
[uwsgi]
module = wsgi
master = true
processes = 5
socket = app2.sock
chmod-socket = 777
vacuum = true
die-on-term = true
logto = sptms.log
async = 10
ugreen
worker-reload-mercy = 240
enable-threads = true
thunder-lock = true
listen = 2000
buffer-size = 65535
no-defer-accept = true
stats = stats.sock
memory-report = true
app 2 nginx config:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;
    proxy_connect_timeout 2000;
    proxy_read_timeout 2000;
    fastcgi_read_timeout 2000;
    error_log /var/log/nginx/error.log info;
    include mime.types;
    gzip on;

    server {
        listen 80 backlog=2048;
        server_name x.x.x.x;

        location / {
            include uwsgi_params;
            uwsgi_pass unix:///home/ubuntu/app/app2.sock;
            #keepalive_timeout 155s;
        }
    }
}
So please help to scale the application for concurrent users.
Thanks
Ashraf
_______________________________________________
uWSGI mailing list
uWSGI@lists.unbit.it
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi