I saw a closely related issue on the repo, so I feel I should 
give an answer...

web2py is a web framework: we're not in the business of managing processes 
or webservers. We're in the business of building an app that does what 
you want in response to a web request.
For development purposes (and relatively small deployments) web2py includes 
a single-process, threaded webserver: Rocket.
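As a sketch (the password, interface and port values here are just examples), starting that bundled server from a source checkout looks like:

```shell
# Start web2py on the bundled Rocket server (development use).
# -a sets the admin password, -i and -p pick the interface and port.
python web2py.py -a 'mypassword' -i 127.0.0.1 -p 8000
```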

NOBODY says that you MUST run web2py under Rocket. For all intents and 
purposes, in production we (or, better said, the people who manage 
production-grade environments) highly encourage uwsgi + nginx on unix and 
fastcgi + iis on windows.

web2py is compliant with the WSGI standard, which at the moment is the only 
widely accepted way to run Python web-serving apps. Usually frontends are 
I/O bound, not CPU bound: threading is the best choice there, so the Rocket 
threaded webserver is just a matter of providing a "sane default".
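To see what that standard amounts to, here is a minimal WSGI application — the same callable interface web2py exposes to any WSGI server (the function name and body text are just illustrative):

```python
# A minimal WSGI application: any WSGI-compliant server (Rocket,
# uwsgi, mod_wsgi, ...) can host a callable with this signature.
def application(environ, start_response):
    # environ: CGI-style dict describing the request
    # start_response: callable taking the status line and the header list
    body = b"Hello from a WSGI app\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    # The return value is an iterable of bytes
    return [body]
```

You can serve this with the stdlib reference server (`wsgiref.simple_server.make_server("127.0.0.1", 8000, application).serve_forever()`) — Rocket is essentially that idea, done properly with a thread pool.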

web2py's scheduler is simply a facility for offloading CPU-heavy (or 
long-running) tasks to an external process. Webservers DO enforce a timeout 
on each and every request (again, a matter of "sane defaults" on their end), 
and so the "pattern" of handing those kinds of things off to "something 
else" has gained traction.
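The pattern itself is easy to sketch without web2py at all: the web process only queues the job and answers immediately, while a worker does the heavy lifting. Here the stdlib `multiprocessing` pool stands in for the scheduler's db-backed queue; all names are illustrative:

```python
# Hand off a CPU-bound job to a worker process and reply immediately --
# the same shape as web2py's scheduler, which uses a db-backed queue
# instead of an in-memory pool.
from multiprocessing import Pool

def count_primes(lo, hi):
    """CPU-bound work we would never run inside a request handler."""
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def handle_request(pool):
    # The web process only *queues* the job; the reply goes out at once.
    ticket = pool.apply_async(count_primes, (2, 10_000))
    return "job queued", ticket

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        reply, ticket = handle_request(pool)
        print(reply)                   # immediate answer to the user
        print(ticket.get(timeout=60))  # result collected later by a worker
```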

web2py (and the scheduler) just adhere to "standards" and "sane defaults". 
I wouldn't be startled by a web-serving process consuming a tenth of the 
CPU resources given to the external processes, because the role of the 
web-serving process is reacting to user input. If the reaction is a 
web page, the time taken to generate the page is usually far less than the 
time it takes to transfer it (or, better said, the time web2py spends 
rendering the page is far less than the time taken to retrieve results from 
the db and ship them to the user's browser). If the reaction to a user 
input is to calculate the prime numbers from 10^56 to 10^101... well, you 
need to ship that work elsewhere, because running it inside a web-serving 
app would only mean forcing the user to wait way too long for a reply.

Back to "how should web2py be handled"... you can choose whatever you 
like, according to your own app's needs and number of users (or, again 
better said, the number of requests to serve per second): USUALLY you'd 
want something like uwsgi+nginx or iis+fastcgi to do the heavy work of 
spawning web2py processes as needed and having those serve the users 
requesting pages. They do an excellent job and provide nice facilities 
(like buffering) to make everything as speedy as possible.
Long-running tasks can OPTIONALLY be handled by an external process (like 
the scheduler) to avoid hitting the "sane limit" defaults imposed by the 
majority of webservers (such as the rule that a request should be handled 
in AT MOST 60 seconds, which, if you really think it through, is a REALLY 
long time to wait for whatever webpage the user asks for).
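As a concrete (and purely illustrative) sketch of that setup, a uwsgi config along these lines is common — the paths and numbers are assumptions, `wsgihandler.py` is the WSGI entry point shipped at web2py's root, and `harakiri` is exactly the per-request timeout discussed above:

```ini
; illustrative uwsgi configuration for web2py behind nginx
[uwsgi]
socket    = /tmp/web2py.sock   ; nginx proxies requests to this socket
chdir     = /home/www-data/web2py
module    = wsgihandler        ; web2py's bundled WSGI entry point
master    = true
processes = 4                  ; uwsgi spawns/recycles workers as needed
harakiri  = 60                 ; kill any request taking longer than 60s
```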

 

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.