There are different issues:
1) speed
2) scalability with number of users
3) scalability with data size
4) scalability with complexity of queries

1) Anthony has posted lots of advice in this direction. Out of the box 
web2py is not the fastest framework because it does more. For example, you 
cannot turn off parsing of HTTP headers, client address validation, cookie 
parsing, or session handling. This means there is roughly 2ms of per-request 
overhead that you cannot get rid of.
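
One place where you can claw some of that back yourself is the session: in 
actions that never modify it, you can tell web2py not to save it at the end 
of the request. A minimal sketch (the action name is just a placeholder):

    # In a controller, e.g. controllers/default.py
    def fast_api():
        # This action never writes to the session, so skip saving it back
        # (and the locking that goes with it) at the end of the request.
        session.forget(response)
        return dict(status='ok')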

2) You scale the same way as any other framework: with multiple web servers 
behind a load balancer.
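
One thing to watch once you are behind a load balancer: the default 
file-based sessions live on a single machine. A quick sketch of the usual 
options, in a model file (the cookie key is a placeholder you must replace):

    # models/db.py
    # Store sessions in the shared database so any server can read them...
    session.connect(request, response, db=db)

    # ...or keep them entirely client-side in signed/encrypted cookies:
    # session.connect(request, response, cookie_key='replace-with-a-secret')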

3) This only depends on the database you use. If you have lots of data I 
recommend PostgreSQL. Not really a web2py issue.
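
For completeness, a sketch of a PostgreSQL connection in a model file 
(credentials are placeholders); connection pooling and, once the schema is 
stable, migrate=False both help at this scale:

    # models/db.py -- DAL is already defined in the web2py model environment
    db = DAL('postgres://myuser:mypassword@localhost/mydb',
             pool_size=10,    # reuse database connections across requests
             migrate=False)   # skip table migration checks in production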

4) You need caching. web2py has multiple caching mechanisms and you should 
use them. Sometimes you need more than that, and you want a distributed 
cache like Redis. Not really a web2py-specific issue either.
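
A minimal sketch of both levels of caching in a controller (it assumes a 
table db.mytable exists; the key and expiry values are arbitrary):

    # Cache just the data: the lambda only runs on a cache miss.
    def expensive_list():
        rows = cache.ram('expensive_list_rows',
                         lambda: db(db.mytable.id > 0).select().as_list(),
                         time_expire=60)
        return dict(rows=rows)

    # Or cache an entire action: the decorated function only runs on a miss.
    @cache(request.env.path_info, time_expire=300, cache_model=cache.ram)
    def cached_page():
        return dict(rows=db(db.mytable.id > 0).select().as_list())

For a cache shared by all servers behind the load balancer, web2py ships 
gluon/contrib/redis_cache.py; its constructor arguments have changed between 
versions, so check the module in your copy before wiring it in.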



On Saturday, 1 September 2012 11:48:01 UTC-5, Webtechie wrote:
>
>
> I would like to use web2py for a web application which has large databases 
> (really large) and expects a high volume of traffic. Are there ways to 
> make web2py apps run faster (like really faster), apart from adding more 
> hardware, replacing CPython with PyPy, or running on a non-blocking server 
> like Tornado? How can I optimise web2py for my needs? Are web2py 
> applications scalable?
>
