Yes, I manage a (seemingly to me) large application: 30-40 requests per 
second average, sustained 24 hours a day. That app is the data access API 
for an iOS app plus an accompanying website. Some thoughts:
 - We use Google App Engine. On the upside it serves all my requests; on 
the downside we pay extra hosting money to make up for inefficient code.
 - We are using a class-based models approach. I'm interested in trying 
the new lazy tables feature and perhaps switching to that (see the lazy 
tables sketch after this list).
 - We use memcache where possible (we could use it more; we need to work 
on that). See the read-through cache sketch below.
 - We are starting to use the Google edge cache for pages/API responses 
that are not user specific (see the Cache-Control sketch below). We could 
use more of this, but I believe requests served by the edge cache still 
count toward our request numbers.
 - Some percentage of our API requests return fairly static JSON. In that 
case we regenerate the JSON when it changes (a few times a week), upload 
it to Amazon S3, and use a piece of router middleware to redirect the 
request before web2py is even invoked (see the WSGI sketch below). So we 
have some "creative" things in there that keep our request numbers high 
without those requests actually hitting web2py itself.
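
A minimal sketch of the lazy tables idea, as I understand the new DAL 
option (the table name and fields are made up for illustration; inside a 
web2py model file DAL and Field are already in scope, the import is just 
to keep the snippet standalone):

    from gluon.dal import DAL, Field

    # lazy_tables=True defers building each Table object until it is
    # first touched, which should cut per-request model overhead on GAE
    db = DAL('google:datastore', lazy_tables=True)

    db.define_table('post',
        Field('title'),
        Field('body', 'text'),
        Field('created_on', 'datetime'))

    # nothing heavy has happened yet; the 'post' table is only
    # constructed the first time code actually references db.post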
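
On the memcache side, the pattern we want everywhere is a plain 
read-through cache. A sketch against the raw App Engine memcache API 
(the key name, TTL, and the stand-in query function are all invented 
for illustration):

    from google.appengine.api import memcache

    CACHE_SECONDS = 300  # made-up TTL

    def expensive_datastore_query():
        # stand-in for whatever datastore work is being cached
        return [{'title': 'example'}]

    def get_popular_posts():
        # read-through cache: try memcache, fall back to the datastore
        key = 'popular_posts_v1'
        posts = memcache.get(key)
        if posts is None:
            posts = expensive_datastore_query()
            memcache.set(key, posts, time=CACHE_SECONDS)
        return posts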
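
For the edge cache, the key point is that a response has to be public 
and cookie-free before Google will cache it at the edge. A web2py 
controller sketch that sets the Cache-Control header (response and 
session are web2py's injected globals; the 300-second max-age is a 
made-up number):

    import json

    def popular():
        # drop the session cookie, otherwise Set-Cookie makes the
        # response per-user and the edge cache will not store it
        session.forget(response)
        response.headers['Cache-Control'] = 'public, max-age=300'
        return json.dumps({'items': ['example']})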
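
And the S3 redirect middleware is roughly a thin WSGI wrapper around 
web2py's app that 302s matching paths to S3 before web2py runs. The 
path prefix, bucket URL, and filename scheme here are placeholders, not 
our real routes:

    S3_BASE = 'https://my-bucket.s3.amazonaws.com/api-json'  # hypothetical
    JSON_PREFIX = '/api/catalog/'                            # hypothetical

    class StaticJSONRedirect(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            path = environ.get('PATH_INFO', '')
            if path.startswith(JSON_PREFIX):
                # send the client to the pre-generated file on S3
                name = path[len(JSON_PREFIX):].strip('/')
                location = S3_BASE + '/' + name + '.json'
                start_response('302 Found', [('Location', location),
                                             ('Content-Length', '0')])
                return ['']
            return self.app(environ, start_response)

    # in the GAE handler, wrap web2py's WSGI app:
    #   import gluon.main
    #   application = StaticJSONRedirect(gluon.main.wsgibase)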

I'm happy to talk more about specific experiences if there are more 
specific questions.

On Saturday, September 1, 2012 11:58:46 AM UTC-7, David Marko wrote:
>
> Hi all, I'm also curious about this. Can howesc share his experience ... or 
> others? We are planning a project with an estimated 1 million views during 
> working hours (12 hours a day). I know that there are many aspects, but 
> generally it would be encouraging to hear real-life data with architecture 
> info. How many servers do you use, do you use a round-robin proxy, etc.? ... 
