Thanks Massimo sir,

At first I was getting a mean response time of 200 ms per request (ab
-n 500 -c 20).
After bytecode-compiling the app (the "compile" link in the admin
page), the mean time per request dropped to 61 ms.
After that I set migrate=False for all tables, and the mean time came
down to 57.1 ms.
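For anyone following along, migrations can be disabled per table or globally in the model file; a minimal sketch (the table name `item` and the connection string are placeholders, not from my app):

```python
# in models/db.py
# migrate=False tells web2py not to check/alter the table schema on
# each request, which removes per-request migration overhead
db = DAL('postgres://user:password@localhost/mydb', migrate=False)  # global default
db.define_table('item',
    Field('name'),
    migrate=False)  # or set it per table
```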
I'll try caching today and post those results as well.
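For reference, the idea behind web2py's cache.ram(key, f, time_expire) is to compute f() once and reuse the result until it expires. Here is a minimal, framework-free sketch of that pattern (RamCache and cache_ram are illustrative names, not web2py's actual implementation):

```python
import time

class RamCache:
    """Sketch of a cache.ram-style store: compute once, reuse until expiry."""
    def __init__(self):
        self.storage = {}  # key -> (timestamp, value)

    def __call__(self, key, f, time_expire=300):
        now = time.time()
        entry = self.storage.get(key)
        if entry is not None and now - entry[0] < time_expire:
            return entry[1]          # still fresh: skip recomputation
        value = f()                  # the expensive work, e.g. a DB query
        self.storage[key] = (now, value)
        return value

cache_ram = RamCache()

calls = []
def expensive_query():
    calls.append(1)                  # count how often the real work runs
    return 42

a = cache_ram('answer', expensive_query, time_expire=60)
b = cache_ram('answer', expensive_query, time_expire=60)
print(a, b, len(calls))  # 42 42 1 -> second call was served from the cache
```

In an actual web2py controller the equivalent idiom, if I understand the book correctly, would be something like `return cache.ram('login_page', lambda: dict(...), time_expire=60)`.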

-----

va...@varun-laptop:~$ ab -c 20 -n 500 https://localhost/lsilab/default/login
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests


Server Software:        Apache/2.2.12
Server Hostname:        localhost
Server Port:            443
SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,1024,256

Document Path:          /lsilab/default/login
Document Length:        5611 bytes

Concurrency Level:      20
Time taken for tests:   28.633 seconds
Complete requests:      500
Failed requests:        0
Write errors:           0
Total transferred:      3005798 bytes
HTML transferred:       2805500 bytes
Requests per second:    17.46 [#/sec] (mean)
Time per request:       1145.315 [ms] (mean)
Time per request:       57.266 [ms] (mean, across all concurrent
requests)
Transfer rate:          102.52 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       41  423 151.1    450     976
Processing:   129  705 228.8    662    1589
Waiting:      129  651 215.9    609    1588
Total:        471 1128 249.1   1093    1829

Percentage of the requests served within a certain time (ms)
  50%   1093
  66%   1195
  75%   1277
  80%   1322
  90%   1502
  95%   1577
  98%   1693
  99%   1724
 100%   1829 (longest request)
va...@varun-laptop:~$


On Jan 2, 3:15 pm, waTR <r...@devshell.org> wrote:
> I agree. The bottleneck would likely be the DB, not web2py or Django
> or any other framework/language. Therefore, the key here is to use
> caching and smart db design (plus some ajax to break big DB load tasks
> down to smaller ones).
>
> On Jan 1, 6:27 pm, mdipierro <mdipie...@cs.depaul.edu> wrote:
>
>
>
> > It really depends on what "Can this handle requests from 100 users
> > every moment?" means.
> > The bottleneck is probably the db.
>
> > Before you benchmark and/or go to production make sure you:
> > - bytecode compile your app
> > - set all your models to migrate = False
>
> > Massimo
>
> > On Dec 31 2009, 9:00 pm, vvk <varunk.ap...@gmail.com> wrote:
>
> > > Hi all,
>
> > > I've to write a portal for my college in next semester. I want to
> > > discuss regarding scalability of web2py.
>
> > > Configuration of Deployment:
> > > Apache + Mod WSGI + Postgresql + https only
>
> > > Can this handle requests from 100 users every moment?
> > > What might be expected load on system given that machines here are of
> > > server class, can they handle this load, kindly suggest minimum
> > > configuration for this to work ?
> > > My application is a small one, having ten tables and conforms to 3NF,
> > > any suggestions here, regarding controllers or DB ?
> > > I'm testing my Inventory application today for scalability on a
> > > machine (AMD Athlon Processor 3000+, 512 MB RAM), will post results
> > > today.
>
> > > ----
> > > Varun
-- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To post to this group, send email to web...@googlegroups.com.
To unsubscribe from this group, send email to 
web2py+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/web2py?hl=en.
