Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-22 Thread Timothy Farrell
I said, "I have no object to gradual rollover." but meant to say "I have 
no objection to gradual rollover."


I mean that I'm misspelling words like you typically do.  It was meant 
in jest. ;-P


My mom was a stickler for proper pronunciation (being in Oklahoma you 
can see how that might be important).


But I digress: Rocket now has online documentation (though it is of 
little applicability to web2py). http://packages.python.org/rocket/


On 3/22/2010 10:37 AM, mdipierro wrote:

Not sure what that means. Hope it is good.




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-22 Thread mdipierro
Not sure what that means. Hope it is good.




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-22 Thread Timothy Farrell

*objection

Gosh Massimo, you're wearing off on me.

On 3/22/2010 9:49 AM, Timothy Farrell wrote:
I have no object to gradual rollover.  One way to satisfy all angles is 
to have HTTPS configurations default to Rocket while regular connections 
use CherryPy.  This would accomplish:


- revealing it to a smaller portion of the web2py user base at first
- removing the requirement to compile both pyOpenSSL and the ssl module





Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-22 Thread Timothy Farrell
I have no object to gradual rollover.  One way to satisfy all angles is 
to have HTTPS configurations default to Rocket while regular connections 
use CherryPy.  This would accomplish:


- revealing it to a smaller portion of the web2py user base at first
- removing the requirement to compile both pyOpenSSL and the ssl module
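For the curious, a minimal sketch of that split. This is not web2py's 
actual startup code; the import paths and constructor signatures 
(Rocket's interface tuples, CherryPy's bundled wsgiserver) are 
assumptions for illustration only:

    # Hypothetical sketch: HTTPS configurations get Rocket, plain HTTP
    # stays on CherryPy.  All signatures below are assumed, not verified.
    def serve(app, ip='127.0.0.1', port=8000,
              ssl_certificate=None, ssl_private_key=None):
        if ssl_certificate and ssl_private_key:
            from rocket import Rocket  # assumed import path
            # Rocket can wrap the listener with the stdlib ssl module,
            # so no compiled pyOpenSSL is needed.
            server = Rocket(
                interfaces=(ip, port, ssl_private_key, ssl_certificate),
                method='wsgi', app_info={'wsgi_app': app})
        else:
            from cherrypy import wsgiserver  # assumed import path
            server = wsgiserver.CherryPyWSGIServer((ip, port), app)
        server.start()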


On 3/22/2010 9:44 AM, mdipierro wrote:

I have no objection to having an option and I would take a patch in
this direction, but: 1) I prefer to have Rocket as the default (else we
will never know if there is some obscure problem with it), and 2) they
should both use ssl and not openssl so I do not have to redo the
packaging.

Right now we have one problem that needs to be fixed first. web2py.exe
-h does not work for 1.76.5.3.b




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-22 Thread mdipierro
I have no objection to having an option and I would take a patch in
this direction, but: 1) I prefer to have Rocket as the default (else we
will never know if there is some obscure problem with it), and 2) they
should both use ssl and not openssl so I do not have to redo the
packaging.
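For concreteness: the stdlib ssl module can wrap a listening socket 
directly, which is what makes the compiled pyOpenSSL dependency 
unnecessary. A minimal sketch, with placeholder key/certificate file 
names:

    import socket, ssl  # Python 2.6+ standard library only

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(('127.0.0.1', 8443))
    listener.listen(5)
    conn, addr = listener.accept()
    # 'server.key' and 'server.crt' are placeholder paths.
    tls = ssl.wrap_socket(conn, server_side=True,
                          keyfile='server.key', certfile='server.crt')
    print(tls.recv(1024))  # first bytes of the client's request
    tls.close()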

Right now we have one problem that needs to be fixed first. web2py.exe
-h does not work for 1.76.5.3.b




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-22 Thread Jonathan Lundell
On Mar 22, 2010, at 5:55 AM, Timothy Farrell wrote:

> web2py could support both, but the benefits get lost quickly.  web2py is 
> designed to be simple; asking the user to pick which bundled web server they 
> would like to use is too much, in my opinion.

No need to ask; there'd be a silent default.

I'm thinking mainly of an overlapped transition.




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-22 Thread Timothy Farrell
web2py could support both, but the benefits get lost quickly.  web2py is 
designed to be simple; asking the user to pick which bundled web server 
they would like to use is too much, in my opinion.


Short or Tall?
Caf or Decaf?
Sugar?
Milk? (steamed?)
Cinnamon?
For here or To-go?

How would you like your web2py today?






[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread mdipierro
That may be because of the extra logic for parsing HTTP headers,
looking up models, controllers, views, etc.
I do not know. It would be nice to quantify the web2py overhead.
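A back-of-envelope way to quantify it, once a bare WSGI hello world is 
measured on the same box (the bare rate below is a placeholder, not a 
measurement; the Rocket rate is Kuba's number from earlier in the 
thread):

    # Per-request framework overhead from two measured request rates.
    bare_rate = 3500.0    # req/s for a bare WSGI app -- hypothetical
    web2py_rate = 1544.2  # req/s for Rocket serving web2py (measured)
    overhead_ms = (1.0 / web2py_rate - 1.0 / bare_rate) * 1000.0
    print('web2py adds roughly %.2f ms per request' % overhead_ms)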

Massimo


On Mar 20, 8:57 pm, Kuba Kucharski  wrote:
> you expect overhead from this? ;)
>
> def benchmark2():
>     return dict(data="test")




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread Kuba Kucharski
you expect overhead from this? ;)

def benchmark2():
    return dict(data="test")




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread mdipierro
oops. Sorry. It would be interesting to know how much overhead
web2py adds.

On Mar 20, 7:44 pm, Kuba Kucharski  wrote:
> > I am assuming that in all your tests you did not use web2py. I
>
> wrong assumption. I even published my model&controller at the
> beginning of this thread.




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread Kuba Kucharski
> I am assuming that in all your tests you did not use web2py. I

wrong assumption. I even published my model & controller at the
beginning of this thread.




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread mdipierro
I am assuming that in all your tests you did not use web2py. I am
assuming you just tested some "hello world" WSGI application. You can
find gluon/sneaky.py in the web2py source, and there is a WSGI hello
world example in there.
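For reference, a minimal WSGI hello world of that kind, served here with 
the stdlib wsgiref server so it runs without web2py at all (the example 
in gluon/sneaky.py may differ):

    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        # The smallest useful WSGI app: a static 200 with a plain body.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello world']  # Python 2 era: body chunks are str

    make_server('127.0.0.1', 8000, application).serve_forever()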

Massimo

On Mar 20, 4:07 pm, Kuba Kucharski  wrote:
> ALL POWER I CAN GET FROM quad core Xeon @ 2.33GHz



Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread Kuba Kucharski
> ach!  I meant to say:  web2py.com

nice one.

yes. stability and functionality over speed. I just wanted to learn
where the limits are (and how to benchmark properly).

-- 
Kuba








Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread Kuba Kucharski
ALL POWER I CAN GET FROM quad core Xeon @ 2.33GHz

ONLY SOME STABLE RECORDS HERE:

Request rate: 929.0 req/s (1.1 ms/req) QUAD CHERRYPY
Request rate: 877.6 req/s (1.1 ms/req) QUAD ROCKET

Request rate: 1478.0 req/s (0.7 ms/req) CHERRYPY SOLO
Request rate: 1544.2 req/s (0.6 ms/req) ROCKET SOLO

QUAD SLOWER? Yes. But when I push the rate up as high as my machine
can sustain:

Request rate: 3096.9 req/s (0.3 ms/req) QUAD CHERRYPY (--rate=310)
Request rate: 2566.4 req/s (0.4 ms/req) QUAD ROCKET (--rate=260)

This is probably the limit of my hardware.
Rather unrealistic scenario. So, conclusions:

- we should not use "ab"
- use only one instance
- rocket is ok

Massimo, how to switch to Sneaky?

-- 
Kuba




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread Jonathan Lundell
On Mar 20, 2010, at 9:58 AM, Timothy Farrell wrote:

> Vasile Ermicioi put in a vote for Rocket to be included in web2py because 
> I'm in the web2py community, and there is still plenty of room for Rocket to 
> be optimized (which I noted).

I like the idea of built-in servers as plugins (not formally, but the general 
idea of supporting more than one in a simply configured way). The downside is 
that we won't have as much focused testing of any one server, but that is 
compensated for by how much easier it would be to include a new server in the 
release without running the risk of breaking existing installations.

As I've said, I don't think that ultimate performance need be a high priority 
for the built-in server; rather, ease of use and rock-solid stability are the 
priorities. And I think I like relying on the SSL package.

My inclination: enable easy server switching. Keep CherryPy the default for at 
least one more release, but make Rocket and Sneaky easy to ask for at startup. 
That'll give those of us who are interested easy access to Rocket in a 
low-risk way. And then at some point, possibly very soon, switch the default to 
Rocket, retaining an easy option for the others as a fallback.
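A sketch of what easy switching with a silent default could look like; 
the flag name and launcher functions are hypothetical stand-ins, not 
web2py's real option handling:

    import optparse

    def start_cherrypy():
        print('starting bundled CherryPy wsgiserver')  # stand-in stub

    def start_rocket():
        print('starting Rocket')  # stand-in stub

    def start_sneaky():
        print('starting Sneaky')  # stand-in stub

    SERVERS = {'cherrypy': start_cherrypy,
               'rocket': start_rocket,
               'sneaky': start_sneaky}

    parser = optparse.OptionParser()
    parser.add_option('--server', default='cherrypy',
                      choices=list(SERVERS),
                      help='built-in web server (default: %default)')
    options, args = parser.parse_args()
    SERVERS[options.server]()  # nobody is asked; the default is silent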




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread mdipierro
Thanks, this is clear. I should clarify my position on this.

I always liked CherryPy because it is known to be fast and because
many use it, so it has been well tested. The problem with CherryPy is
that its code is not as clean as Rocket's.

I also like Rocket because its code is very clean and readable and
because the developer (you) is a member of this community.

Some time ago I tried to rewrite CherryPy as Sneaky, but I did not have
time to bring it to production quality. Rocket reminds me of Sneaky in
both its goals and its design, and makes me feel I no longer need to
work on Sneaky and can remove it from web2py.

Speed is an important issue but not the only issue.

Another important issue is long-term support. Are you, Tim, committed to
supporting Rocket long term?

With the numbers I have seen I still lean towards Rocket, but I would
like to see more benchmarks with pound (and/or haproxy).

I would also like to hear more opinions from other users on this
matter.

Even if we default to one of the two, we could set up web2py to give
users a choice (at least for a while). There may be problems with
openssl vs ssl, but I think they can be resolved. Eventually I think
we had better make a choice and pack only one of the two.

Massimo

P.S. Binary web servers are not an option.

On Mar 20, 11:58 am, "Timothy Farrell"  wrote:
> Summary:

RE: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-20 Thread Timothy Farrell
Summary:

First, I'll speak in the context of a single instance of Rocket.  I'll talk 
about pound in a bit.

ApacheBench, which I used to test Rocket, unfairly accentuates the benefits of 
Rocket.  httperf allows for a much fairer test.  The httperf configuration that 
Kuba used tested a non-standard situation (while applicable to a project he's 
working on) that accentuates a known weakness of Rocket relative to CherryPy.  
Even though the single-instance test was inconclusive, the multi-instance test 
implied that Rocket would be slower in the single-instance case.

Because my tests and Kuba's tests focused on polar opposite situations, the 
numbers were different.

Nicholas Piel tested version 1.0.1, which did not include epoll support, so his 
initial conclusions, while correct at the time, are no longer accurate.

The difference in situations revolves around how many HTTP requests are 
pipelined over a single connection.  ApacheBench puts them all in a few 
connections; httperf allows this to be configured.  Kuba's benchmark settings 
put one request per connection.  A real-world setting is something around 10, 
which is what Nicholas Piel uses.
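To make the distinction concrete, the snippet below issues ten requests 
over a single HTTP/1.1 keep-alive connection, which is what httperf's 
--num-calls=10 exercises; host, port, and path are placeholders:

    import httplib  # Python 2 standard library

    conn = httplib.HTTPConnection('127.0.0.1', 8000)
    for i in range(10):
        conn.request('GET', '/')
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection can be reused
    conn.close()
    print('issued 10 requests over one connection')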

Kuba released another round of tests that follow Nicholas Piel's HTTP/1.1 tests 
(10 requests per connection).  The results showed Rocket as performing slightly 
faster.

Now, let's talk about pound.  I've not used pound for any tests before so this 
was all new information to me.  The first test showed 4 instances of Rocket 
behind pound to be slower than 4 instances of Cherrypy behind pound on a 
Quad-core machine.  There are several possible explanations for this.  All of 
the explanations require more development on Rocket to work around.  The 
difference in performance would not be a show-stopper for me, but others may 
disagree.

I've asked Kuba to retest 4xRocket vs. 4xCherrypy with the second test 
configuration.

Vasile Ermicioi put in a vote for Rocket to be included in web2py because I'm 
in the web2py community, and there is still plenty of room for Rocket to be 
optimized (which I noted).

Now you're up to date.

-tim

-Original Message-
From: "mdipierro" 
Sent: Friday, March 19, 2010 9:01pm
To: "web2py-users" 
Subject: [web2py] Re: benchmarking: rocket vs pound with four rockets

had a long day, can somebody provide an executive summary of all the
tests?


[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread mdipierro
had a long day, can somebody provide an executive summary of all the
tests?




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Timothy Farrell

Thank you Kuba.  Would you mind re-running the 4x pound test like this also?

On 3/19/2010 3:09 PM, Kuba Kucharski wrote:

One instance of each, with 10 calls per connection, as that is closer to a
real-life scenario (numbers speak for themselves):


CHERRYPY:

r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 ==uri=/vae/default/benchmark2
--num-conns=1 --num-calls=10
httperf --hog --client=0/1 --server=192.168.0.1 --port=8000 --uri=/
--send-buffer=4096 --recv-buffer=16384 --num-conns=1
--num-calls=10

Maximum connect burst length: 1

Total: connections 1 requests 10 replies 10 test-duration 67.659 s

Connection rate: 147.8 conn/s (6.8 ms/conn,<=1 concurrent connections)
Connection time [ms]: min 6.2 avg 6.8 max 10.5 median 6.5 stddev 0.2
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 10.000

Request rate: 1478.0 req/s (0.7 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 1474.7 avg 1478.0 max 1480.3 stddev 2.0 (13 samples)
Reply time [ms]: response 0.6 transfer 0.0
Reply size [B]: header 205.0 content 66.0 footer 2.0 (total 273.0)
Reply status: 1xx=0 2xx=0 3xx=10 4xx=0 5xx=0

CPU time [s]: user 25.67 system 41.99 (user 37.9% system 62.1% total 100.0%)
Net I/O: 483.5 KB/s (4.0*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0


ROCKET:

r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 ==uri=/vae/default/benchmark2
--num-conns=1 --num-calls=10
httperf --hog --client=0/1 --server=192.168.0.1 --port=8000 --uri=/
--send-buffer=4096 --recv-buffer=16384 --num-conns=1
--num-calls=10
Maximum connect burst length: 1

Total: connections 1 requests 10 replies 10 test-duration 64.760 s

Connection rate: 154.4 conn/s (6.5 ms/conn,<=1 concurrent connections)
Connection time [ms]: min 5.9 avg 6.5 max 72.7 median 6.5 stddev 1.0
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 10.000

Request rate: 1544.2 req/s (0.6 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 1526.9 avg 1544.2 max 1555.9 stddev 8.6 (12 samples)
Reply time [ms]: response 0.6 transfer 0.0
Reply size [B]: header 216.0 content 66.0 footer 0.0 (total 282.0)
Reply status: 1xx=0 2xx=0 3xx=10 4xx=0 5xx=0

CPU time [s]: user 24.18 system 40.58 (user 37.3% system 62.7% total 100.0%)
Net I/O: 521.8 KB/s (4.3*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

   


--
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To post to this group, send email to web...@googlegroups.com.
To unsubscribe from this group, send email to 
web2py+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/web2py?hl=en.



Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Kuba Kucharski
One instance of each, with 10 calls in a connection, as it is closer to a
real-life scenario:
(numbers speak for themselves)


CHERRYPY:

r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 ==uri=/vae/default/benchmark2
--num-conns=1 --num-calls=10
httperf --hog --client=0/1 --server=192.168.0.1 --port=8000 --uri=/
--send-buffer=4096 --recv-buffer=16384 --num-conns=1
--num-calls=10

Maximum connect burst length: 1

Total: connections 1 requests 10 replies 10 test-duration 67.659 s

Connection rate: 147.8 conn/s (6.8 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 6.2 avg 6.8 max 10.5 median 6.5 stddev 0.2
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 10.000

Request rate: 1478.0 req/s (0.7 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 1474.7 avg 1478.0 max 1480.3 stddev 2.0 (13 samples)
Reply time [ms]: response 0.6 transfer 0.0
Reply size [B]: header 205.0 content 66.0 footer 2.0 (total 273.0)
Reply status: 1xx=0 2xx=0 3xx=10 4xx=0 5xx=0

CPU time [s]: user 25.67 system 41.99 (user 37.9% system 62.1% total 100.0%)
Net I/O: 483.5 KB/s (4.0*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0


ROCKET:

r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 ==uri=/vae/default/benchmark2
--num-conns=1 --num-calls=10
httperf --hog --client=0/1 --server=192.168.0.1 --port=8000 --uri=/
--send-buffer=4096 --recv-buffer=16384 --num-conns=1
--num-calls=10
Maximum connect burst length: 1

Total: connections 1 requests 10 replies 10 test-duration 64.760 s

Connection rate: 154.4 conn/s (6.5 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 5.9 avg 6.5 max 72.7 median 6.5 stddev 1.0
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 10.000

Request rate: 1544.2 req/s (0.6 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 1526.9 avg 1544.2 max 1555.9 stddev 8.6 (12 samples)
Reply time [ms]: response 0.6 transfer 0.0
Reply size [B]: header 216.0 content 66.0 footer 0.0 (total 282.0)
Reply status: 1xx=0 2xx=0 3xx=10 4xx=0 5xx=0

CPU time [s]: user 24.18 system 40.58 (user 37.3% system 62.7% total 100.0%)
Net I/O: 521.8 KB/s (4.3*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Vasile Ermicioi
I would add a vote for Rocket.

A few thoughts about it:
- Rocket is developed inside our community, which means more control over it:
feedback, contributions, etc.
- it is still young, which means it will be optimized :) I believe that Tim and
others will do so
- it is one file

And even if CherryPy is only a bit faster than Rocket (but is it?), I don't
see in that a reason to stay on CherryPy - most of the time is spent not in
the web server response but in the framework and application code itself




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Kuba Kucharski
>
> > My point here was about the general web2py population rather than your
> > "thing".  No offense intended, but you have a special case.  web2py handles
> > web services but that is not its primary function.

yes, true, I was just explaining my httperf thinking


>I think Massimo wishes
> to primarily direct web2py toward the traditional "browsers requesting web
> content over persistent connections" situation.  (Massimo, as always, I'm
> open to correction.)

I think so too.
On the other hand, "@services" are a very powerful tool in web2py
and are just as important as classic web serving

-- 
Kuba




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Timothy Farrell



>> In my own test, the difference (on Windows) between 1 and 10 yields a ~2.5x
>> increase in requests per second.  I don't have a readily accessible Linux
>> right now.  Kuba, please run these numbers again with --num-calls=10.
>
> my reality is a lot of concurrent connections with only one call.
> I did num-calls=1 on purpose. I needed this test because of a "thing"
> I am building and this thing MUST work like this.
>
> Although I will try num-calls=10 as soon as I have access to my testing
> environment again

Perhaps it's important to state the context in which our benchmarks are
conducted.

>> In the bigger picture, there are some other matters to consider:
>> - Who will likely run web2py with the built-in webserver?  New users testing
>> things out or devs running relatively small jobs.
>
> This might not be true. My "thing" is not for users surfing through
> some web application... downloading... having sessions...; it is about
> voice-over-IP servers talking to my servers via XML-RPC. So I may need
> an embedded server (like Rocket or CherryPy) in production because it
> could simplify the cluster environment.

My point here was about the general web2py population rather than your 
"thing".  No offense intended, but you have a special case.  web2py 
handles web services but that is not its primary function.  I think 
Massimo wishes to primarily direct web2py toward the traditional 
"browsers requesting web content over persistent connections" 
situation.  (Massimo, as always, I'm open to correction.)

> thank you for your time, Tim, Rocket code looks really impressive

I built Rocket because I saw some deficiencies in Cherrypy.  I'm not 
expecting that web2py and Rocket will have totally compatible goals, but 
I do think that there is enough overlap for the projects to benefit from 
each other.  Beyond that, Rocket is young and there are plenty of 
optimization and tuning options yet to be explored.

-tim




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Kuba Kucharski
On Fri, Mar 19, 2010 at 2:48 PM, mdipierro  wrote:
> Can you also do me a favor? Can you benchmark sneaky.py (in web2py/
> gluon/)? In my tests it was faster than cherrypy and I thought rocket
> was an improvement over it.

ok, as soon as I get back to my testing environment again




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Kuba Kucharski
>Just looking over the httperf command, Kuba used --num-calls=1.  This would
>not be an accurate real-world test because it creates a new connection for
>every request whereas most browsers span requests over only a few connections.
>Nicholas Piel's test used --num-calls=10 for testing HTTP/1.1 servers.
>
>In my own test, the difference (on Windows) between 1 and 10 yields a ~2.5x
>increase in requests per second.  I don't have a readily accessible Linux
>right now.  Kuba, please run these numbers again with --num-calls=10.

my reality is a lot of concurrent connections with only one call.
I did num-calls=1 on purpose. I needed this test because of a "thing"
I am building and this thing MUST work like this.

Although I will try num-calls=10 as soon as I have access to my testing
environment again


> I'd be curious which version Nicholas Piel tested.  I just fixed a
> performance issue yesterday for linux.  If he tested prior to that version
> (1.0.2) then yes, it would appear much slower.

he used 1.0.1 - I am almost sure, because comments about Rocket came
on his blog entry earlier than yesterday and he was writing about
doing Rocket benchmarks in the past tense, so my conclusion/feeling is
that he tested the version with the bug.

> Some other things to consider:
> - Kuba, how many processor cores are on your test machine?  Having more
> processes than processors will hurt Rocket more than Cherrypy.

Of course: 4 processor cores. I tried 8 processes over the 4 cores
and indeed there is no gain.

> - It seems that you are testing this against web2py (notice how all the
> responses are 3xx), perhaps you should just test the servers themselves for
> now.  If that's not the case, may we see the invocation code?

Yes, although I am testing both with web2py, and the same application
for both. It is fair: I'm not comparing my benchmarks with those from
Nicholas. My "thing" will run web2py, so this is what interests me.

> In the bigger picture, there are some other matters to consider:
> - Who will likely run web2py with the build-in webserver?  New users testing
> things out or devs running relatively small jobs.

This might not be true. My "thing" is not for users surfing through
some web application... downloading... having sessions...; it is about
voice-over-IP servers talking to my servers via XML-RPC. So I may need
an embedded server (like Rocket or CherryPy) in production because it
could simplify the cluster environment.
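(For context, the shape of that load in Python 2 of the era is roughly the
sketch below; the URL path and the service method name are made-up
illustrations, not my actual code:)

# one short-lived connection per XML-RPC call, matching --num-calls=1;
# 'vae' is the app from this thread's URLs, ping() is hypothetical
import xmlrpclib
proxy = xmlrpclib.ServerProxy('http://192.168.0.1:8000/vae/default/call/xmlrpc')
print proxy.ping()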


thank you for your time, Tim, Rocket code looks really impressive

-- 
Kuba




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Timothy Farrell
Just looking over the httperf command, Kuba used --num-calls=1.  This 
would not be an accurate real-world test because it creates a new 
connection for every request whereas most browsers span requests over 
only a few connections.  Nicholas Piel's test used --num-calls=10 for 
testing HTTP/1.1 servers.


In my own test, the difference (on Windows) between 1 and 10 yields a 
~2.5x increase in requests per second.  I don't have a readily 
accessible Linux right now.  Kuba, please run these numbers again with 
--num-calls=10.
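(What --num-calls=10 exercises, spelled out as a minimal Python 2 sketch;
host, port and path are taken from Kuba's runs:)

# ten requests reusing one HTTP/1.1 keep-alive connection instead of
# reconnecting for each request, which is what --num-calls=10 does
import httplib
conn = httplib.HTTPConnection('192.168.0.1', 8000)
for _ in range(10):
    conn.request('GET', '/vae/default/benchmark2')
    conn.getresponse().read()  # drain the body so the socket can be reused
conn.close()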


-tim




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Alex Fanjul
Massimo, is there no possibility to keep both of them and select one?
Anyway, it's only a file, isn't it?

Or maybe keep them as plugins to download?
alex

On 19/03/2010 14:24, mdipierro wrote:

Clearly we have conflicting benchmarks. I like Rocket because it is
cleaner, but we need to go with the fastest. Let's wait for Tim's
response and, if there is a disagreement, I propose that a few more
people try to reproduce the benchmarks (to make sure there is not
something weird in the setup) and then we decide what to do.

Massimo



--
Alejandro Fanjul Fdez.
alex.fan...@gmail.com
www.mhproject.org




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Timothy Farrell
This is a different test than the one I presented.  The test I presented 
was run on Windows with one instance and tested with ApacheBench.  I've 
looked at httperf a little and it seems to be a more realistic test than 
ApacheBench.


Due to the nature of how Rocket handles listening sockets, it is a 
little slower at accepting connections compared to Cherrypy.  Nicholas 
Piel's test handles 10 requests per connection whereas Apachebench would 
handle 1000.  So there will be a difference by virtue of the difference 
in fresh connections.  This could explain why Rocket is slower with 4 
instances, but that being the case it should also be slower with one 
instance (even though they hit some arbitrary external wall) which is 
inconclusive at this point.


I'd be curious which version Nicholas Piel tested.  I just fixed a 
performance issue yesterday for linux.  If he tested prior to that 
version (1.0.2) then yes, it would appear much slower.


> Are these numbers consistent with Tim's numbers? Could this be due to a
> different memory usage?

Note that my tests were run on Windows.  I'm not sure what Cherrypy's 
bottleneck on Windows is, but Rocket is not subject to it on that 
platform.  Also, Rocket uses less memory (by almost 2MB) than Cherrypy 
on Windows 7.  I haven't looked at memory usage in Linux, but given 
Rocket's less-custom code-base we should see a similarly smaller memory 
footprint.


The 4-instance test is not a use-case I'd considered yet.  As 
previously mentioned, Rocket is slower at accepting connections.  If 
pound was closing the connection after every request (HTTP 1.0 behavior, 
and that of some HTTP 1.1 proxies), this could explain why Rocket comes up slower.


Some other things to consider:
- Kuba, how many processor cores are on your test machine (see the 
one-liner sketch below)?  Having more processes than processors will 
hurt Rocket more than Cherrypy.
- It seems that you are testing this against web2py (notice how all the 
responses are 3xx), perhaps you should just test the servers themselves 
for now.  If that's not the case, may we see the invocation code?
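(On the first point, the core count is one line of Python to report; a
sketch using the stdlib multiprocessing module, available since Python 2.6:)

# how many cores the test box has; compare against the number of
# server processes running behind pound
import multiprocessing
print multiprocessing.cpu_count()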


In the bigger picture, there are some other matters to consider:
- Who will likely run web2py with the built-in webserver?  New users 
testing things out or devs running relatively small jobs.
- What platforms will those run on?  Windows in the former.  The latter 
is anyone's guess.  (Let's not start a rabbit-trail about which 
operating system is better, just consider which one most students run.)


So here are some things to consider in this situation:
- Rocket measures (not horribly) slower than Cherrypy on Linux with 4 
instances running.  How common of a situation is this?
- Rocket is not affected by a major concurrency issue with 
single-instance Cherrypy on Windows.


I think going forward we should figure out which one is truly faster as 
a single-instance on Linux.  I wouldn't be surprised if Rocket is 
slightly slower than Cherrypy but it should not be vastly slower.  The 
goal of Rocket was not to be faster than Cherrypy but to be more 
concurrent.  So far that's true for Windows and inconclusive on Linux.  
I don't have access to a Mac, but I would be surprised if Macs performed 
differently than Linux.


Anyone know how to identify that wall that both servers are hitting on 
Linux?


-tim




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread mdipierro
Can you also do me a favor? Can you benchmark sneaky.py (in web2py/
gluon/)? In my tests it was faster than cherrypy and I thought rocket
was an improvement over it.

On Mar 19, 8:43 am, Kuba Kucharski  wrote:
> I like Rocket too. I would like it to be better than Cherrypy
>
> --
> Kuba




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Kuba Kucharski
I like Rocket too. I would like it to be better than Cherrypy

-- 
Kuba




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread mdipierro
Clearly we have conflicting benchmarks. I like Rocket because it is
cleaner, but we need to go with the fastest. Let's wait for Tim's
response and, if there is a disagreement, I propose that a few more
people try to reproduce the benchmarks (to make sure there is not
something weird in the setup) and then we decide what to do.

Massimo





Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-19 Thread Kuba Kucharski
>Are these numbers consistent with Tim's numbers? Could this be due to a
>different memory usage?

1. Tim?
2. I have a lot of free memory while testing


I wrote an email to the author of the blog entry about WSGI webserver
benchmarks - Nicholas Piël
http://nichol.as/benchmark-of-python-web-servers

In short he says:

>make sure you do not use ab
yes, in my tests I use httperf

>make sure you are running from other machine with limits also tweaked
this is how I have it set up

>use recompiled httperf
done already

this also comes from him:

"I did a quick benchmark after being pointed to Rocket and I could not
see the same performance advantage for Rocket over CherryPy, more the
opposite."



-- 
Kuba




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread mdipierro
Are these numbers consistent with Tim's numbers? Could this be due to a
different memory usage?


Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread Kuba Kucharski
the last one is the doubled Rocket solo run, without a header..

--
Kuba




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread Kuba Kucharski
I've changed the methodology a bit, so I repeat my measurements for
ROCKET at the end of the file.

Methodology: increase the rate till errors show

@Massimo
as you can see, quad CherryPy is faster than quad Rocket, but when you
look closer at the "SOLO" comparison you can see that both servers are
hitting the SAME WALL (880 req/sec); this must be some I/O setting.



requests per second:

          ROCKET    CHERRYPY
SOLO         880         880
QUAD        1881        2475


I can think of these:

sysctl -w fs.file-max=128000
sysctl -w net.ipv4.tcp_keepalive_time=300
sysctl -w net.core.somaxconn=25
sysctl -w net.ipv4.tcp_max_syn_backlog=42500
sysctl -w net.core.netdev_max_backlog=42500
ulimit -n 10240
/bin/echo 0 > /proc/sys/net/ipv4/tcp_syncookies
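One crude way to check whether that wall is the client burning ephemeral
ports rather than a server limit is to count TIME_WAIT sockets on the load
generator during a run; a Linux-only Python sketch:

# state code '06' in /proc/net/tcp is TIME_WAIT; a count climbing into
# the tens of thousands points at ephemeral-port exhaustion
time_wait = 0
for line in open('/proc/net/tcp').readlines()[1:]:  # skip the header row
    if line.split()[3] == '06':
        time_wait += 1
print time_wait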


r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8080 ==uri=/vae/default/benchmark2 --ra 2500
--num-conns=1

4 x CHERRYPY via POUND

Connection rate: 2475.3 conn/s (0.4 ms/conn, <=30 concurrent connections)
Connection time [ms]: min 0.9 avg 5.8 max 2998.4 median 4.5 stddev 43.4
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 1.000

Request rate: 2475.3 req/s (0.4 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (0 samples)
Reply time [ms]: response 5.6 transfer 0.2
Reply size [B]: header 205.0 content 66.0 footer 2.0 (total 273.0)
Reply status: 1xx=0 2xx=0 3xx=1 4xx=0 5xx=0

CPU time [s]: user 1.79 system 2.25 (user 44.3% system 55.7% total 100.0%)
Net I/O: 809.8 KB/s (6.6*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0


CHERRYPY SOLO

r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 ==uri=/vae/default/benchmark2 --ra 880
--num-conns=1

Connection rate: 880.0 conn/s (1.1 ms/conn, <=12 concurrent connections)
Connection time [ms]: min 0.7 avg 2.1 max 210.1 median 0.5 stddev 3.9
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 1.000

Request rate: 880.0 req/s (1.1 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 880.0 avg 880.0 max 880.1 stddev 0.1 (2 samples)
Reply time [ms]: response 1.9 transfer 0.1
Reply size [B]: header 205.0 content 66.0 footer 2.0 (total 273.0)
Reply status: 1xx=0 2xx=0 3xx=1 4xx=0 5xx=0

CPU time [s]: user 5.84 system 5.52 (user 51.4% system 48.5% total 99.9%)
Net I/O: 287.9 KB/s (2.4*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0


#4xROCKET via POUND

r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8080 ==uri=/vae/default/benchmark2 --ra 1881
--num-conns=1
httperf --hog --client=0/1 --server=192.168.0.1 --port=8080 --uri=/
--rate=1881 --send-buffer=4096 --recv-buffer=16384 --num-conns=1
--num-calls=1
Maximum connect burst length: 14

Total: connections 1 requests 1 replies 1 test-duration 5.319 s

Connection rate: 1879.9 conn/s (0.5 ms/conn, <=24 concurrent connections)
Connection time [ms]: min 0.9 avg 4.3 max 3008.3 median 3.5 stddev 42.7
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 1.000

Request rate: 1879.9 req/s (0.5 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 1879.3 avg 1879.3 max 1879.3 stddev 0.0 (1 samples)
Reply time [ms]: response 4.1 transfer 0.1
Reply size [B]: header 211.0 content 66.0 footer 0.0 (total 277.0)
Reply status: 1xx=0 2xx=0 3xx=1 4xx=0 5xx=0

CPU time [s]: user 2.54 system 2.76 (user 47.7% system 52.0% total 99.6%)
Net I/O: 626.0 KB/s (5.1*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0


r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 ==uri=/vae/default/benchmark2 --ra 880
--num-conns=1
httperf --hog --client=0/1 --server=192.168.0.1 --port=8000 --uri=/
--rate=880 --send-buffer=4096 --recv-buffer=16384 --num-conns=1
--num-calls=1
Maximum connect burst length: 5

Total: connections 1 requests 1 replies 1 test-duration 11.364 s

Connection rate: 880.0 conn/s (1.1 ms/conn, <=9 concurrent connections)
Connection time [ms]: min 0.4 avg 1.3 max 12.0 median 0.5 stddev 1.4
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 1.000

Request rate: 880.0 req/s (1.1 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 880.0 avg 880.0 max 880.1 stddev 0.1 (2 samples)
Reply time [ms]: response 1.2 transfer 0.1
Reply size [B]: header 205.0 content 66.0 footer 2.0 (total 273.0)
Reply status: 1xx=0 2xx=0 3xx=1 4xx=0 5xx=0

CPU time [s]: user 6.24 system 5.12 (user 55.0% system 45.0% total 100.0%)
Net I/O: 287.9 KB/s (2.4*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0



[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread mdipierro
Because they are not pure Python.

On Mar 18, 7:51 pm, Vasile Ermicioi  wrote:
> just curious: why not use an existing (and fast) Python web server like
> tornado or fapws?




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread Vasile Ermicioi
just curious: why not use an existing (and fast) Python web server like
tornado or fapws?




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread mdipierro
Did you try cherrypy? I think I have had this problem with it before
but never quantified it.




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread Kuba Kucharski
with 4x Rocket via Pound all is OK; with Rocket solo I get 4703
fd-unavail errors (in httperf this should never happen, and it renders
these benchmarks useless) per 1 connections. I think this might be
about Linux tweaking. Does ANYONE have more experience with setting
the sysctl environment for benchmarking?
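(fd-unavail means the httperf process itself ran out of file descriptors;
besides ulimit -n, a test harness can raise the limit from Python before
opening sockets. A sketch using the stdlib resource module:)

# raise this process's open-file limit to the hard maximum,
# the in-process equivalent of `ulimit -n`
import resource
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print soft, hard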

#4xROCKET via POUND
r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8080 ==uri=/vae/default/benchmark2 --ra 1881
--num-conns=1
httperf --hog --client=0/1 --server=192.168.0.1 --port=8080 --uri=/
--rate=1881 --send-buffer=4096 --recv-buffer=16384 --num-conns=1
--num-calls=1
Maximum connect burst length: 14

Total: connections 1 requests 1 replies 1 test-duration 5.319 s

Connection rate: 1879.9 conn/s (0.5 ms/conn, <=24 concurrent connections)
Connection time [ms]: min 0.9 avg 4.3 max 3008.3 median 3.5 stddev 42.7
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 1.000

Request rate: 1879.9 req/s (0.5 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 1879.3 avg 1879.3 max 1879.3 stddev 0.0 (1 samples)
Reply time [ms]: response 4.1 transfer 0.1
Reply size [B]: header 211.0 content 66.0 footer 0.0 (total 277.0)
Reply status: 1xx=0 2xx=0 3xx=1 4xx=0 5xx=0

CPU time [s]: user 2.54 system 2.76 (user 47.7% system 52.0% total 99.6%)
Net I/O: 626.0 KB/s (5.1*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

#ROCKET SOLO
r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 ==uri=/vae/default/benchmark2 --ra 1881
--num-conns=1
httperf --hog --client=0/1 --server=192.168.0.1 --port=8000 --uri=/
--rate=1881 --send-buffer=4096 --recv-buffer=16384 --num-conns=1
--num-calls=1
Maximum connect burst length: 10

Total: connections 5297 requests 5297 replies 5296 test-duration 11.308 s

Connection rate: 468.4 conn/s (2.1 ms/conn, <=1022 concurrent connections)
Connection time [ms]: min 1.5 avg 1362.1 max 9005.9 median 12.5 stddev 2293.7
Connection time [ms]: connect 1156.2
Connection length [replies/conn]: 1.000

Request rate: 468.4 req/s (2.1 ms/req)
Request size [B]: 64.0

Reply rate [replies/s]: min 239.8 avg 522.7 max 805.5 stddev 400.0 (2 samples)
Reply time [ms]: response 205.4 transfer 0.2
Reply size [B]: header 211.0 content 66.0 footer 0.0 (total 277.0)
Reply status: 1xx=0 2xx=0 3xx=5296 4xx=0 5xx=0

CPU time [s]: user 1.41 system 9.89 (user 12.5% system 87.5% total 99.9%)
Net I/O: 156.0 KB/s (1.3*10^6 bps)

Errors: total 4704 client-timo 0 socket-timo 0 connrefused 0 connreset 1
Errors: fd-unavail 4703 addrunavail 0 ftab-full 0 other 0




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread Timothy Farrell

1.0.2 is out. Go get it!




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread mdipierro
From https://launchpad.net/rocket the second green button on the right
is Rocket-mono-xxx.zip.
Unzip it. You get rocket.py. Move it into web2py/gluon/.
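(A quick sanity check that the new file is the one being imported; a Python
sketch run from the web2py folder. Whether rocket.py exposes a VERSION
attribute is an assumption:)

# confirm which rocket.py gets imported and, if present, its version
import sys
sys.path.insert(0, 'gluon')
import rocket
print rocket.__file__
print getattr(rocket, 'VERSION', 'no VERSION attribute')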

web2py trunk already uses 1.0.1, so we have to wait for Tim to post the
new one.

Massimo

On Mar 18, 10:51 am, Kuba Kucharski  wrote:
> @Tim
>
> do you have the fix already in launchpad? if yes can you tell me how
> to replace rocket with the newest one inside web2py?
>
> --
> Kuba




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-18 Thread mdipierro
Yes.

We already have it in trunk and you can try it here:

http://web2py.com/examples/static/1.76.5.2.b/web2py_src.zip
http://web2py.com/examples/static/1.76.5.2.b/web2py_win.zip
http://web2py.com/examples/static/1.76.5.2.b/web2py_osx.zip

Massimo


On Mar 18, 8:43 am, Vasile Ermicioi  wrote:
> What this thread means: will we have a new wsgi server for web2py (instead
> of cherrypy) ?




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-17 Thread mdipierro
I noticed that many VPS hosts provide HAProxy these days.

On Mar 17, 9:26 pm, Kuba Kucharski  wrote:
> http://web2py.com/book/default/section/11/12?search=pound
> http://www.apsis.ch/pound/
>
> kernel is 2.6.31-14-server, not PAE of course.. sorry for the mistakes.
> I didn't mention it, but I had also executed some sysctl and ulimit
> tweaking before benchmarking.
>
> I ran the tests again with the same configuration, very carefully, and
> the result is the same.
>
> @Alex
> I run all web2py instances on the same web2py folder for the purpose
> of this test




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-17 Thread Kuba Kucharski
> I was going to say "extend/include", not import

and even this is not true, as I see now, for a small layout




Re: [web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-17 Thread Kuba Kucharski
> Did you "compile" the app before running the benchmarks?
yes

>
> Can you say more about "The most important thing: effects depend much
> on what you import. "
> imports should be cached and should not make a difference.

actually they don't; it is late, I was going to say "extend/include", not import




[web2py] Re: benchmarking: rocket vs pound with four rockets

2010-03-17 Thread mdipierro
Did you "compile" the app before running the benchmarks?
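(For reference, the admin "compile" action can also be driven from code; a
sketch assuming web2py's stock gluon.compileapp helper, with the app name
taken from Kuba's URLs:)

# bytecode-compile an application the way admin's "compile" button does;
# run from the web2py folder
from gluon.compileapp import compile_application
compile_application('applications/vae')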

Can you say more about "The most important thing: effects depend much
on what you import."?
Imports should be cached and should not make a difference.

Massimo

On Mar 17, 8:20 pm, Kuba Kucharski  wrote:
> This is probably obvious but I decided to try this myself:
>
> Should one run web2py with Pound on a multi-core server?
>
> probably yes if you deal with concurrent connections..
>
> This is my config:
>
> 2.6.31-19-generic-pae
> Intel(R) Xeon(R) CPU E5410  @ 2.33GHz
> Ubuntu 64bit
> one processor with 4 cores
>
> The disk is Serial ATA, with performance around this:
>
> #/sbin/hdparm -t /dev/sda
>
> /dev/sda:
> Timing buffered disk reads:  268 MB in  3.01 seconds =  88.96 MB/sec
>
> This is slow. This is one SATA disk.
>
> I used mysql-5.1 for the "writing" tests. I ran it on the same machine,
> along with POUND.
>
> "ab" command runs on my 32bit ubuntu laptop with 2.5 Ghz Core2Duo.
> Server and laptop are connected via Gigabit ethernet directly.
>
> I ran 12 concurrent connections with 2000 calls, and then a
> single-connection test with 2000 calls, for every case. The application
> was compiled, and I set migrate=False.
>
> READING:
> 
>
> ROCKET :
> 
> r...@kubatron:/home/kuba/httperf-0.9.0/src# ab -n 2000 -c 12 http://192.168.0.1:8000/vae/default/benchmark2
>
> Concurrency Level:      12
> Time taken for tests:   15.441 seconds
> Complete requests:      2000
> Failed requests:        0
> Write errors:           0
> Total transferred:      852000 bytes
> HTML transferred:       236000 bytes
> Requests per second:    129.52 [#/sec] (mean)
> Time per request:       92.647 [ms] (mean)
> Time per request:       7.721 [ms] (mean, across all concurrent requests)
> Transfer rate:          53.88 [Kbytes/sec] received
>
> #for concurrency level: 1
>
> #Requests per second:    157.27 [#/sec] (mean)
> #Time per request:       6.359 [ms] (mean)
> #Time per request:       6.359 [ms] (mean, across all concurrent requests)
> #Transfer rate:          65.43 [Kbytes/sec] received
>
> POUND + 4 x ROCKET :
> 
>
> r...@kubatron:/home/kuba/httperf-0.9.0/src# ab -n 2000 -c 12 http://192.168.0.1:8080/vae/default/benchmark2
>
> Concurrency Level:      12
> Time taken for tests:   6.828 seconds
> Complete requests:      2000
> Failed requests:        0
> Write errors:           0
> Total transferred:      852000 bytes
> HTML transferred:       236000 bytes
> Requests per second:    292.91 [#/sec] (mean)
> Time per request:       40.968 [ms] (mean)
> Time per request:       3.414 [ms] (mean, across all concurrent requests)
> Transfer rate:          121.86 [Kbytes/sec] received
>
> This is faster: more than twice the throughput (292.91 vs. 129.52
> requests per second)!
>
> #for concurrency level: 1
>
> #Requests per second:    129.28 [#/sec] (mean)
> #Time per request:       7.735 [ms] (mean)
> #Time per request:       7.735 [ms] (mean, across all concurrent requests)
> #Transfer rate:          53.78 [Kbytes/sec] received
>
> WRITING (MySQL InnoDB) -> to see the write bottleneck
> =====================================================
>
> ROCKET :
>
> r...@kubatron:/home/kuba/httperf-0.9.0/src# ab -n 2000 -c 12 http://192.168.0.1:8000/vae/default/benchmark
>
> Concurrency Level:      12
> Time taken for tests:   23.466 seconds
> Complete requests:      2000
> Failed requests:        0
> Write errors:           0
> Total transferred:      858429 bytes
> HTML transferred:       242121 bytes
> Requests per second:    85.23 [#/sec] (mean)
> Time per request:       140.798 [ms] (mean)
> Time per request:       11.733 [ms] (mean, across all concurrent requests)
> Transfer rate:          35.72 [Kbytes/sec] received
>
> #for concurrency level: 1
>
> #Requests per second:    15.69 [#/sec] (mean)
> #Time per request:       63.735 [ms] (mean)
> #Time per request:       63.735 [ms] (mean, across all concurrent requests)
> #Transfer rate:          6.57 [Kbytes/sec] received
>
> POUND + 4 x ROCKET :
> 
>
> r...@kubatron:/home/kuba/httperf-0.9.0/src# ab -n 2000 -c 12 http://192.168.0.1:8080/vae/default/benchmark
>
> Concurrency Level:      12
> Time taken for tests:   17.797 seconds
> Complete requests:      2000
> Failed requests:        0
> Write errors:           0
> Total transferred:      858308 bytes
> HTML transferred:       242000 bytes
> Requests per second:    112.38 [#/sec] (mean)
> Time per request:       106.783 [ms] (mean)
> Time per request:       8.899 [ms] (mean, across all concurrent requests)
> Transfer rate:          47.10 [Kbytes/sec] received
>
> This is faster too (112.38 vs. 85.23 requests per second, roughly a
> 1.3x gain).
>
> #for concurrency level: 1
>
> #Requests per second:    15.27 [#/sec] (mean)
> #Time per request:       65.468 [ms] (mean)
> #Time per request:       65.468 [ms] (mean, across all concurrent requests)
> #Transfer rate:          6.40 [Kbytes/sec] received
>
> The model is:
> -------------
>
> # yes, I need Service in my other controllers (XML-RPC)
> from gluon.tools import Service
>
> db = DAL('mysql://root:passw...@localhost/vae2')