Yes, I'll be here for the foreseeable future, but Yarko's philosophy is much better. I've designed Rocket with a liberal MIT license and clean, readable code so that it is easily maintainable. My best wishes go to anyone trying to maintain CherryPy. I've studied its code and some aspects of it are still a mystery to me.

Rocket comes with:
- documentation
- cleanly separated modules
- fewer comments overall, but also fewer comments needed, IMHO
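
Since we're on maintainability: Rocket's WSGI interface is small enough to
show in a few lines. Here is a minimal sketch, assuming the Rocket 1.x
constructor (interfaces, method, app_info) - double-check against the
bundled documentation for your version:

    # Minimal Rocket "hello world" -- a sketch assuming the Rocket 1.x API.
    # demo_app is the stdlib's sample WSGI app; any WSGI callable works here.
    from wsgiref.simple_server import demo_app
    from rocket import Rocket

    server = Rocket(interfaces=('127.0.0.1', 8000),   # (ip, port) to listen on
                    method='wsgi',                    # serve a WSGI application
                    app_info={'wsgi_app': demo_app})  # the WSGI callable to run
    server.start()  # blocks until interrupted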

-tim

On 3/20/2010 4:50 PM, Yarko Tymciurak wrote:
On Mar 20, 12:36 pm, mdipierro <mdipie...@cs.depaul.edu> wrote:
Thanks, this is clear. I should clarify my position on this.

I always liked CherryPy because it is known to be fast and because, with
so many people using it, it has been well tested. The problem with
CherryPy is that its code is not as clean as Rocket's.

I also like Rocket because its code is very clean and readable and
because the developer (you) is a member of this community.

Some time ago I tried to rewrite CherryPy as sneaky, but I did not have
time to bring it to production quality. Rocket reminds me of sneaky in
both goals and design, and it makes me feel I no longer need to work on
sneaky and can remove it from web2py.

Speed is an important issue but not the only issue.

Another important issue is long-term support. Are you, Tim, committed to
supporting Rocket long term?
I would chime in with two observations / agreements:

* Performance is NOT only about speed;
   -- remember when web2py.py used only CherryPy, and people got
truncated files (web2py-distro.zip) because CherryPy would get into a
state of dropping connections?  It varied with the client, and I would
expect (as HTML5 takes root and browser engines update) this to rear
its head again.

Yeah, sure - I want to be able to have a web2py webstore or game
serving thousands of connections... maybe... BUT I also want to be
able to have a community website - church, social service agency,
perhaps government - and I want to be sure that their streaming
sermons, or huge podcast uploads, or those government drafts of huge
bills up for a vote ("How many pages did you say that was???") - that
those WILL work... or live video feeds... or live development /
collaborative / realtime interaction....

So there are: number of connections, size of transfer (reliable
large-item transactions), and real-time responsiveness.  And
security... and...

Well - for deployed solutions, we have "compiled" solutions.  So maybe
this is a moot point.  But a broader variety of testing is on point.  I
think - even as you decide to support one (and I am _all_ for "from
our community" solutions!) - having a pluggable architecture will HELP
maintenance, i.e. make it _really_ easy to continue performance
testing, and meaningful comparison tests: I am all for this!  (but
still choose to support one).
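
To make "pluggable" concrete, here is a rough sketch of what such a
layer could look like.  The adapter functions and the exact Rocket /
CherryPy constructor details are my assumptions for illustration, not
web2py's actual API:

    # Hypothetical pluggable-server layer: every adapter exposes the same
    # call signature, so swapping servers (or benchmarking several) is a
    # one-line change at the call site.

    def rocket_server(app, ip, port):
        from rocket import Rocket                # assumed Rocket 1.x API
        Rocket((ip, port), 'wsgi', {'wsgi_app': app}).start()

    def cherrypy_server(app, ip, port):
        from cherrypy import wsgiserver          # CherryPy 3.x WSGI server
        wsgiserver.CherryPyWSGIServer((ip, port), app).start()

    SERVERS = {'rocket': rocket_server, 'cherrypy': cherrypy_server}

    def serve(name, app, ip='127.0.0.1', port=8000):
        SERVERS[name](app, ip, port)             # e.g. serve('rocket', my_app)

Comparison testing then reduces to looping over SERVERS.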

As for "long term support":  1] with plugable testing, this is less
critical (accidents happen, people go away, and sooner or later you
_have_ to make changes / adapt);   [2] for volunteer work, all best
intentions, all commitments change....

Summary: test different things; don't worry about the decision (and
make pluggability part of both the maintenance/testing side and the
"insurance" side of this).

- Yarko
With the numbers I have seen I still lean towards Rocket, but I would
like to see more benchmarks with pound (and/or haproxy).

I would also like to hear more opinions from other users on this
matter.

Even if we default to one of the two, we could set up web2py to give
users a choice (at least for a while). There may be problems with
openssl vs ssl, but I think they can be resolved. Eventually I think
we had better make a choice and ship only one of the two.
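
On the openssl-vs-ssl point: if I recall correctly, Rocket relies on
the stdlib ssl module (Python 2.6+) while the CherryPy server of that
era leaned on pyOpenSSL, so a build that ships both would want to
detect what is available.  A trivial sketch:

    # Feature-detect the two SSL stacks; which bundled server can offer
    # HTTPS depends on which of these imports succeeds.
    try:
        import ssl                  # stdlib (Python 2.6+), used by Rocket
        HAS_STDLIB_SSL = True
    except ImportError:
        HAS_STDLIB_SSL = False

    try:
        from OpenSSL import SSL     # pyOpenSSL, used by CherryPy's wsgiserver
        HAS_PYOPENSSL = True
    except ImportError:
        HAS_PYOPENSSL = False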

Massimo

P.S. Binary web servers are not an option.

On Mar 20, 11:58 am, "Timothy Farrell" <tfarr...@swgen.com> wrote:

Summary:

First, I'll speak in the context of a single instance of Rocket.  I'll
talk about pound in a bit.
ApacheBench, which I used to test Rocket, unfairly accentuates the
benefits of Rocket.  httperf allows for a much fairer test.  The
httperf configuration that Kuba used tested a non-standard situation
(though applicable to a project he's working on) that accentuates a
known weakness of Rocket relative to CherryPy.  Even though the
single-instance test was inconclusive, the multi-instance test implied
that Rocket would be slower in the single instance.

Because my tests and Kuba's tests focused on polar-opposite
situations, the numbers were different.
Nicholas Piel tested version 1.0.1, which did not include epoll
support, so his initial conclusions, while correct at the time, are no
longer accurate.
The difference in situations revolves around how many HTTP requests
are pipelined over a single connection.  ApacheBench puts them all in
a few connections; httperf allows configuring this.  Kuba's benchmark
settings put one request per connection.  A real-world setting is
something around 10, which is what Nicholas Piel uses.
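
To see what "N requests per connection" means in code, here is a rough
stdlib illustration of what httperf does with --num-calls=10 over a
single keep-alive connection (host and path are placeholders):

    import httplib  # Python 2 stdlib; http.client in Python 3

    # One connection, ten sequential requests over it -- roughly what
    # httperf --num-conns=1 --num-calls=10 exercises, minus the timing.
    conn = httplib.HTTPConnection('192.168.0.1', 8000)
    for _ in range(10):
        conn.request('GET', '/')
        response = conn.getresponse()
        response.read()  # drain the body so the connection can be reused
    conn.close()
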
Kuba released another round of tests that follow Nicholas Piel's
HTTP/1.1 tests (10 requests per connection).  The results showed
Rocket performing slightly faster.
Now, let's talk about pound.  I had not used pound for any tests
before, so this was all new information to me.  The first test showed
4 instances of Rocket behind pound to be slower than 4 instances of
CherryPy behind pound on a quad-core machine.  There are several
possible explanations for this, all of which would require more
development on Rocket to work around.  The difference in performance
would not be a show-stopper for me, but others may disagree.

I've asked Kuba to retest 4x Rocket vs. 4x CherryPy with the second
test configuration.
Vasile Ermicioi put in a vote for Rocket to be included in web2py
because I'm in the web2py community and because there is still plenty
of room for Rocket to be optimized (which I noted).
Now you're up-to-date.
-tim
-----Original Message-----
From: "mdipierro"<mdipie...@cs.depaul.edu>
Sent: Friday, March 19, 2010 9:01pm
To: "web2py-users"<web2py@googlegroups.com>
Subject: [web2py] Re: benchmarking: rocket vs pound with four rockets
Had a long day - can somebody provide an executive summary of all the
tests?
On Mar 19, 3:33 pm, Timothy Farrell <tfarr...@swgen.com> wrote:
Thank you, Kuba.  Would you mind re-running the 4x pound test like this also?
On 3/19/2010 3:09 PM, Kuba Kucharski wrote:
One instance of each, with 10 calls per connection, as that is closer
to a real-life scenario (numbers speak for themselves):
CHERRYPY:
r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 --uri=/vae/default/benchmark2
--num-conns=10000 --num-calls=10
httperf --hog --client=0/1 --server=192.168.0.1 --port=8000 --uri=/
--send-buffer=4096 --recv-buffer=16384 --num-conns=10000
--num-calls=10
Maximum connect burst length: 1
Total: connections 10000 requests 100000 replies 100000 test-duration 67.659 s
Connection rate: 147.8 conn/s (6.8 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 6.2 avg 6.8 max 10.5 median 6.5 stddev 0.2
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 10.000
Request rate: 1478.0 req/s (0.7 ms/req)
Request size [B]: 64.0
Reply rate [replies/s]: min 1474.7 avg 1478.0 max 1480.3 stddev 2.0 (13 samples)
Reply time [ms]: response 0.6 transfer 0.0
Reply size [B]: header 205.0 content 66.0 footer 2.0 (total 273.0)
Reply status: 1xx=0 2xx=0 3xx=100000 4xx=0 5xx=0
CPU time [s]: user 25.67 system 41.99 (user 37.9% system 62.1% total 100.0%)
Net I/O: 483.5 KB/s (4.0*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
ROCKET:
r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server
192.168.0.1 --port=8000 --uri=/vae/default/benchmark2
--num-conns=10000 --num-calls=10
httperf --hog --client=0/1 --server=192.168.0.1 --port=8000 --uri=/
--send-buffer=4096 --recv-buffer=16384 --num-conns=10000
--num-calls=10
Maximum connect burst length: 1
Total: connections 10000 requests 100000 replies 100000 test-duration 64.760 s
Connection rate: 154.4 conn/s (6.5 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 5.9 avg 6.5 max 72.7 median 6.5 stddev 1.0
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 10.000
Request rate: 1544.2 req/s (0.6 ms/req)
Request size [B]: 64.0
Reply rate [replies/s]: min 1526.9 avg 1544.2 max 1555.9 stddev 8.6 (12 samples)
Reply time [ms]: response 0.6 transfer 0.0
Reply size [B]: header 216.0 content 66.0 footer 0.0 (total 282.0)
Reply status: 1xx=0 2xx=0 3xx=100000 4xx=0 5xx=0
CPU time [s]: user 24.18 system 40.58 (user 37.3% system 62.7% total 100.0%)
Net I/O: 521.8 KB/s (4.3*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
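
For reference, the headline request rates above work out to roughly a
4.5% edge for Rocket in this particular run:

    # Request-rate comparison taken from the two httperf runs above.
    cherrypy_rps = 1478.0
    rocket_rps = 1544.2
    speedup = (rocket_rps / cherrypy_rps - 1) * 100
    print('Rocket is %.1f%% faster in this run' % speedup)  # -> 4.5%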