Now you're up to date.
-tim
-Original Message-
From: mdipierro mdipie...@cs.depaul.edu
Sent: Friday, March 19, 2010 9:01pm
To: web2py-users web2py@googlegroups.com
Subject: [web2py] Re: benchmarking: rocket vs pound with four rockets
On Mar 22, 2010, at 5:55 AM, Timothy Farrell wrote:
web2py could support both but the benefits get lost quickly. web2py is
designed to be simple, asking the user to pick which bundled web server they
would like to use is too much in my opinion.
No need to ask; there'd be a silent default.
*objection
Gosh Massimo, you're wearing off on me.
On 3/22/2010 9:49 AM, Timothy Farrell wrote:
I have no object to gradual rollover. One way that could satisfy from
all angles is to have HTTPS configurations default to use Rocket while
regular connections use Cherrypy. This would
I said, I have no object to gradual rollover. but meant to say I have
no objection to gradual rollover.
I mean that I'm misspelling words like you typically do. It was meant
in jest. ;-P
My mom was a stickler for proper pronunciation (being in Oklahoma you
can see how that might be
-Original Message-
From: mdipierro mdipie...@cs.depaul.edu
Sent: Friday, March 19, 2010 9:01pm
To: web2py-users web2py@googlegroups.com
Subject: [web2py] Re: benchmarking: rocket vs pound with four rockets
Had a long day; can somebody provide an executive summary of all the
tests?
On Mar 19, 3
On Mar 20, 2010, at 9:58 AM, Timothy Farrell wrote:
Vasile Ermicioi put in a vote for Rocket to be included in web2py because
I'm in the web2py community and there is still plenty of room for Rocket to
be optimized (which I noted).
I like the idea of built-in servers as plugins (not
ALL POWER I CAN GET FROM quad core Xeon @ 2.33GHz
ONLY SOME STABLE RECORDS HERE:
Request rate: 929.0 req/s (1.1 ms/req) QUAD CHERRYPY
Request rate: 877.6 req/s (1.1 ms/req) QUAD ROCKET
Request rate: 1478.0 req/s (0.7 ms/req) CHERRYPY SOLO
Request rate: 1544.2 req/s (0.6 ms/req) ROCKET SOLO
ach! I meant to say: web2py.com
nice one.
Yes, stability and functionality over speed. I just wanted to learn
where the borders are (and how to benchmark properly).
--
Kuba
--
You received this message because you are subscribed to the Google Groups
web2py-users group.
To post to
Do you expect overhead from this? ;)

def benchmark2():
    # 'test' is presumably defined elsewhere (e.g. in a model file)
    return dict(data=test)
I like Rocket too. I would like it to be better than Cherrypy
--
Kuba
This is a different test than the one I presented. The test I presented
was run on Windows with one instance and tested with ApacheBench. I've
looked at httperf a little and it seems to be a more realistic test than
ApacheBench.
Due to the nature of how Rocket handles listening sockets, it
Massimo, is there no possibility to keep both and select one?
Anyway, it's only a file, isn't it?
Or maybe keep them as plugins to download?
alex
On 19/03/2010 14:24, mdipierro wrote:
Clearly we have conflicting benchmarks. I like Rocket because it is
cleaner but we need to go with the
Just looking over the httperf command, Kuba used --num-calls=1. This
would not be an accurate real-world test because it creates a new
connection for every request, whereas most browsers span requests over
only a few connections. Nicholas Piel's test used --num-calls=10 for
testing HTTP/1.1.
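For reference, the difference is just the --num-calls flag; a hypothetical pair of invocations against the server from this thread (the --num-conns values here are my assumption, chosen to keep total requests comparable) would look like:

```shell
# One connection per request (no keep-alive) -- what --num-calls=1 measures:
httperf --hog --server 192.168.0.1 --port=8000 --uri=/vae/default/benchmark2 \
        --num-conns=10000 --num-calls=1

# Ten keep-alive requests per connection -- closer to real browser behavior:
httperf --hog --server 192.168.0.1 --port=8000 --uri=/vae/default/benchmark2 \
        --num-conns=1000 --num-calls=10
```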
On Fri, Mar 19, 2010 at 2:48 PM, mdipierro mdipie...@cs.depaul.edu wrote:
Can you also do me a favor? Can you benchmark sneaky.py (in
web2py/gluon/)? In my tests it was faster than cherrypy, and I thought
rocket was an improvement over it.
ok, as soon as I get back to my testing environment
snip
In my own test, the difference (on Windows) between 1 and 10 yields a ~2.5x
increase in requests per second. I don't have a readily accessible Linux box
right now. Kuba, please run these numbers again with --num-calls=10.
my reality is a lot of concurrent connections with only one
My point here was about the general web2py population rather than your
setup. No offense intended, but you have a special case. web2py handles
web services, but that is not its primary function.
yes, true, I was just explaining my httperf thinking
I think Massimo wishes
to primarily
I would add a vote for Rocket.
A few thoughts:
- rocket is developed inside our community, which means more control over it:
feedback, contributions, etc.
- still young, which means it will be optimized :) I believe that Tim and
others will do so
- one file
And even if cherrypy is only a bit
One instance of each, with 10 calls per connection, as it is closer to a
real-life scenario:
(numbers speak for themselves)
CHERRYPY:
r...@kubatron:/home/kuba/httperf-0.9.0/src# ./httperf --hog --server 192.168.0.1 --port=8000 --uri=/vae/default/benchmark2 --num-conns=1 --num-calls=10
httperf
Thank you Kuba. Would you mind re-running the 4x pound test like this also?
On 3/19/2010 3:09 PM, Kuba Kucharski wrote:
One instance of each, with 10 calls per connection, as it is closer to a
real-life scenario:
(numbers speak for themselves)
CHERRYPY:
1.0.2 is out. Go get it!
On 3/18/2010 11:57 AM, mdipierro wrote:
from https://launchpad.net/rocket the second green button on the right
is Rocket-mono-xxx.zip
Unzip it. You get rocket.py. Move it into web2py/gluon/
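As a shell sketch of the steps above (the exact zip filename and version are an assumption; check the Launchpad download page for the current one):

```shell
# Unzip the single-file Rocket distribution downloaded from Launchpad
# and drop rocket.py into web2py's gluon/ package.
unzip Rocket-mono-1.0.2.zip            # produces rocket.py
mv rocket.py /path/to/web2py/gluon/
```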
web2py trunk already uses 1.0.1, so we have to wait for Tim to post the
new one.
With 4x Rocket via Pound all is OK, but with Rocket solo I get 4703
addrunavail errors (in httperf this should never happen, and it
renders these benchmarks useless) per 1 connections. I think this
might be about Linux tweaking. Does anyone have more experience
with setting sysctl environment
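The addrunavail errors typically mean the benchmarking client ran out of ephemeral ports, since each closed connection lingers in TIME_WAIT. A common Linux tuning sketch (run as root; values are illustrative, not a recommendation from the thread) widens the port range and frees ports sooner:

```shell
# Widen the ephemeral port range available to httperf's client sockets
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# Shorten FIN timeout and allow reuse of TIME_WAIT sockets
# so ports become available again sooner
sysctl -w net.ipv4.tcp_fin_timeout=15
sysctl -w net.ipv4.tcp_tw_reuse=1
```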
I've changed the methodology a bit, so I repeat my measurements for ROCKET at
the end of the file.
Methodology: increase the rate till errors show.
@Massimo
As you can see, quad cherrypy is faster than quad rocket, but when you
look closer at the SOLO comparison you can see that both servers are
hitting the SAME WALL.
The last one is the doubled rocket solo run, without a header.
--
Kuba
Did you compile the app before running the benchmarks?
yes
Can you say more about "The most important thing: effects depend much
on what you import"?
Imports should be cached and should not make a difference.
Actually they don't; it is late, I was going to say extend/include, not import.
--
I was going to say extend/include, not import,
and even this is not true, as I now see for a small layout.