Re: ANNOUNCE: Updated Hello World Web Application Benchmarks

2000-01-30 Thread Perrin Harkins


 On Sun, 30 Jan 2000, Perrin Harkins wrote:
  I can understand that; I just don't want mod_perl users to get a
  reputation as the Mindcraft of web application benchmarks.

 I'm not sure I see how that can happen when we quite clearly state that
 php4 is faster than mod_perl.

Only one person bothered to do both php and a PerlHandler, and in his test
mod_perl came out a little bit ahead.  On the "fastest" page, PerlHandler
has a much higher score than php.  This is the kind of confusion I'm talking
about.
- Perrin



Re: ANNOUNCE: Updated Hello World Web Application Benchmarks

2000-01-29 Thread Perrin Harkins

Joshua Chamas wrote:
 There is no way that people are going to benchmark
 10+ different environments themselves, so this merely offers
 a quick fix to get people going with their own comparisons.

I agree that having the code snippets for running hello world on
different tools collected in one place is handy.
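
For anyone following along who hasn't looked at the pages, the
PerlHandler version of the test is only a few lines.  A minimal
sketch of the kind of thing being timed, using the mod_perl 1.x
API (the package name here is my own invention, not necessarily
what any submitter actually used):

    # Minimal "hello world" PerlHandler (mod_perl 1.x API).
    package Apache::HelloBench;
    use strict;
    use Apache::Constants qw(OK);

    sub handler {
        my $r = shift;
        $r->content_type('text/html');
        $r->send_http_header;
        $r->print("<html><body>Hello World</body></html>\n");
        return OK;
    }

    1;

    # httpd.conf:
    #   <Location /perl-hello>
    #       SetHandler  perl-script
    #       PerlHandler Apache::HelloBench
    #   </Location>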

 Do you have any idea how much time it takes to do these?

Yes, I've done quite a few of them.  I never said they were easy.

 In order to improve the benchmarks, like the Resin & Velocigen
 ones that you cited where we have a very small sample, we simply
 need more numbers from more people.

I think we would need more numbers from the exact same people, on the
same machines, with the same configuration, the same client, the same
network, the same Linux kernel... In other words, controlled conditions.
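
By "controlled" I mean something like this: one client, one set of
flags, one server box, runs done back to back.  A rough sketch,
assuming ApacheBench as the client (the hostname and URLs here are
made up):

    #!/usr/bin/perl -w
    # Run the identical ApacheBench invocation against each
    # environment and pull out the requests-per-second figure.
    use strict;

    my @targets = qw(
        http://testbox/hello.html
        http://testbox/hello.php
        http://testbox/perl-hello
    );

    for my $url (@targets) {
        my $out = `ab -n 10000 -c 10 $url 2>/dev/null`;
        my ($rps) = $out =~ /Requests per second:\s+([\d.]+)/;
        printf "%-28s %s req/sec\n", $url, defined $rps ? $rps : 'n/a';
    }

Anything less than that and you're comparing apples to oranges.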

 Also, any disclaimer modifications might be good if you feel
 there can be more work done there.

Ideally, I would get rid of every page except the one which lists the
tests grouped by OS/machine.  Then I would put a big statement at the
top saying that comparisons across different people's tests are
meaningless.

- Perrin



Re: ANNOUNCE: Updated Hello World Web Application Benchmarks

2000-01-29 Thread Joshua Chamas

Perrin Harkins wrote:
 
 I think we would need more numbers from the exact same people, on the
 same machines, with the same configuration, the same client, the same
 network, the same Linux kernel... In other words, controlled conditions.
 

I hear you, so how about a recommendation that people submit at
least two benchmarks to be eligible for listing: static html, plus
at least one other.  The static html number can then be used as a
rough control against other systems.
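
Each listing could then show the application's rate as a fraction of
the same tester's static rate, so hardware and network differences
mostly cancel out.  Roughly like this (the hit rates below are made
up for illustration):

    # Express each environment's rate relative to the same tester's
    # static html control; the ratios travel across machines better
    # than the raw hits/sec do.
    use strict;

    my %hits_per_sec = (
        'static html' => 900,
        'mod_perl'    => 600,
        'Apache::ASP' => 250,
    );

    my $control = $hits_per_sec{'static html'};
    for my $env (sort keys %hits_per_sec) {
        printf "%-12s %4d hits/sec  (%.2f of static)\n",
            $env, $hits_per_sec{$env}, $hits_per_sec{$env} / $control;
    }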

 Ideally, I would get rid of every page except the one which lists the
 tests grouped by OS/machine.  Then I would put a big statement at the
 top saying that comparisons across different people's tests are
 meaningless.
 

I see where you are going: you feel that the summarized results are
misleading, and to some extent they are.  They are not "controlled",
so people's various hardware, OS, and configuration choices come into
play very strongly in how each benchmark performed, and readers can't
be expected to digest all the info presented and what it all really
means.

I think too that the OS/machine results at 
http://www.chamas.com/bench/hello_bycode.html could be made more useful
for comparison if the results were also grouped by tester, network
connection type, and testing client, so that each grouping would better
reflect the relative speed differences between web applications on the
same platform.

I would argue that we should keep the code type grouping listed at
http://www.chamas.com/bench/hello_bycode.html because it gives
a good feel for how some operating systems & web servers are faster 
than others, i.e., Solaris slower than Linux, WinNT good for static 
HTML, Apache::ASP faster than IIS/ASP PerlScript, etc.  

I should drop the normalized results at 
http://www.chamas.com/bench/hello_normalized.html as they are unfair, 
and could easily be read wrong.  You are not the first to complain 
about this.  The other pages sort by Rate/MHz anyway, so someone
can get a rough idea from those pages of what's faster overall.
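
For reference, the Rate/MHz figure is just the measured hit rate
divided by the machine's clock speed, e.g. supposing a 1042 hits/sec
run came off a 600MHz CPU:

    # Rate/MHz: hits per second divided by CPU clock in MHz.
    my $rate_per_mhz = 1042 / 600;    # ~1.74 hits/sec per MHz
    printf "%.2f hits/sec/MHz\n", $rate_per_mhz;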

Finally, I would very much like to keep the fastest benchmark page
as the first page, disclaiming it to death if necessary, the reason 
being that I would like to encourage future submissions, with 
new & faster hardware & OS configurations, and the best way to do 
that is to have something of a benchmark competition happening on the 
first page of the results.

It seems that HTTP 1.1 submissions represent a small subset of
skewed results; should these be dropped or presented separately?
I already exclude them from the "top 10" style list since they
don't compare well to HTTP 1.0 results, which are the majority.
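
The skew makes sense, since with keep-alive the client isn't paying
for a new TCP connection on every hit.  If the tester uses
ApacheBench, the difference is a single flag (hostname made up):

    ab -n 10000 -c 10    http://testbox/hello.html    # new connection per hit
    ab -n 10000 -c 10 -k http://testbox/hello.html    # persistent connections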

I also need to clarify some results, or back them up somehow.
What should I do with results that seem skewed in general?
Not post them until there is secondary confirmation?

Thanks Perrin for your feedback.

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks - free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com               1-714-625-4051



Re: ANNOUNCE: Updated Hello World Web Application Benchmarks

2000-01-29 Thread Perrin Harkins

 I think too that the OS/machine results at
 http://www.chamas.com/bench/hello_bycode.html could be made more useful
 for comparison if the results were also grouped by tester, network
 connection type, and testing client, so that each grouping would better
 reflect the relative speed differences between web applications on the
 same platform.

Agreed.

 I would argue that we should keep the code type grouping listed at
 http://www.chamas.com/bench/hello_bycode.html because it gives
 a good feel for how some operating systems & web servers are faster
 than others, i.e., Solaris slower than Linux, WinNT good for static
 HTML, Apache::ASP faster than IIS/ASP PerlScript, etc.

See, I don't think you can even make statements like that based on these
benchmarks.  Where is the test on Solaris x86 and Linux done by the same
person under the same conditions?  I don't see one.  Where is the test of NT
and Linux on the same machine by the same person?  Even the Apache::ASP vs
PerlScript comparisons you did seem to be using different clients, network
setups, and versions of NT.

I'm not criticizing you for not being able to get lab-quality results, but I
think we have to be careful what conclusions we draw from these.

 Finally, I would very much like to keep the fastest benchmark page
 as the first page, disclaiming it to death if necessary, the reason
 being that I would like to encourage future submissions, with
 new & faster hardware & OS configurations, and the best way to do
 that is to have something of a benchmark competition happening on the
 first page of the results.

I can understand that; I just don't want mod_perl users to get a reputation
as the Mindcraft of web application benchmarks.

 It seems that HTTP 1.1 submissions represent a small subset of
 skewed results; should these be dropped or presented separately?

I'd say they're as meaningful as any of the others if you consider them
independently of the other contributions.

 I also need to clarify some results, or back them up somehow.
 What should I do with results that seem skewed in general?
 Not post them until there is secondary confirmation?

Your call.  Again, to my mind each person's contribution can only be viewed
in its own private context, so one is no more skewed than any other.

- Perrin




ANNOUNCE: Updated Hello World Web Application Benchmarks

2000-01-28 Thread Joshua Chamas

Hey,

I have updated the Hello World Web Application Benchmarks, 
now at http://www.chamas.com/bench/

The old page hello_world.html points here now, if anyone 
could update the link at http://perl.apache.org/, that would 
be grand.

New in the fastest benchmarks are:

 + the fastest yet mod_perl results, at 1042 hits per sec
   thanks Chip Turner, apparently his 100Mbps network was 
   the bottleneck ;) !!!

 + Velocigen Perl on Linux/Apache thanks to 
   Shahin Askari of Velocigen

 + 1st benchmarks for JSP Java and JSP JavaScript for 
   Caucho's Resin, and best benchmark for Java Servlet, 
   thanks to Scott Ferguson of Caucho

 + 1st benchmarks for RXML / Roxen on WinNT

New cool hardware benchmarks listed at 
http://www.chamas.com/bench/hello_bysystem.html
 
 + Linux/RH 2.2.14 - PIII-500 x 2
 + Linux/RH 2.2.14 - Athlon-600 (ooh an Athlon)
   thanks again Chip!

Thanks for all of your contributions, and keep them coming!

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks - free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com               1-714-625-4051



Re: ANNOUNCE: Updated Hello World Web Application Benchmarks

2000-01-28 Thread Joshua Chamas

Perrin Harkins wrote:
 
 On Fri, 28 Jan 2000, Joshua Chamas wrote:
  I have updated the Hello World Web Application Benchmarks,
  now at http://www.chamas.com/bench/
 
 The end result of all this is that you have benchmark numbers which, while
 sort of entertaining, should not be used to make any kind of decision.  I
 would hate to think that someone used these numbers as the basis for
 making decisions and didn't benchmark the options himself.  I almost fell
 into that trap when I saw the low scores for certain things and wrote them
 off without doing my own testing.  (Yes, I know a good developer should
 never believe other people's benchmarks.)
 
 Although there is a disclaimer on the page, I wish it said more.  In some
 ways, it's worse to have these misleading benchmarks than no benchmarks at
 all.  I shudder to think what a naive manager might do with these.
 
 Okay, I'm done complaining for now.  Sorry Joshua, I know you put effort
 into this, and I do appreciate it.
 

I hear you Perrin.  It's better to have something than nothing, 
though.  There is no way that people are going to benchmark 
10+ different environments themselves, so this merely offers 
a quick fix to get people going with their own comparisons.
Do you have any idea how much time it takes to do these? 
Run a few on different operating systems and the time involved 
is humbling, to say the least.

In order to improve the benchmarks, like the Resin & Velocigen 
ones that you cited where we have a very small sample, we simply 
need more numbers from more people.  Because these benchmarks are 
open to anyone submitting, we can all scrutinize the benchmarks 
submitted by those with agendas and submit our own.

If you can think of any way to make these benchmarks better, beyond 
simply adding more data, please feel free to suggest improvements.
Also, any disclaimer modifications might be good if you feel 
there can be more work done there.

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks - free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com               1-714-625-4051