Perrin Harkins wrote:
> 
> I think we would need more numbers from the exact same people, on the
> same machines, with the same configuration, the same client, the same
> network, the same Linux kernel... In other words, controlled conditions.
> 

I hear you, so how about a recommendation that people submit
no fewer than two benchmarks to be eligible for listing: at least
a static HTML test, plus one other.  The static HTML result can then
serve as a rough control when comparing against other systems.

> Ideally, I would get rid of every page except the one which lists the
> tests grouped by OS/machine.  Then I would put a big statement at the
> top saying that comparisons across different people's tests are
> meaningless.
> 

I see where you are going: you feel that the summarized results
are misleading, and to some extent they are, since they are not
"controlled".  People's various hardware, OS, and configuration
come into play very strongly in how each benchmark performs, and
readers can't be expected to digest all the info presented and
what it all really means.

I also think the OS/machine results at
http://www.chamas.com/bench/hello_bycode.html would allow more accurate
comparisons if the results were further grouped by tester,
network connection type, and testing client, so that each grouping
would better reflect the relative speed differences between web
applications on the same platform.
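
Here's a minimal sketch of the kind of grouping I mean, assuming each
submission is a hashref with hypothetical tester/network/client/os and
rate fields (the real data may be laid out differently):

    use strict;

    # Group submissions so comparisons only happen within the same
    # tester / network connection / client / OS combination.
    sub group_results {
        my @submissions = @_;
        my %groups;
        for my $s (@submissions) {
            my $key = join '|', @{$s}{qw(tester network client os)};
            push @{ $groups{$key} }, $s;
        }
        # Within each group, sort by raw request rate, fastest first.
        for my $key (keys %groups) {
            @{ $groups{$key} } =
                sort { $b->{rate} <=> $a->{rate} } @{ $groups{$key} };
        }
        return \%groups;
    }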

I would argue that we should keep the code type grouping listed at
http://www.chamas.com/bench/hello_bycode.html because it gives
a good feel for how some operating systems & web servers are faster
than others, e.g., Solaris slower than Linux, WinNT good for static
HTML, Apache::ASP faster than IIS/ASP PerlScript, etc.

I should drop the normalized results at
http://www.chamas.com/bench/hello_normalized.html as they are unfair
and could easily be misread.  You are not the first to complain
about this.  The other pages sort by Rate/MHz anyway, so from those
pages someone can get a rough idea of what's faster overall.
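
If it helps to spell it out, Rate/MHz is just the raw request rate
divided by the machine's reported CPU clock speed; roughly like the
sketch below, with made-up field names and numbers purely for
illustration:

    use strict;

    # Illustrative submission records; the real pages use submitted data.
    my @results = (
        { name => 'static html', rate => 850, cpu_mhz => 450 },
        { name => 'Apache::ASP', rate => 120, cpu_mhz => 450 },
    );

    # Rough cross-machine ordering: requests per second per CPU MHz.
    # Results without a reported clock speed sort to the bottom.
    sub rate_per_mhz {
        my ($r) = @_;
        return 0 unless $r->{cpu_mhz};
        return $r->{rate} / $r->{cpu_mhz};
    }

    my @sorted = sort { rate_per_mhz($b) <=> rate_per_mhz($a) } @results;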

Finally, I would very much like to keep the fastest benchmark page
as the first page, disclaiming it to death if necessary.  The reason
is that I would like to encourage future submissions with new & faster
hardware & OS configurations, and the best way to do that is to have
something of a benchmark competition happening on the first page of
the results.

It seems that HTTP 1.1 submissions represent a small subset of
skewed results; should these be dropped or presented separately?
I already exclude them from the "top 10" style list since they
don't compare well to HTTP 1.0 results (keepalive connections avoid
per-request connection setup, inflating the rate), and HTTP 1.0
results are the majority.

I also need to clarify some results, or back them up somehow.
What should I do with results that seem skewed in general?
Should I hold off posting them until there is secondary confirmation?

Thanks, Perrin, for your feedback.

-- Joshua
_________________________________________________________________
Joshua Chamas                           Chamas Enterprises Inc.
NodeWorks >> free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com                1-714-625-4051
