> Well, the only way it's going to get fixed is if someone sits down,
> replicates it, and starts to document exactly what it is that these
> benchmarks are/aren't doing.
>

I think you will find that investigation is largely a waste of time.
Not only are some of these benchmarks downright silly, but there are
huge differences in the environments (compiler versions, etc.), making
this largely an apples-to-oranges comparison. On top of that,
Phoronix's analysis and reporting of the results is simply moronic, to
the point of being worse than useless: they are spreading
misinformation.

Take the first test as an example, the Blogbench read test. This
doesn't raise any red flags, right? At least not until you realize
that Blogbench isn't a read test; it's a combined read/write test. So
what they have done here is run a read/write test, then thrown away
the write results for both platforms and reported only the read
results. If you dig down into the actual results --
http://openbenchmarking.org/result/1112113-AR-ORACLELIN37 -- you will
see two Blogbench numbers, one for reads and one for writes, both
taken from the same Blogbench run. FreeBSD optimizes writes over
reads; that's probably a good thing for your data, but a bad thing
when someone totally misrepresents the benchmark results.
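To see why quoting only half of a combined benchmark is misleading,
here is a toy sketch with invented numbers (these are NOT the actual
Oracle Linux / FreeBSD figures, just an illustration of the flaw):

```python
# Toy example with made-up scores -- NOT the real Phoronix results.
# Blogbench emits one read score and one write score from the same
# run; reporting only the read column hides the read/write trade-off.
results = {
    "System A": {"read": 200_000, "write": 1_000},  # tuned for reads
    "System B": {"read": 120_000, "write": 4_000},  # favors writes
}

best_by_read = max(results, key=lambda s: results[s]["read"])
best_by_write = max(results, key=lambda s: results[s]["write"])

print("winner by reads alone: ", best_by_read)   # System A
print("winner by writes alone:", best_by_write)  # System B
# Both columns came from the same run; quoting only one of them
# reverses the apparent ranking.
```

A system that deliberately trades read throughput for write
throughput will always lose a "read-only" comparison built this way,
regardless of how it performs overall.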

Other benchmarks in the Phoronix suite, and the way they are
represented, are similarly flawed. _ALL_ of these results should be
ignored, and no FreeBSD committer should waste any further time
evaluating this garbage. (Yes, I have been down this rabbit hole.)

Best,
Sam
_______________________________________________
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "freebsd-performance-unsubscr...@freebsd.org"
