On Fri, Aug 19, 2011 at 2:21 AM, Andy Doan <andy.d...@linaro.org> wrote:
> On 08/17/2011 04:59 PM, Michael Hope wrote:
>> On Wed, Aug 17, 2011 at 11:12 PM, Dave Martin <dave.mar...@linaro.org> wrote:
>>> On Tue, Aug 16, 2011 at 7:14 PM, Zach Pfeffer <zach.pfef...@linaro.org> 
>>> wrote:
>>>> Nicolas,
>>>>
>>>> Thanks for the notes. As you say there are many, many things that can
>>>> affect this demo. What notes like this really underscore is the
>>>> importance of staying up-to-date. This demo is more about the
>>>> macroscopic effects from tip support than anything else. We do have
>>>> some more specific benchmark numbers at:
>>>>
>>>>  https://wiki.linaro.org/Platform/Android/AndroidToolchainBenchmarking
>>>
>>> If we're confident that the benchmark produces trustworthy
>>> results, then that's fine.  I don't know this benchmark in detail,
>>> so I can't really judge, other than that the results look a bit odd.
>>
>> Ditto on that.  Have these benchmarks been qualified?  Do they
>> represent real workloads?  Where do they come from?  What aspects of
>> the system (CPU, memory, I/O, kernel, SMP) do they exercise?  How
>> sensitive are they to minor changes?
>
> The benchmark code comes from Android:
>  http://android.git.kernel.org/?p=toolchain/benchmark.git
>
> I'm not an expert on benchmarking. I've just tried to focus on running
> these in a way that's as fair and repeatable as possible.

OK.  Just keep an eye out then.  If the benchmarks are dominated by
things that Linaro isn't working on (such as I/O performance or memory
bandwidth) then the results won't change.  If they're dominated by
certain inner functions that are very sensitive to environment
changes, then you may see regressions that are really just noise.
Benchmarks need to represent the workloads of a real system.
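
To make that concrete, here's a rough sketch of the kind of sanity
check I mean (the bench-old/bench-new binaries are made up for
illustration, this isn't from any of our harnesses): time each build
several times, and only call a delta a regression if it clears the
baseline's run-to-run noise floor.

#!/usr/bin/env python3
# Rough sketch only: the benchmark binaries named below are made up.
# Idea: a delta only counts as a regression if it is larger than the
# run-to-run variation of the baseline itself.
import statistics
import subprocess
import time

def run_benchmark(cmd, runs=10):
    # Time cmd several times; return the wall-clock times in seconds.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True,
                       stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL)
        times.append(time.perf_counter() - start)
    return times

def compare(baseline, candidate):
    # Treat three standard deviations of the baseline runs as the
    # noise floor; anything below that is indistinguishable from noise.
    noise = 3 * statistics.stdev(baseline)
    delta = statistics.mean(candidate) - statistics.mean(baseline)
    if delta > noise:
        print("likely regression: %.3fs slower (noise floor %.3fs)"
              % (delta, noise))
    else:
        print("within noise: delta %+.3fs (noise floor %.3fs)"
              % (delta, noise))

if __name__ == "__main__":
    # Hypothetical binaries built with the old and new toolchains.
    compare(run_benchmark("./bench-old"), run_benchmark("./bench-new"))

A benchmark whose noise floor swamps the deltas you care about isn't
going to tell you much about the toolchain either way.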

-- Michael

