On 21 September 2012 07:17, Andrei Alexandrescu <
seewebsiteforem...@erdani.org> wrote:

> On 9/20/12 10:05 AM, Manu wrote:
>
>> Memory locality is often the biggest contributing performance hazard in
>> many algorithms, and usually the most unpredictable. I want to know about
>> that in my measurements. Reproducibility is not as important to me as
>> accuracy. And I'd rather be conservative(/pessimistic) with the error.
>>
>> What guideline would you apply to estimate 'real-world' time spent when
>> always working with hyper-optimistic measurements?
>>
>
> The purpose of std.benchmark is not to estimate real-world time. (That is
> the purpose of profiling.) Instead, benchmarking measures and provides a
> good proxy of that time for purposes of optimizing the algorithm. If work
> is done on improving the minimum time given by the benchmark framework, it
> is reasonable to expect that performance in-situ will also improve.
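
The minimum-of-runs idea described above can be sketched roughly as follows. This is only an illustration of the principle, not the std.benchmark API; the function name and parameters here are my own:

```python
import time

def benchmark_min(fn, runs=100):
    """Run fn several times and report the minimum elapsed time.

    Taking the minimum filters out one-sided noise (context switches,
    interference from other processes), yielding a stable proxy for the
    code's intrinsic cost -- the quantity one optimizes against.
    """
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Example: measure a small workload by its minimum time over 100 runs.
t = benchmark_min(lambda: sorted(range(1000)))
```

The point is that external perturbations can only add time, never subtract it, so the minimum is the measurement least contaminated by noise.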


Okay, I can buy this distinction in terminology.
What I'm typically more interested in is profiling. I do occasionally need
to do some benchmarking by your definition, so I'll find this useful, but
should there then be another module to provide a 'profiling' API? Or should
profiling be worked into this API as well?
