True, for a definition of micro-benchmark you have decided on for yourself
rather than asking me to clarify....
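
To make that concrete, here is a rough sketch of the kind of thing I have in
mind: time one confined operation in isolation, with a warm-up pass so the
JIT has settled before we measure. The serialization step timed below is only
a placeholder, not anything from the River codebase.

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;

public class SerializationMicroBenchmark {

    private static final int WARMUP_ITERATIONS = 10_000;
    private static final int MEASURED_ITERATIONS = 100_000;

    public static void main(String[] args) throws Exception {
        // Warm-up: give the JIT a chance to compile the hot path
        // before any timing is recorded.
        for (int i = 0; i < WARMUP_ITERATIONS; i++) {
            serializeOnce();
        }

        long start = System.nanoTime();
        for (int i = 0; i < MEASURED_ITERATIONS; i++) {
            serializeOnce();
        }
        long elapsed = System.nanoTime() - start;

        System.out.printf("Average: %.1f ns per operation%n",
                (double) elapsed / MEASURED_ITERATIONS);
    }

    // Placeholder workload: serialize a small object. Swap in whichever
    // confined step we have just changed.
    private static void serializeOnce() throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new java.util.Date());
        }
    }
}

Such a harness only ever claims to compare that one step before and after a
change; it says nothing about whole-system behaviour, which is where the
real-workload measurements you describe come in.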

On 7 April 2013 09:37, Patricia Shanahan <p...@acm.org> wrote:

> On 4/7/2013 1:04 AM, Dan Creswell wrote:
>
>> On 7 April 2013 05:24, Patricia Shanahan <p...@acm.org> wrote:
>>
>>> On 4/6/2013 7:26 PM, Greg Trasuk wrote:
>>> ...
>>>
>>>> Once we have a stable set of regression tests, then OK, we could
>>>> think about improving performance or using Maven repositories as the
>>>> codebase server.
>>>>
>>>> ...
>>>
>>> I think there is something else you need before it would be a good idea
>>> to release any changes for the sake of performance - some examples of
>>> workloads whose performance you want to improve, and that are in fact
>>> improved by the changes.
>>>
>>>
>> Indeed. And if we make these changes in a confined, one-small-step-at-a-time
>> fashion, we can build micro-benchmarks as we go along.
>>
>> Later, if it makes sense, we can look at more macro tests, e.g. Lookup or
>> JavaSpace or Transaction.
>>
>>
> Programmer-constructed micro-benchmarks are good at answering the
> question "How am I doing at optimizing X?". They are useless for
> answering the question that needs to be asked first: "What, if anything,
> should I be optimizing?".
>
> To find out what, if anything, needs optimization one has to start from
> real workloads that are running slower than desired, and measure them.
> Or, in some cases, from industry benchmarks that are known to be
> correlated with real workloads in some field.
>
> Patricia
>
>
