Hi Pavel,
  Platform-specific optimizations can be accommodated in the scheme
described by doing a cpuid check in the test and automatically passing or
disabling it on all other platforms. That shouldn't be too hard.
  I understand that some JIT optimizations are deeper and more abstract,
but ultimately the value of an optimization cannot just be the morphing of
an IR: the gain cannot be invisible to the user, nor the regression
undetectable. If an optimization needs to be part of a sequence to be
effective, the scenario in the test needs to be set up accordingly. It is a
little uncomfortable if a framework does some magic and then comes back and
says "everything is OK".
  Sorry to sound difficult.

Thanks,
Rana


On 9/14/06, Pavel Ozhdikhin <[EMAIL PROTECTED]> wrote:
>
> Hello Rana,
>
> When I think of an optimization which gives a 1% improvement on some
> simple workload, or a 3% improvement on EM64T platforms only, I doubt
> this can be easily detected with a general-purpose test suite. IMO
> performance regression testing should have a specialized framework and a
> stable environment which guarantees no user application can spoil the
> results.
>
> The right solution might also be a JIT testing framework which would
> understand the JIT IRs and check whether certain code patterns have been
> optimized as expected. That way we can guarantee the necessary
> optimizations are done independently of the user environment.
>
> Thanks,
> Pavel
>
>
>
>
>