On Tue, Nov 29, 2005 at 02:03:46AM +0000, Paul Brook wrote:
> > I was hoping that having it there when people did test runs would change
> > the psychology; instead of having already checked in a patch, which
> > we're then looking to revert, we'd be making ourselves aware of
> > performance impact before check-in, even for patches that we don't
> > expect to have performance impact.  (For major new optimizations, we
> > already expect people  to do some benchmarking.)
> 
> I'd be surprised if you could get meaningful performance numbers on anything 
> other than a dedicated performance-testing machine. There are simply too 
> many external factors on a typical development machine[*].
> 
> I'm not saying we shouldn't try to do some sort of performance testing. Just 
> that even if we find a reasonable benchmark then adding it to "make check" 
> probably isn't going to give much useful data.

I think the only _feasible_ way to do this would be with cycle counting,
i.e. simulators, and the _usefulness_ of the available simulators as a
proxy for performance on today's hardware is probably too limited.

-- 
Daniel Jacobowitz
CodeSourcery, LLC
