> I was hoping that having it there when people did test runs would change
> the psychology; instead of having already checked in a patch, which
> we're then looking to revert, we'd be making ourselves aware of
> performance impact before check-in, even for patches that we don't
> expect to have performance impact.  (For major new optimizations, we
> already expect people to do some benchmarking.)

I'd be surprised if you could get meaningful performance numbers on anything 
other than a dedicated performance-testing machine. There are simply too 
many external factors on a typical development machine[*].
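
To put a number on that: even a trivial harness like the one below will 
typically show a run-to-run spread on a busy desktop that's bigger than the 
few-percent regressions we'd want to catch. (Rough, untested sketch; the 
workload loop and run count are arbitrary placeholders.)

  #include <stdio.h>
  #include <time.h>

  static double now_sec(void)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return ts.tv_sec + ts.tv_nsec / 1e9;
  }

  static void workload(void)
  {
      /* volatile so the loop can't be optimized away */
      volatile long s = 0;
      for (long i = 0; i < 50 * 1000 * 1000; i++)
          s += i;
  }

  int main(void)
  {
      double min = 1e9, max = 0.0;
      for (int run = 0; run < 20; run++) {
          double t0 = now_sec();
          workload();
          double dt = now_sec() - t0;
          if (dt < min) min = dt;
          if (dt > max) max = dt;
          printf("run %2d: %.4f s\n", run, dt);
      }
      printf("spread: %.1f%% (min %.4f s, max %.4f s)\n",
             100.0 * (max - min) / min, min, max);
      return 0;
  }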

I'm not saying we shouldn't try to do some sort of performance testing. Just 
that even if we find a reasonable benchmark, adding it to "make check" 
probably isn't going to give much useful data.
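
If we did pursue it, the usual minimum is to pin the benchmark to a single 
CPU, so scheduler migration and SMP memory locality at least stop adding 
noise. A Linux-specific, untested sketch of just that part:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  int main(void)
  {
      cpu_set_t set;
      CPU_ZERO(&set);
      CPU_SET(0, &set);   /* pin the whole process to CPU 0 */
      if (sched_setaffinity(0, sizeof(set), &set) != 0)
          perror("sched_setaffinity");
      /* ... run the benchmark here ... */
      return 0;
  }

Even then, other load on the machine and frequency scaling can still swamp a 
small regression, which is why a dedicated box is the usual answer.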

Paul

[*] For example: any other kind of activity on the same machine, power 
management changing the CPU frequency, which CPUs in an SMP machine have 
local memory, or which of N different machines it was run on.
