Gavin Sherry wrote:
Not all machines stay the same over time.
A machine may be upgraded, may be getting backed up, or may in
some other way be utilised during a performance test. This would skew the
stats for that machine. It may confuse people more than help them...

At the very least, the performance figures would need to be accompanied by
details of what other processes were running and what resources they were
chewing during the test.
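
As a rough sketch of the kind of annotation meant here (the helper and
its fields are purely illustrative, not any existing buildfarm tooling,
and the ps options shown are the GNU/Linux ones):

    import json, os, subprocess, time

    def system_snapshot():
        """Capture load average and the busiest processes, so a
        performance figure can be annotated with what else was
        running on the machine at the time."""
        load1, load5, load15 = os.getloadavg()  # Unix only
        # Top CPU consumers right now; the --sort flag is procps/Linux,
        # BSD ps spells this differently.
        ps = subprocess.run(
            ["ps", "-eo", "pid,pcpu,pmem,comm", "--sort=-pcpu"],
            capture_output=True, text=True,
        )
        return {
            "time": time.time(),
            "loadavg": [load1, load5, load15],
            "top_processes": ps.stdout.splitlines()[:11],
        }

    # Snapshot before and after the run, and ship both with the results.
    before = system_snapshot()
    # ... run the performance test here ...
    after = system_snapshot()
    print(json.dumps({"before": before, "after": after}, indent=2))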

This is what was nice about the OSDL approach. Each test was preceded by
an automatic reinstall of the OS, and the machines were dedicated to
testing. The tester had complete control.

We could perhaps mimic some of that using virtualisation tools which
control access to system resources, but it won't work on all platforms. The
problem is that it probably introduces a new variable, in that I'm not
sure that virtualisation software can absolutely limit the CPU resources a
particular container gets. That is, you might not be able to get
reproducible runs with the same code. :(
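
A cheap way to check that worry would be to time several identical runs
inside the container and look at the spread. A rough sketch, assuming a
Unix-like host with Python, with the pgbench invocation standing in for
whatever the real test driver would be:

    import statistics, subprocess, time

    # Placeholder benchmark command; substitute the real test driver.
    BENCH_CMD = ["pgbench", "-c", "4", "-t", "1000", "testdb"]
    RUNS = 5

    timings = []
    for _ in range(RUNS):
        start = time.monotonic()
        subprocess.run(BENCH_CMD, check=True, stdout=subprocess.DEVNULL)
        timings.append(time.monotonic() - start)

    mean = statistics.mean(timings)
    # Coefficient of variation: a large value suggests the container
    # is not getting a stable share of CPU from run to run.
    cv = statistics.stdev(timings) / mean
    print(f"mean {mean:.2f}s, spread {cv:.1%} across {RUNS} runs")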


We are really not going to go in this direction. If you want ideal performance tests, then a heterogeneous, distributed collection of autonomous systems like the buildfarm is not what you want.

You are going to have to live with the fact that there will be occasional, possibly even frequent, blips in the data due to other activity on the machine.

If you want tightly controlled or very heavy load testing, this is the wrong vehicle.

You might think that what that leaves us with is not worth having. The consensus in Toronto seemed to be that it is worth having, which is why it is being pursued.

cheers

andrew

