Hi Will,

Good show regarding the events. One thing I've noticed in my tests is the need to exercise both the code that handles umasks and the code that handles events without umasks. The problem is the same, though: code for generating events of a certain type. But I suppose we could prototype something for AMD64 and/or Montecito.
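A check along those lines could be sketched in shell: make sure the event list covers both specs with a umask and specs without one. The event names here are hypothetical, and the `:` separator is an assumption about the pfmon event syntax; substitute the real events for the target CPU.

```shell
#!/bin/sh
# Sketch: verify the test's event list exercises both code paths,
# events with a unit mask and events without one. The ':' umask
# separator and the event names below are assumptions.

has_umask() {
    case $1 in
        *:*) return 0 ;;  # spec carries a unit mask
        *)   return 1 ;;
    esac
}

# Hypothetical event list; replace with real events for the target CPU.
events="CPU_CLK_UNHALTED INST_RETIRED:ANY_P"

with=0; without=0
for e in $events; do
    if has_umask "$e"; then with=$((with + 1)); else without=$((without + 1)); fi
done

echo "with umask: $with, without: $without"
[ "$with" -gt 0 ] && [ "$without" -gt 0 ] && echo "both paths covered"
```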
Phil

On Tue, 2006-11-28 at 09:23 -0500, William Cohen wrote:
> William Cohen wrote:
> > Hi,
> >
> > I have looked over what is currently in the pfmon/tests directory and
> > made a quick list of things that would be good to test for pfmon/libpfm.
> >
> > Check that all the kernel system calls for perfmon function:
> > - each call exists in the kernel
> > - error checking prevents invalid arguments from crashing the system
> >
> > Generic pfmon tests:
> > - list of events available
> > - repeatable measurements for sequential invocations of the same process
> > - repeatable measurements for multiple independent concurrent processes
> > - measure user-space only/kernel-space only/both user and kernel space
> > - measure following across fork, vfork, exec, pthread
> > - measure with signal in process
> > - attach to existing task
> > - sampling using available buffers
> > - system-wide setup
>
> Hi all,
>
> Attached is a more recent version of the tests for perfmon2. The following
> changes have been made:
>
> - adapted to run with 2.6.19-rc6-git10-perfmon2
> - repeatable measurements for multiple independent concurrent processes
> - added tests for AMD64, P6-based processors, and Itanium 2/Montecito
>
> Comments on the tests would be appreciated.
>
> The tests require DejaGnu/Expect to run. To run them, use the following
> commands:
>
>   tar xvfz testsuite.tgz
>   cd testsuite
>   runtest --tool=pfmon
>
> A couple of thoughts/issues on the current set of tests:
>
> The tests don't cover all the processor implementations that perfmon2
> supports.
>
> On some of the processors it might be difficult to select events that will
> have reasonably repeatable results. Some of the processors have very few
> events to choose from. Cycle counts could vary greatly with clock scaling
> and interrupts. Instruction counts could also vary significantly depending
> on the code the compiler generates.
>
> The tests are not designed to exhaustively check all the events available
> for a processor.
> In general the tests are just designed to check that an event (or two) can
> be measured on the hardware. Thus, problems with an event being coded
> incorrectly are unlikely to be caught by the tests. This decision might
> need to be revisited for lists of architected events, e.g. to make sure
> that this subset of events is measured correctly.
>
> The tests don't clean up the executables after a run. Thus, if perfmon
> changed and the second version doesn't compile, the old version of the
> executable is run.
>
> -Will
> _______________________________________________
> perfmon mailing list
> [email protected]
> http://www.hpl.hp.com/hosted/linux/mail-archives/perfmon/
