Hi Valentin,

On Fri, May 15, 2009 at 10:45:11AM +0200, Valentin Haenel wrote:
> Hi,
>
> I am a bit confused about the best way to run the unit tests.
>
> When I run make test I get:
>
> "I: Running only non labile unittests. None of them should ever fail"
>
> These are all fine.
That is good.

> But later on I get unit tests that sometimes fail and sometimes pass, and I'm
> not really sure how to interpret that. Also its not really the same tests that
> pass/fail. Is this due to random numbers?
>
> Is there anything I should look out for that signals a "real" failure?

Some of the tests are flagged as 'labile'. This applies to tests that (to
some degree) depend on random factors, such as classifiers running on
randomly generated datasets. Nevertheless, we keep such tests, since some
pieces of the code will always carry a certain risk of failure, but should
still be run for coverage reasons.

You should be worried if anything 'non-labile' fails. Most of the labile
tests have an explanation attached to them, which will appear in the console
if they fail (e.g. "should have reached at least 80% accuracy, but only got
X"). If a test fails without an explanation, it might be worth adding one.

Michael

--
GPG key: 1024D/3144BE0F
Michael Hanke
http://apsy.gse.uni-magdeburg.de/hanke
ICQ: 48230050

_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
[email protected]
http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa
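To make the 'labile test with an explanation' idea concrete, here is a minimal
sketch in plain Python. This is not PyMVPA's actual test code: the names
train_and_score, check_labile, and ACCURACY_THRESHOLD are all hypothetical,
and the "classifier" is a toy stand-in that just flips a few labels at random.
The point is only the assertion message, which is the explanation a user would
see in the console when such a labile test fails.

```python
import random

# Hypothetical threshold for this sketch; real labile tests would pick
# a value appropriate to the classifier and dataset under test.
ACCURACY_THRESHOLD = 0.8

def train_and_score(rng):
    """Toy stand-in for a classifier run on a randomly generated dataset.

    Labels are noisy copies of the samples: roughly 5% are flipped, so
    the returned accuracy is high but depends on the random draw.
    """
    samples = [rng.choice([0, 1]) for _ in range(100)]
    predictions = [s if rng.random() > 0.05 else 1 - s for s in samples]
    correct = sum(p == s for p, s in zip(predictions, samples))
    return correct / len(samples)

def check_labile(seed):
    """Run the toy test once; fail with an explanatory message."""
    accuracy = train_and_score(random.Random(seed))
    # This message is the 'explanation' that shows up in the console
    # whenever the labile test fails, so the reader knows whether the
    # failure is a near-miss or a real problem.
    assert accuracy >= ACCURACY_THRESHOLD, (
        "should have reached at least 80%% accuracy, but only got %.2f"
        % accuracy)
    return accuracy
```

With a fixed seed the outcome is deterministic; without one, a run that
lands just below the threshold would fail with the message above rather
than a bare AssertionError, which is exactly what makes occasional labile
failures interpretable.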

