On Tue, 2011-02-15 at 19:12 -0500, David Malcolm wrote:
> Something that's been very successful for the development of Python
> itself is having a policy that all patches need to be accompanied by a
> test that verifies the correct working of the new code.
Hey, if it works on my machine ... ship it!

> This is one of the things that is done on patch review. See e.g.:
> http://docs.python.org/devguide/patch.html#preparation
> (Historically Python didn't have much of a test suite, but having this
> policy has meant that as things have evolved, the code being touched has
> had greatly enhanced test coverage).
>
> I'm mostly just lurking here, but I keep watching you review each
> other's patches, and every time I see changes that add new API
> entrypoints, or optimize things, I have an inner voice tell me "but
> where's the test case?"
>
> In particular, if yum's python API is meant to be the preferred API over
> the rpm python bindings for accessing packaging information, it ought to
> have a test suite that exercises that API, verifying various properties
> of it, demonstrating the expected usage (and hopefully ensuring that the
> stack is sane when e.g. python or rpm-python change from under you).

So, in general, there are four "kinds" of APIs within yum (although some
are more than one kind :o):

1. Stuff in cli/output/etc., which is UI.

2. Stuff that interacts with "the local machine", e.g. yumdb and rpmdb
   ... running transactions etc.

3. Stuff that interacts with "remote repos.", e.g. downloading stuff, or
   working directly with that data.

4. The stuff in the middle of #2 and #3, which gets the data from the
   other bits and "does something with it".

For #1: I know of no good way to test this that isn't more pain than not
automatically testing it (e.g. you can compare output, but then all your
tests break every time you tweak the UI).

For #2 and #3, testing is _really_ painful, as it means you'd have to ship
100s of MB of "test data" ... and even then, testing things like client
certs. or even server cert. checking is just _really_ hard to do
automatically. Also, to get full testing we'd need to test against
everything from RHEL-5 rpm upwards.

For #4, you can mock out #2 and #3 and create tests ... and that's most of
our current testsuite (there are exceptions, like testing rangeCompare()
etc.). Even this isn't 100% great though, as the mock'd objects don't
always behave identically to the real versions. (There's a rough sketch of
this pattern at the end of this mail.)

> Would it be helpful to institute a policy that patches that add new API
> entrypoints should contain test cases, and other patches should at least
> have a statement that all tests continue to pass?

All patches are required to pass "make check" already, and most patches to
the depsolver come with test cases ... the problem is that a huge amount
of our patches are #1..#3 things.

If someone was so inclined, then probably the biggest new "testsuite" win
would be to add some way to "fake up" new rpmdbs from a small amount of
data ... my guess is that if this is even viable, it'll make "make check"
take a lot longer, though. After that's done, and it replaces the mock
rpmdb, we could test lower-level rpmdb calls too (although I'd be more
inclined to stay fairly high level, and have the high-level tests exercise
the lower-level API calls). The obvious extension is to then fake up
sqlite repos. ... but I'm less sure how viable that is.
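To make the #4 point concrete, here's a rough sketch of the
mock-the-lower-layers pattern using plain unittest. Note that FakeRpmdb,
searchNames() and find_installed() are made-up names for illustration, not
yum's real API; the point is just that middle-layer logic which only talks
to the rpmdb through a narrow query interface can be handed an in-memory
fake instead of the real local machine:

    import unittest

    class FakeRpmdb(object):
        """Hypothetical stand-in for the rpmdb (kind #2): answers queries
        from a small in-memory list instead of touching the local machine."""
        def __init__(self, pkgs):
            self._pkgs = pkgs

        def searchNames(self, names):
            # Return every fake package whose name is in 'names'.
            return [p for p in self._pkgs if p['name'] in names]

    def find_installed(rpmdb, name):
        """Kind #4 logic under test: it only talks to the rpmdb through its
        query interface, so the fake above can be swapped in for the real
        thing."""
        hits = rpmdb.searchNames([name])
        return hits[0] if hits else None

    class FindInstalledTests(unittest.TestCase):
        def test_installed_pkg_is_found(self):
            db = FakeRpmdb([{'name': 'bash', 'version': '4.1'}])
            self.assertEqual(find_installed(db, 'bash')['version'], '4.1')

        def test_missing_pkg_returns_none(self):
            db = FakeRpmdb([])
            self.assertTrue(find_installed(db, 'bash') is None)

    if __name__ == '__main__':
        unittest.main()

The "fake up new rpmdbs from a small amount of data" idea above is
basically the same move one level down: instead of hand-writing fake
objects in each test, you'd build them from a compact description (say,
name/epoch/version/release tuples), so the fixtures stay small while
behaving more like the real rpmdb.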