Ben Pfaff <b...@ovn.org> writes:

> On Thu, Sep 06, 2018 at 04:56:18AM -0400, Aaron Conole wrote:
>> As of June, the 0-day robot has tested over 450 patch series.
>> Occasionally it spams the list (apologies for that), but for the
>> majority of the time it has caught issues before they made it to the
>> tree - so it's accomplishing the initial goal just fine.
>>
>> I see lots of ways it can improve.  Currently, the bot runs on a
>> light system.  It takes ~20 minutes to complete a set of tests,
>> including all the checkpatch and rebuild runs.  That's not a big
>> issue.  BUT, it does mean that the machine isn't able to perform all
>> the kinds of regression tests that we would want.  I want to improve
>> this in a way that lets various contributors bring their own hardware
>> and regression tests to the party.  That way, various projects can
>> detect potential issues before they ever land on the tree, and the
>> bot can flag functional changes earlier in the process.
>>
>> I'm not sure of the best way to do that.  One thing I'll be doing is
>> updating the bot to push a series that successfully builds and passes
>> checkpatch to a special branch on a github repository to kick off
>> travis builds.  That will give us more complete regression coverage,
>> and we can be confident that a series won't break something major.
>> After that, I'm not sure how to notify the various alternate test
>> infrastructures to kick off their own tests using the patched
>> sources.
>
> That's pretty exciting.
>
> Don't forget about appveyor, either.  Hardly any of us builds on
> Windows, so appveyor is likely to catch things that we won't.
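As an aside, the "push a series to a special branch" step described above
could be quite small.  Here is a hedged sketch in Python - the branch
naming scheme, the "github" remote name, and the helper names are all
illustrative assumptions, not the bot's actual code:

```python
# Hypothetical sketch: after a series passes checkpatch and builds,
# apply it on a fresh branch and push that branch to a Travis-enabled
# fork, which triggers a CI build.  Names below are assumptions.
import subprocess

def push_series_commands(series_id, mbox_path, remote="github"):
    """Return the git commands to apply a series on a per-series branch
    and push it; pushing is what kicks off the Travis build."""
    branch = "series_%s" % series_id
    return [
        ["git", "checkout", "-b", branch, "origin/master"],
        ["git", "am", mbox_path],          # apply the patch series
        ["git", "push", remote, branch],   # push triggers CI on the fork
    ]

def push_series(series_id, mbox_path):
    for cmd in push_series_commands(series_id, mbox_path):
        subprocess.check_call(cmd)
```

One nice property of a per-series branch name is that a failed Travis run
can be traced back to the exact series that produced it.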
:) I did forget it, but it's true.  I'm working on some scripts to poll
the status.  That way the bot can bundle the emails for a series
together.

>> My goal is to get really early feedback on patch series.  I've sent
>> this out to the folks I know are involved in testing and test
>> discussions in the hope that we can talk about how best to get more
>> CI happening.  The open questions:
>>
>> 1. How can we notify various downstream consumers of OvS of these
>>    0-day builds?  Should we just rely on people rolling their own?
>>    Should there be a more formalized framework?  How will these
>>    other test frameworks report any kind of failures?
>
> Do you mean notify of successes or failures?  I assumed that the
> robot's email would notify us of that.

I will keep the emails.  I was thinking of some kind of public
dashboard, or even of just using the patchwork 'checks' API to report
the status of the various tests that the robot runs.

> Do you mean actually provide the builds?  I don't know a good way to
> do that.

I didn't know whether anyone would find it useful to have something
like a PPA / COPR or other kind of repo available.  That way, they
could just update their package manager configuration to point at the
appropriate place and install a pre-applied series.  But apart from the
.deb/.rpm packaging for the various distros that is already in-tree,
it's difficult to provide something.  Maybe a tarball of the sources
with the series applied, and a tarball of the binaries that were spit
out (but the configurations can be quite varied, so that probably
wouldn't make sense).

>> 2. What kinds of additional testing do we want to see the robot
>>    include?  Should the test results be made available in general on
>>    some kind of public facing site?  Should it just stay as a "bleep
>>    bloop - failure!" marker?
>
> It would be super awesome if we could run the various additional
> testsuites that we have: check-system-userspace, check-kernel, etc.
> We can't run them easily on travis because they require superuser
> privileges (and bugs sometimes crash the system).

I agree.  I'm hoping to take advantage of the poc sub-system to do
various builds, so that a superuser environment is available that's
insulated from the host system.

>> 3. What other concerns should be addressed?

Thanks for the input, Ben!

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
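[Editor's note: a follow-up sketch of the patchwork 'checks' API idea
mentioned earlier in the thread.  Reporting each robot result could be a
single POST per test.  This is only a hedged illustration - the instance
URL and token are placeholders, and the exact endpoint and field names
should be checked against the REST documentation of the patchwork
version actually deployed:]

```python
# Hedged sketch of reporting a per-test result through a patchwork
# "checks" REST endpoint.  The instance URL, token handling, and field
# details are assumptions for illustration.
import json
import urllib.request

PATCHWORK = "https://patchwork.example.org"  # placeholder instance

def make_check(state, context, description, target_url=""):
    """Build the JSON body for a check.  Patchwork check states are
    typically pending, success, warning, or fail."""
    return {
        "state": state,
        "context": context,          # which robot test produced this
        "description": description,  # short human-readable summary
        "target_url": target_url,    # link to full logs, if any
    }

def post_check(patch_id, check, token):
    """POST the check against a patch; requires an API token for a
    patchwork account with the right permissions."""
    url = "%s/api/patches/%d/checks/" % (PATCHWORK, patch_id)
    req = urllib.request.Request(
        url,
        data=json.dumps(check).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Token %s" % token},
    )
    return urllib.request.urlopen(req)
```

The "context" field is what would let a dashboard distinguish, say, the
checkpatch result from a build or testsuite result on the same patch.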