On 20 July 2015 at 14:11, Martin Packman <martin.pack...@canonical.com> wrote:
> The logs are giant, the actual failure lines tend to be
> non-informative with the real cause several screens up in the log,
> multiple tests have basically the same problems with common code...

FWIW I often delete all lines containing the string "[LOG]" before
looking at the output - it helps me to see the wood for the trees.

With regard to the more general issue, I agree that testing
independent workers is hard, and that testing all coverage paths from
outside the worker itself is probably not the best way to do things.
These days, when I write some worker-like code, I tend to do more or
less what William suggests: write an independent type (more or less a
finite state machine) with appropriate methods but no sleeping or
channel waiting, and put the core logic into that. Then, with most of
the hard-to-test states out of the way, the actual worker-related
logic layered on top does not require so many tests, because I've
gained confidence in the underlying logic.

That's somewhat harder with the uniter, because its very
state-dependent channel operations make it awkward to write a uniform
outer select loop. If I were to do it, off the top of my head, I might
consider making uniter.Mode (which BTW should not really be exported)
into an interface and each of the existing mode functions into a
separate type. The mode interface might look something like:

type mode interface {
    wantEvents() eventMask
    eventDying() mode
    eventUpgrading(curl *charm.URL) mode
    eventAction(actionId string) mode
    eventHooks(hookInfo hook.Info) mode
    // ... etc for all possible events that we might wait on
}

type eventMask uint64

const (
    eventDying eventMask = 1 << iota
    eventUpgrading
    eventAction
    // ... etc
)

Then it becomes reasonably straightforward to write internal tests for
the individual modes outside of the global context, and also to write
mock mode types to test the outer loop independently of any specific
modes (one advantage of using an interface rather than functions is
that it's possible to compare interface values).
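To make that concrete, here's a minimal sketch of what the uniform
outer loop might look like. To be clear, this is not real juju code:
the loop function, the watchers struct, the convention that a nil next
mode stops the loop, and the eventHooks constant (elided behind the
"... etc" above) are all made up for illustration.

// watchers bundles the channels that events arrive on; in the real
// uniter these would come from the various watchers.
type watchers struct {
    dying    <-chan struct{}
    upgrades <-chan *charm.URL
    actions  <-chan string
    hooks    <-chan hook.Info
}

func loop(m mode, w watchers) {
    for m != nil { // a mode returns nil to stop the loop
        mask := m.wantEvents()
        // Receiving from a nil channel blocks forever, so leaving a
        // local channel nil disables its case in the select below;
        // we enable only the channels the current mode asked for.
        var (
            dying    <-chan struct{}
            upgrades <-chan *charm.URL
            actions  <-chan string
            hooks    <-chan hook.Info
        )
        if mask&eventDying != 0 {
            dying = w.dying
        }
        if mask&eventUpgrading != 0 {
            upgrades = w.upgrades
        }
        if mask&eventAction != 0 {
            actions = w.actions
        }
        if mask&eventHooks != 0 {
            hooks = w.hooks
        }
        select {
        case <-dying:
            m = m.eventDying()
        case curl := <-upgrades:
            m = m.eventUpgrading(curl)
        case id := <-actions:
            m = m.eventAction(id)
        case info := <-hooks:
            m = m.eventHooks(info)
        }
    }
}

A mock mode for tests then just records which events it was offered
and returns a scripted next mode, with the test feeding events down
the channels by hand; and since modes are now comparable interface
values, the test can also check which mode the loop ended up in.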
Channels created within the individual modes (leadership being one
example) are one challenge to this approach, but I don't think that's
too hard to work around.

I don't *think* that this would inevitably result in the code becoming
much larger or harder to read (given suitable utility types for
individual mode implementations to embed), and much of the change
could be mechanical (disregarding those pesky tests!)

YMMV :)

cheers,
rog.

> So, I'm not sure what bugs we want to file to track the work to get
> master in a good state. As best as I can work out we have:
>
> 1) A regression on windows from the 16th, probably from this metrics
> landing: <http://reviews.vapour.ws/r/2173/>
>
> 2) An earlier regression on all platforms tracked in this bug:
> <https://bugs.launchpad.net/juju-core/+bug/1475724>
>
> 3) A more general problem with TestUniterSendMetrics.
>
> I'm not even completely sure which of these your windows test skip
> resolves, but I assume the first? Fun, huh.
>
>> [1] We should seriously start thinking how to gate landings on the
>> unit tests passing on amd64, ppc, and windows.
>
> I'd love to gate on the windows tests, but don't want to get yelled
> at. Recently, three test runs at 40 mins each have not been enough
> to get a passing suite reliably, but maybe with this latest batch of
> fixes that becomes more reasonable again.
>
> Martin

--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/juju-dev