On Mon, Oct 26, 2015 at 10:03 AM, Will Andrews <w...@firepipe.net> wrote:

> On Mon, Oct 26, 2015 at 10:37 AM, Prakash Surya
> <prakash.su...@delphix.com> wrote:
> > This is just my personal opinion, but disabling tests that fail is
> > practically equivalent to removing the test altogether. I've seen tests
> > disabled in other projects because "they always fail" and then they're
> > forgotten about, never fixed, and never re-enabled.
>
> That is certainly a fair point.  Another tack, which I prefer to
> disabling, is to mark them as "expected failure".  I don't know
> whether this is feasible in the illumos test system, hence my original
> suggestion.
>
> But, that way, the test in question still gets run, so it still has
> visibility, while at the same time is an easy indicator to someone
> testing an unrelated patch that their patch didn't cause a regression.
>

Yup, agreed.
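For what it's worth, Python's unittest has exactly this notion built in, which makes a handy illustration of the behavior we're after: the marked test still runs on every pass, but its failure is reported separately instead of counting as a regression:

```python
import unittest

class TestZfsFeature(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        # Still runs on every pass, but a failure here is reported
        # as "expected" rather than failing the run.
        self.assertEqual(1, 2)

    def test_unrelated(self):
        self.assertEqual(1, 1)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestZfsFeature))
```

With this, the run as a whole still succeeds, and an "unexpected success" (the bug got fixed but the marker wasn't removed) is flagged too.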


>
> [...]
> > FWIW, I'm slowly working on porting this script over to get it working
> > on the OpenZFS automated builds for GitHub pull requests (the builds
> > are still a work in progress, but Matt mentioned them a bit during the
> > Developer Summit last week). See:
> >
> > https://github.com/prakashsurya/openzfs-build/commit/51e6c17ae6ab5a85a0470ce68a2b1cd0de5f098c
> >
> > Once I get that integrated into the OpenZFS automated builds, it will
> > hopefully make it easier to distinguish "expected failures" from "new
> > failures introduced by a given patch"; but that will only be helpful
> > for the automated builds kicked off via pull requests.
> >
> > It might be worth pulling that script (or something equivalent) into
> > the illumos source to make the output of the ZFS test suite easier to
> > consume. It's probably possible to also have it be more dynamic, and
> > query the illumos bug tracker like we do with our internal bug tracker.
> >
> > I think there are many different ways to "fix" this (assuming we don't
> > just go and fix the bugs that cause the test failures); it's just a
> > matter of deciding which way is best, and what the requirements are.
>
> Seems to me any solution that implements 'expected failure' in some
> visible manner should be committed ASAP.  :)
>
> Wouldn't it be easier to consume if this mechanism were encoded
> directly into the 'zfstest' command that runs the test suite, rather
> than require a separate script?
>

Yes, I agree that it'd be best to wrap the "expected failure" mechanism
into the test suite itself.
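As a rough sketch of what the runner-side half could look like (all names here are hypothetical, not existing zfstest code), the summary step would classify each failure against the known-issue list instead of leaving that to a post-processing script:

```python
def summarize(results, known_failures):
    """Split failing tests into expected failures and new regressions.

    results: dict mapping test name -> True (pass) / False (fail)
    known_failures: set of test names annotated as expected to fail
    Both structures are hypothetical; the real runner would use its
    own result objects.
    """
    expected, regressions = [], []
    for test, passed in results.items():
        if passed:
            continue
        (expected if test in known_failures else regressions).append(test)
    return expected, regressions
```

Someone testing an unrelated patch would then only need to look at the "regressions" bucket being empty, which is exactly the visibility Will described.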


>
> It also seems like it would be best to encode the associated illumos
> issue number into the test source text (in a manner automatically
> consumable by the test runner), rather than require a network
> connection to confirm the 'expected failure' status from a reported
> issue.  Then, as part of closing the issue, such text can be removed
> in the same commit that fixes the test.
>

As long as there's an automated way to ensure the "text can be removed
in the same commit" isn't forgotten, I'm on board with this.
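One way to make that automatic: have the runner (or a lint run in CI) scan the test sources for the issue annotations themselves. A minimal sketch, assuming a made-up comment convention like `# EXPECTED_FAILURE: illumos-6543` near the top of each affected .ksh test:

```python
import re
from pathlib import Path

# Hypothetical tag format; the real convention would be whatever
# the illumos test suite adopts.
TAG = re.compile(r'#\s*EXPECTED_FAILURE:\s*(illumos-\d+)')

def expected_failures(test_root):
    """Map each annotated test script to its open issue number."""
    found = {}
    for path in Path(test_root).rglob('*.ksh'):
        match = TAG.search(path.read_text())
        if match:
            found[str(path)] = match.group(1)
    return found
```

The runner could consume this map offline (no network needed), and the same scan could print the full annotation list on every run, so a fix commit that forgets to drop its tag shows up as an "unexpected success" rather than silently lingering.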


>
> Thanks for your effort on this.
>

Ultimately, I think this just comes down to developers not having the time
(or not wanting to spend their time) to add this functionality; it's probably
also related to the fact that most people running the test suite are seasoned
veterans who have already come up with a wonky workaround that works
for them (e.g., the script we use at Delphix).

I'm more than willing to review any changes that make this area more
approachable and/or implement some of the above ideas. ;)


>
> --Will.
>
_______________________________________________
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer