On Thu, Jan 16, 2020 at 4:20 PM Edward K. Ream <edream...@gmail.com> wrote:
[...]

> The test_one_line_pet_peeves function contains a table of snippets. Modulo
> some housekeeping, it is:
>
> for contents in table: # table not shown. Some tests in the table fail.
>     contents, tokens, tree = self.make_data(contents, tag)
>     expected = self.blacken(contents)
>     results = self.beautify(contents, tag, tokens, tree)
>     if results != expected:
>         fails += 1
> assert fails == 0, fails
>
> *All the "one-line tests" run, even if some fail.*
>
>
> It's also convenient to "pretend" that the test passes even when it
> doesn't. That way the -x (fail fast) option doesn't bother us.  So, for
> now, the last line is:
>
> assert fails == 13, fails
>
> As I fix the unit test failures I'll decrease the error count.
>

It is possible to let pytest do the bookkeeping for you by parametrizing the
test with the @pytest.mark.parametrize decorator. See
https://docs.pytest.org/en/latest/parametrize.html.
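
For instance, the loop could be rewritten along these lines (a sketch only: I
am assuming "table" is available at module level, that make_data/blacken/
beautify can be called as plain functions, and that the test is collected by
pytest rather than as a unittest.TestCase method, since parametrize does not
work on TestCase methods):

    import pytest

    @pytest.mark.parametrize("contents", table)
    def test_one_line_pet_peeves(contents):
        # pytest generates one test per table entry and reports each
        # pass/fail separately, so no manual fail counting is needed.
        tag = 'test_one_line_pet_peeves'
        contents, tokens, tree = make_data(contents, tag)
        expected = blacken(contents)
        results = beautify(contents, tag, tokens, tree)
        assert results == expected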

Also, the @pytest.mark.xfail decorator (
https://docs.pytest.org/en/latest/skipping.html#xfail-mark-test-functions-as-expected-to-fail)
is useful for marking "a bug not yet fixed", which sounds like what you are
describing.
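
Something like this (a self-contained sketch; the beautify stub and the
reason string are just placeholders):

    import pytest

    def beautify(s):
        # Stand-in for the real beautifier; deliberately incomplete.
        return s

    @pytest.mark.xfail(reason="spacing around '=' not handled yet")
    def test_equals_spacing():
        # The test still runs, but a failure is reported as XFAIL rather
        # than FAILED, so it does not trip the -x (fail fast) option.
        # Once the bug is fixed, pytest reports it as XPASS instead.
        assert beautify("a=1") == "a = 1"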

The parametrize decorator also lets you mark individual parameter sets with
xfail, independently of each other, via pytest.param(..., marks=...).
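
That is, each table entry can carry its own mark, for example (again a
sketch with placeholder snippets and a stub beautifier):

    import pytest

    def beautify(s):
        # Stand-in for the real beautifier.
        return s.replace("( ", "(").replace(" )", ")")

    @pytest.mark.parametrize("contents, expected", [
        ("f( x )", "f(x)"),                              # expected to pass
        pytest.param("a=1", "a = 1",
                     marks=pytest.mark.xfail(reason="not fixed yet")),
    ])
    def test_pet_peeves(contents, expected):
        # The first case must pass; the second is reported as XFAIL
        # until the underlying bug is fixed.
        assert beautify(contents) == expected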
