Zac Hatfield-Dodds <zac.hatfield.do...@gmail.com> added the comment:

Thanks for your comments, Terry - I'm delighted that it's useful.  It's been a 
while since I wrote that, and it's entirely possible that it's just a typo.

Hypothesis does indeed support unittest, including multiple-failure reporting 
and so on.  You can see my unittest implementation of the tokenise tests at 
https://github.com/Zac-HD/stdlib-property-tests/blob/b5ef97f9e7fd1b0e7a028823e436f78f374cf0dc/tests/test_source_code.py#L87-L133
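
For anyone reading along who hasn't tried it, a minimal sketch of @given on a 
plain unittest.TestCase looks roughly like this (the property and names are 
illustrative, not taken from the linked tokenise tests):

import unittest

from hypothesis import given, strategies as st


class TestSortProperties(unittest.TestCase):
    # Hypothesis calls this method with many generated lists, and shrinks
    # any failing input to a minimal example before reporting it.
    @given(st.lists(st.integers()))
    def test_sort_is_idempotent(self, xs):
        self.assertEqual(sorted(sorted(xs)), sorted(xs))


if __name__ == "__main__":
    unittest.main()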

Subtests are a little tricky, because the default interaction is to store the 
subtests for *all* test cases generated by Hypothesis.  We therefore 
monkeypatch it to a no-op, but more sophisticated handling is almost certainly 
possible.  More generally, when using Hypothesis I would usually ask @given for 
a single input and let it call the test method repeatedly, which replaces the 
usual loop over a list of inputs with subTest - see the sketch below.
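
The no-op patch is roughly the following - a hedged sketch, since the real 
code in stdlib-property-tests may differ in detail:

import contextlib
import unittest

from hypothesis import given, strategies as st


class PropertyTestCase(unittest.TestCase):
    @contextlib.contextmanager
    def subTest(self, *args, **kwargs):
        # Hypothesis re-runs the test body for every generated example, so
        # recording one subtest per example would flood the report.  Just
        # run the body and let Hypothesis handle failure reporting instead.
        yield

    @given(st.integers())
    def test_addition_identity(self, n):
        with self.subTest(n=n):
            self.assertEqual(n + 0, n)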

An iterator interface is not available because in general (see e.g. 
`hypothesis.strategies.data()` or stateful testing) it is not possible to 
separate data generation from test execution, and also because Hypothesis uses 
feedback from previous inputs in deciding what to generate.  Instead of "for 
testcase in testcases: with subTest: ...", I'd write "@given(testcase=...) def 
test_foo(self, testcase): ...".
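
Concretely, that translation looks something like this (the test bodies are 
just illustrative):

import unittest

from hypothesis import given, strategies as st


class TestStringReversal(unittest.TestCase):
    # Classic unittest style: an explicit loop over a fixed list of inputs,
    # with subTest so that every failing case is reported.
    def test_reverse_round_trip_subtests(self):
        for testcase in ["", "a", "abc", "hello world"]:
            with self.subTest(testcase=testcase):
                self.assertEqual(testcase[::-1][::-1], testcase)

    # Hypothesis style: ask @given for a single input; Hypothesis supplies
    # the loop, plus shrinking and feedback-driven generation.
    @given(testcase=st.text())
    def test_reverse_round_trip_given(self, testcase):
        self.assertEqual(testcase[::-1][::-1], testcase)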


I've sent you an email about the other conversation.  (Anyone else interested 
is invited to get in touch via zhd.dev :-))

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue38953>
_______________________________________