New issue 440: parametrized fixture output captured inconsistently
https://bitbucket.org/hpk42/pytest/issue/440/parametrized-fixture-output-captured
Jurko Gospodnetić:
When using parametrized module scoped fixtures, their finalization output gets
captured inconsistently: it is not captured at all for the test run with the
initial parametrization, while each test run using a non-initial
parametrization captures the finalization output belonging to the previous
parametrization instead.
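For a quick manual look at the behavior outside the test suite, a plain test
file along these lines should do. This is a minimal sketch only; the fixture
name, file name and printed messages are illustrative:
```
# Hypothetical standalone reproduction - save as e.g. test_repro.py and run
# "pytest test_repro.py". All names and messages here are illustrative.
import pytest

@pytest.fixture(scope="module", params=["A", "B"])
def fix(request):
    param = request.param

    def fin():
        # Runs when this parametrization of the module scoped fixture is
        # torn down, which happens only once the next parametrization is
        # needed (or at the end of the session for the last one).
        print("finalized %s" % param)

    request.addfinalizer(fin)
    return param

def test_one(fix):
    print("running with %s" % fix)
    pytest.fail()  # failing makes pytest show the captured stdout sections
```
With the defect described above, the captured stdout section for the "A" test
shows no finalizer line, while the "B" test's section shows the "finalized A"
line instead.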
The following test demonstrates the issue. You can run it as part of the
internal pytest test suite:
```
import fnmatch

def test_module_fixture_finalizer_output_capture(testdir):
    """
    Parametrized module scoped fixture output should be captured consistently
    and separately for each test using that fixture.

    If the fixture code produces output, that output should be consistently
    captured for every test using any of that fixture's parametrizations -
    either it should or it should not be captured for every such test, but it
    must not be captured only for some of them.

    Also, if a fixture produces output for a specific fixture parametrization,
    that output must not be captured for tests using a different fixture
    parametrization.

    Demonstrates a defect in pytest 2.5.0 where module scoped parametrized
    fixtures do not get their finalization output captured for their initial
    parametrization, but each test run using a non-initial parametrization
    captures finalization output from the previous parametrization.
    """
    testdir.makepyfile("""\
        import pytest

        @pytest.fixture(scope="module", params=["A", "B", "C"])
        def ola(request):
            print("<KISS> %s - in the fixture" % (request.param,))
            class frufru:
                def __init__(self, param):
                    self.param = param
                def __call__(self):
                    print("<KISS> %s - in the finalizer" % (self.param,))
            request.addfinalizer(frufru(request.param))
            return request.param

        def test_me(ola):
            print("<KISS> %s - in the test" % (ola,))
            pytest.fail()
        """)
    expected_params = "ABC"
    result = testdir.runpytest("--tb=short", "-q")
    output = result.stdout.get_lines_after("*=== FAILURES ===*")

    # Collect the reported captured output lines for each test.
    in_output_block = False
    test_outputs = []
    for line in output:
        if in_output_block:
            if line.startswith("<KISS> "):
                test_outputs[-1].append(line[7:])
                # Check the expected output line formatting.
                assert line[7] in expected_params
                assert line[8:].startswith(" - ")
            else:
                in_output_block = False
        elif fnmatch.fnmatch(line, "*--- Captured stdout ---*"):
            in_output_block = True
            test_outputs.append([])
        else:
            # Sanity check - no lines other than the reported output lines
            # should match our expected output line formatting.
            assert not line.startswith("<KISS>")

    # We ran a single test for each fixture parametrization.
    assert len(test_outputs) == len(expected_params)
    content_0 = None
    for single_test_output in test_outputs:
        # All lines belonging to a single test should report using the same
        # fixture parameter.
        param = single_test_output[0][0]
        for line in single_test_output:
            assert line[0] == param
        # All tests should output the same content except for the param
        # value.
        content = [line[1:] for line in single_test_output]
        if content_0 is None:
            content_0 = content
        else:
            assert content == content_0
```
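A note on running it: the testdir fixture comes from pytest's bundled
pytester plugin, which pytest's own test suite enables for you. Outside of
it, a one-line conftest.py should make the fixture available:
```
# Hypothetical conftest.py - enables the bundled "pytester" plugin that
# provides the "testdir" fixture used by the test above.
pytest_plugins = "pytester"
```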
The test could be made shorter and use more precise assertions, but I did not
want it to assert the exact logged output - only that the output is
consistent for tests run using all the different parametrizations. A sketch
of such a tighter variant follows below.
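For what it's worth, a tighter variant might pin the exact lines using the
fnmatch_lines() helper on result.stdout. This is a sketch only; it
deliberately asserts just the fixture and test lines for one parametrization,
since where the finalizer line should land is exactly what is in question:
```
# Hypothetical tighter assertion - pins down the exact captured fixture and
# test output for the first parametrization instead of only checking that
# the output is consistent across parametrizations.
result.stdout.fnmatch_lines([
    "*--- Captured stdout ---*",
    "<KISS> A - in the fixture",
    "<KISS> A - in the test",
])
```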
Hope this helps.
Best regards,
Jurko Gospodnetić