ibzib commented on a change in pull request #14437:
URL: https://github.com/apache/beam/pull/14437#discussion_r608025030
##########
File path: sdks/python/apache_beam/runners/interactive/utils_test.py
##########
@@ -182,6 +183,9 @@ def test_child_module_logger_can_override_logging_level(self, mock_emit):
@unittest.skipIf(
not ie.current_env().is_interactive_ready,
'[interactive] dependency is not installed.')
[email protected](
Review comment:
> The tests are skipped for tox environments that do not have
interactive dependencies installed, similar to [gcp], [aws], etc., because
not all test suites and their test environments have or need all the
dependencies installed. These are intentionally skipped.
Makes sense.
> My worry is that there might be some rare cases where a dependency is
not installed correctly and pytest does not skip the test but errors out.
In such cases, I would rather the test fail with a descriptive error
message. If dependencies are not installed correctly, that indicates a real
problem with either our test setup or the actual code, and we should be aware
of it. But I'm not sure what the best way is to differentiate between
intentional skips (as described above) and accidental skips.
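One rough sketch, not part of this PR (the REQUIRE_INTERACTIVE environment
variable and the needs_interactive helper are hypothetical): suites that
install the [interactive] extra could opt into failing loudly when the
dependencies turn out to be missing, while all other suites keep skipping.

```python
import functools
import os
import unittest

from apache_beam.runners.interactive import interactive_environment as ie


def needs_interactive(test_method):
  """Skip where [interactive] is optional; fail where it is required."""

  @functools.wraps(test_method)
  def wrapper(self, *args, **kwargs):
    if not ie.current_env().is_interactive_ready:
      # REQUIRE_INTERACTIVE is a hypothetical opt-in flag, not an existing
      # Beam setting. Suites that install the extra would set it.
      if os.environ.get('REQUIRE_INTERACTIVE'):
        # The suite claims to install the extra, so a missing dependency is
        # a real setup problem: fail with a descriptive message.
        self.fail(
            '[interactive] dependencies are required in this suite but '
            'could not be imported; check the test environment setup.')
      # Otherwise this suite intentionally omits the extra, so skipping is
      # the expected behavior.
      raise unittest.SkipTest('[interactive] dependency is not installed.')
    return test_method(self, *args, **kwargs)

  return wrapper
```

Suites that never set the variable would keep today's behavior, so only
environments that declare they install the extra would be affected.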
> I've also just mocked out ie.current_env()._is_in_notebook to gain
more stability. It seems that it is the root cause of the flakiness.
How do you know?
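For reference, a minimal sketch (not the PR's actual change) of pinning that
flag in a test, assuming _is_in_notebook is a writable attribute on the object
returned by ie.current_env(); if it is a read-only property, it would need to
be patched on the class with a PropertyMock instead:

```python
import unittest
from unittest import mock

from apache_beam.runners.interactive import interactive_environment as ie


class NotebookDetectionTest(unittest.TestCase):
  def test_runs_as_if_in_notebook(self):
    # Pin the notebook-detection flag so the test no longer depends on
    # whether the suite itself happens to run inside IPython/Jupyter.
    with mock.patch.object(ie.current_env(), '_is_in_notebook', True):
      self.assertTrue(ie.current_env()._is_in_notebook)
```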
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]