[ https://issues.apache.org/jira/browse/MAPREDUCE-7359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yang Chen updated MAPREDUCE-7359:
---------------------------------

    Description:

h2. What is the purpose of this change

This PR fixes 2 non-idempotent flaky tests:
{code:java}
org.apache.hadoop.mapred.TestOldCombinerGrouping.testCombiner
org.apache.hadoop.mapreduce.TestNewCombinerGrouping.testCombiner{code}

h2. Why the tests failed

The test {{testCombiner}} writes to the directory {{TEST_ROOT_DIR}}, whose path is built from a UUID. Because the path is kept in a static field, the UUID is generated only once per JVM, so two runs of the test in the same JVM target the same path. The {{TEST_ROOT_DIR}} created by the first run still exists when the second run starts, so the second run cannot create a directory with the same name and fails with a {{RuntimeException}}. The leftover directory is shared-state pollution; it is better to clean it up so that later runs and other tests do not fail because of state left behind by this test. (A sketch of the failing pattern is appended after this description.)

h2. Reproduce test failure

Run the test twice in the same JVM (a small driver sketch is appended below).

h2. Expected result

The test should pass both times.

h2. Actual result

The second run fails with:
{code:java}
java.lang.RuntimeException: Could not create test dir: /home/...{code}

h2. Fix

Fully delete the directory created for this test when it ends (see the cleanup sketch below).

Link to PR:
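For context, here is a minimal sketch of the failing pattern, assuming the tests keep {{TEST_ROOT_DIR}} in a static field initialized from a UUID (class and method bodies simplified; not the exact test source):
{code:java}
import java.io.File;
import java.util.UUID;

public class CombinerGroupingSketch {
  // Static field: the UUID is generated once per JVM, so every run of the
  // test inside that JVM resolves to the same directory path.
  private static final String TEST_ROOT_DIR =
      new File("build", UUID.randomUUID().toString()).getAbsolutePath();

  public void testCombiner() {
    File testRootDir = new File(TEST_ROOT_DIR);
    if (!testRootDir.mkdirs()) {
      // On the second run in the same JVM, the directory left by the first
      // run still exists, mkdirs() returns false, and the test fails here.
      throw new RuntimeException("Could not create test dir: " + TEST_ROOT_DIR);
    }
    // ... set up and run the MapReduce job under testRootDir ...
  }
}
{code}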
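To reproduce, one option is a small JUnit 4 driver that runs the test class twice inside a single JVM (a hypothetical helper, not part of the patch):
{code:java}
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class RunTestTwice {
  public static void main(String[] args) {
    for (int i = 1; i <= 2; i++) {
      // Both iterations share one JVM, so static state such as the
      // UUID-based TEST_ROOT_DIR is shared between the two runs.
      Result result = JUnitCore.runClasses(
          org.apache.hadoop.mapred.TestOldCombinerGrouping.class);
      System.out.println("Run " + i + ": failures = " + result.getFailureCount());
    }
  }
}
{code}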
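A cleanup along these lines would implement the fix, using Hadoop's {{FileUtil#fullyDelete}} in an {{@After}} method (a sketch of the intended change, not necessarily the exact patch):
{code:java}
import java.io.File;
import org.apache.hadoop.fs.FileUtil;
import org.junit.After;

public class CombinerGroupingCleanupSketch {
  private static final String TEST_ROOT_DIR =
      new File("build", java.util.UUID.randomUUID().toString()).getAbsolutePath();

  @After
  public void cleanUpTestRootDir() {
    // Recursively delete the per-test directory so a later run in the same
    // JVM (or another test) can recreate it from scratch.
    FileUtil.fullyDelete(new File(TEST_ROOT_DIR));
  }
}
{code}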
> Clean shared state pollution to avoid flaky tests in testCombiner
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-7359
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7359
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: test
>            Reporter: Yang Chen
>            Priority: Minor