Github user reggert commented on the pull request:

https://github.com/apache/spark/pull/9264#issuecomment-156769951

Unrelated to this PR (except that it may be partially responsible for the most recent test failure), there appears to be a race condition here:

```scala
if (!executionContext.isShutdown) {
  val f = Future { deleteFiles() }
  // ...
}
```

(from `FileBasedWriteAheadLog.clean` in spark-streaming)

If the `ExecutionContext` shuts down after `isShutdown` is called but before the task underlying the `Future` is enqueued on it, an exception will be thrown, which appears to be what's happening in `CommonWriteAheadLogTests.logCleanUpTest`. I'm not certain why the `ExecutionContext` would be getting shut down, but the failing test does have an awfully short timeout, and according to the stack trace the thread pool is very busy at the time of the failure.
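For illustration only (not part of the original comment or of the Spark source), here is a minimal sketch of the race being described and one defensive pattern: wrapping the `Future` construction and catching `RejectedExecutionException`, which a `ThreadPoolExecutor`-backed `ExecutionContext` is expected to throw when a task is submitted after shutdown. The names `executionContext` and `deleteFiles` follow the quoted snippet; everything else here is an assumption.

```scala
import java.util.concurrent.{Executors, RejectedExecutionException}
import scala.concurrent.{ExecutionContext, Future}

object CleanRaceSketch {
  // Hypothetical stand-ins for the cleanup machinery in FileBasedWriteAheadLog.clean.
  private val executionContext =
    ExecutionContext.fromExecutorService(Executors.newSingleThreadExecutor())

  private def deleteFiles(): Unit = println("deleting old WAL segments")

  def clean(): Unit = {
    // The executor can shut down between this isShutdown check and the moment
    // the Future's task is actually enqueued on it.
    if (!executionContext.isShutdown) {
      try {
        val f = Future { deleteFiles() }(executionContext)
        // ... optionally await f when the caller asked to wait for completion
      } catch {
        // The underlying executor rejects tasks submitted after shutdown;
        // catching the rejection turns the race into a no-op instead of an
        // exception that fails the test.
        case _: RejectedExecutionException =>
          println("execution context already shut down; skipping cleanup")
      }
    }
  }

  def main(args: Array[String]): Unit = {
    clean()
    executionContext.shutdown()
    clean() // isShutdown is now true, so nothing is submitted
  }
}
```

Catching the rejection is only one option; synchronizing shutdown with in-flight cleanup, or lengthening the test's timeout, would also address the race — which fix (if any) was adopted is not claimed here.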