Github user mallman commented on the pull request: https://github.com/apache/spark/pull/10700#issuecomment-173490509

Here are my current thoughts. Josh says this functionality is going to be removed in Spark 2.0. The bug this PR is designed to address manifests itself in Spark 1.5 in three ways that I'm aware of:

1. Misleading log messages from the Master (reported above).
2. Incomplete (aka "in progress") application event logs, which can be further divided into two scenarios:
  2.a. Incomplete uncompressed event log files. The log processor can recover these files.
  2.b. Incomplete compressed event log files. The compression output is truncated and unreadable by normal means. The history server reports a corrupted event log. I cannot definitively tie that symptom to this bug, but it agrees with my experience.

The most problematic of these is unrecoverable event logs. I've been frustrated by this before and turned off event log compression as a workaround. Since deploying a build with this patch to one of our dev clusters, I haven't seen this problem again.

I don't see a simple way to write a test to support this PR. Overall, I feel we should close this PR but keep a reference to it from Jira, with a comment that Spark 1.5 and 1.6 users can try this patch, at their own risk, to address the described symptoms if they wish. It's going into our own Spark 1.x builds.

I'll close this PR and the associated Jira issue within the next few days unless someone objects or wishes to continue the discussion. Thanks.
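The distinction between 2.a and 2.b can be demonstrated in miniature: a plain-text log truncated mid-write still has a readable prefix, while a compressed stream truncated mid-write generally fails to decompress at all. A minimal sketch using Python's gzip module (an illustration only; Spark's event logs use whatever codec is configured, e.g. lz4 or snappy, not necessarily gzip):

```python
import gzip
import zlib

# Simulated event log contents.
lines = b"SparkListenerEvent line\n" * 500

# An uncompressed log truncated mid-write is still partially readable:
# every complete line before the cut survives.
truncated_plain = lines[: len(lines) // 2]
assert truncated_plain.decode().count("\n") > 0  # recoverable prefix

# A compressed log truncated mid-write is not: the decompressor hits
# end-of-input before the end-of-stream marker and gives up.
compressed = gzip.compress(lines)
truncated_gz = compressed[: len(compressed) // 2]
try:
    gzip.decompress(truncated_gz)
    readable = True
except (EOFError, zlib.error):
    readable = False
assert not readable  # the whole file reads as corrupt
```

This is why the history server can salvage an in-progress uncompressed log but reports a truncated compressed one as corrupted.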