[ https://issues.apache.org/jira/browse/SPARK-35027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372425#comment-17372425 ]
Jack Hu commented on SPARK-35027:
---------------------------------

Of course, the "stop" in FileAppender does nothing but set a flag. The exception will be thrown in "appendStreamToFile", but the finally clause only closes the output stream (to the file) and leaves the "inputStream" open, which is the pipe's output stream.

> Close the inputStream in FileAppender when writing the logs failure
> -------------------------------------------------------------------
>
>                 Key: SPARK-35027
>                 URL: https://issues.apache.org/jira/browse/SPARK-35027
>             Project: Spark
>          Issue Type: Bug
>      Components: Spark Core
>    Affects Versions: 3.1.1
>            Reporter: Jack Hu
>            Priority: Major
>
> In a Spark cluster, the ExecutorRunner uses FileAppender to redirect the
> stdout/stderr of executors to a file. When the write fails for some reason
> (e.g. disk full), the FileAppender only closes the output stream to the
> file but leaves the pipe's stdout/stderr open, so subsequent write
> operations on the executor side may hang.
> Do we need to close the inputStream in FileAppender?

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
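
The fix described in the comment can be sketched roughly as below: a minimal copy loop whose finally block closes both the input stream (the pipe) and the output stream (the file). This is a hypothetical illustration, not Spark's actual FileAppender code; the names `StreamSketch` and `appendStream` are made up for the example:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical, simplified sketch of the proposed fix. The real class is
// org.apache.spark.util.logging.FileAppender and differs in detail.
public class StreamSketch {
    static void appendStream(InputStream in, OutputStream out) throws IOException {
        try {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n); // may throw IOException, e.g. when the disk is full
            }
        } finally {
            // Before the fix, only `out` (the file) was closed here. Closing
            // `in` (the pipe) as well releases the pipe, so the executor's
            // writes fail fast instead of blocking forever once the pipe
            // buffer fills up.
            try {
                out.close();
            } finally {
                in.close();
            }
        }
    }
}
```

If only the file stream is closed, the copy loop's reader side of the pipe stays open but is never drained, so the executor process eventually blocks on a full pipe buffer, which matches the hang described in the issue.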