GitHub user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21390#discussion_r190571272

    --- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala ---
    @@ -97,6 +99,10 @@ private[deploy] class Worker(
       private val APP_DATA_RETENTION_SECONDS = conf.getLong("spark.worker.cleanup.appDataTtl",
         7 * 24 * 3600)

    +  // Whether or not cleanup the non-shuffle files on executor death.
    +  private val CLEANUP_NON_SHUFFLE_FILES_ENABLED =
    +    conf.getBoolean("spark.storage.cleanupFilesAfterExecutorDeath", true)
    --- End diff --

    Shall we rename this config to "spark.storage.cleanupFilesAfterExecutorExit"? It seems from the code that a normal executor exit (e.g. under dynamic allocation) will also trigger the cleanup, so the current name may be a little misleading. Please correct me if I'm wrong.
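For context, a minimal sketch of the config-read pattern the diff uses. Note this is a simplified stand-in: `SimpleConf` below mimics the `getBoolean(key, default)` shape of Spark's `SparkConf` but is not the real class, and the suggested config name is the reviewer's proposal, not a released Spark setting.

```scala
// Hypothetical stand-in for SparkConf: looks a key up in a map of string
// settings, falling back to a default when the key is absent.
class SimpleConf(settings: Map[String, String]) {
  def getBoolean(key: String, defaultValue: Boolean): Boolean =
    settings.get(key).map(_.toBoolean).getOrElse(defaultValue)
}

object ConfigSketch {
  def main(args: Array[String]): Unit = {
    // Explicitly disabled by the user.
    val conf = new SimpleConf(
      Map("spark.storage.cleanupFilesAfterExecutorExit" -> "false"))

    // Mirrors the Worker field, using the rename the reviewer suggests;
    // defaults to true when the key is not set.
    val cleanupFilesEnabled =
      conf.getBoolean("spark.storage.cleanupFilesAfterExecutorExit", true)

    println(cleanupFilesEnabled)
  }
}
```

With the key unset, the default (`true`) applies; setting it to `"false"` as above disables the cleanup path.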