mridulm commented on code in PR #38983:
URL: https://github.com/apache/spark/pull/38983#discussion_r1048954394

##########
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala:
##########

```diff
@@ -535,11 +535,17 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
         }
       } catch {
         case _: NoSuchElementException =>
-          // If the file is currently not being tracked by the SHS, add an entry for it and try
-          // to parse it. This will allow the cleaner code to detect the file as stale later on
-          // if it was not possible to parse it.
+          // If the file is currently not being tracked by the SHS, check whether the log file
+          // has expired. If it has, delete it from the log dir; if not, add an entry for it and
+          // try to parse it. This will allow the cleaner code to detect the file as stale
+          // later on if it was not possible to parse it.
           try {
-            if (count < conf.get(UPDATE_BATCHSIZE)) {
+            if (conf.get(CLEANER_ENABLED) && reader.modificationTime <
+                clock.getTimeMillis() - conf.get(MAX_LOG_AGE_S) * 1000) {
+              logInfo(s"Deleting expired event log ${reader.rootPath.toString}")
+              deleteLog(fs, reader.rootPath)
+              false
+            } else if (count < conf.get(UPDATE_BATCHSIZE)) {
```

Review Comment:
   When the exception is thrown, we should also clean up state if the first read had succeeded: the exception could be thrown by either of the two reads.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
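The expiry condition the diff adds can be isolated as a small pure predicate, which makes it easy to unit-test. The sketch below is illustrative only: `ExpiryCheckSketch`, `isExpired`, and the 7-day limit are assumptions for the example, not names from the PR; in the real code the age limit comes from `conf.get(MAX_LOG_AGE_S)` and the current time from `clock.getTimeMillis()`.

```scala
// Hypothetical sketch of the age-based expiry check from the diff above.
// All names here are invented for illustration; only the comparison
// (modificationTime < now - maxAgeSeconds * 1000) mirrors the PR.
object ExpiryCheckSketch {
  /** True when the log's last modification is older than the allowed age. */
  def isExpired(modificationTimeMs: Long, nowMs: Long, maxLogAgeS: Long): Boolean =
    modificationTimeMs < nowMs - maxLogAgeS * 1000L

  def main(args: Array[String]): Unit = {
    val now = 1000000000L
    val sevenDaysS = 7L * 24 * 3600
    // A log modified 8 days ago exceeds a 7-day limit.
    assert(isExpired(now - 8L * 24 * 3600 * 1000, now, sevenDaysS))
    // A log modified 1 second ago does not.
    assert(!isExpired(now - 1000L, now, sevenDaysS))
    println("ok")
  }
}
```

Keeping the predicate free of `SparkConf` and `Clock` lookups is only a testing convenience; the PR inlines it at the call site, where the mock-friendly `Clock` already serves the same purpose.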
########## core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala: ########## @@ -535,11 +535,17 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock) } } catch { case _: NoSuchElementException => - // If the file is currently not being tracked by the SHS, add an entry for it and try - // to parse it. This will allow the cleaner code to detect the file as stale later on - // if it was not possible to parse it. + // If the file is currently not being tracked by the SHS, check whether the log file + // has expired, if expired, delete it from log dir, if not, add an entry for it and + // try to parse it. This will allow the cleaner code to detect the file as stale + // later on if it was not possible to parse it. try { - if (count < conf.get(UPDATE_BATCHSIZE)) { + if (conf.get(CLEANER_ENABLED) && reader.modificationTime < + clock.getTimeMillis() - conf.get(MAX_LOG_AGE_S) * 1000) { + logInfo(s"Deleting expired event log ${reader.rootPath.toString}") + deleteLog(fs, reader.rootPath) + false + } else if (count < conf.get(UPDATE_BATCHSIZE)) { Review Comment: When the exception is thrown, we should also cleanup state if the first read had succeeded - the exception could be thrown for either of the two reads. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org