GitHub user mridulm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21895#discussion_r207493081
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
    @@ -779,6 +808,8 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
             listing.delete(classOf[LogInfo], log.logPath)
           }
         }
    +    // Clean the blacklist from the expired entries.
    +    clearBlacklist(CLEAN_INTERVAL_S)
    --- End diff ---
    
    My only concern is that, if there happens to be a transient ACL issue when
    initially accessing the file, we will never see it in the application list,
    even after the ACL is fixed, without an SHS restart.
    Wondering if the clean interval here could be a fraction of CLEAN_INTERVAL_S,
    so that these files have a chance of making it into the app list without
    much additional overhead on the NN.
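    Something along these lines is what I have in mind - a sketch only, reusing
    the existing clearBlacklist(expiry) call from the diff above; the divisor is
    an illustrative assumption, not a tuned value:

        // Clear blacklisted (inaccessible) logs on a shorter horizon than the
        // general clean interval, so an entry blacklisted by a transient ACL
        // failure gets another chance without an SHS restart.
        // The divisor 4 is only an example.
        clearBlacklist(CLEAN_INTERVAL_S / 4)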

