[ 
https://issues.apache.org/jira/browse/SPARK-26186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcelo Vanzin resolved SPARK-26186.
------------------------------------
       Resolution: Fixed
    Fix Version/s: 2.4.1
                   3.0.0

Issue resolved by pull request 23158
[https://github.com/apache/spark/pull/23158]

> In-progress applications whose last-updated time is within the cleaning 
> interval are getting removed during log cleaning
> -----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-26186
>                 URL: https://issues.apache.org/jira/browse/SPARK-26186
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.0, 3.0.0
>            Reporter: shahid
>            Assignee: shahid
>            Priority: Major
>             Fix For: 3.0.0, 2.4.1
>
>
> In-progress applications whose last-updated time is within the cleaning 
> interval are getting deleted.
>  
> Added a UT to test the scenario.
> {code:java}
>   test("should not clean inprogress application with lastUpdated time less than the maxTime") {
>     val firstFileModifiedTime = TimeUnit.DAYS.toMillis(1)
>     val secondFileModifiedTime = TimeUnit.DAYS.toMillis(6)
>     val maxAge = TimeUnit.DAYS.toMillis(7)
>     val clock = new ManualClock(0)
>     val provider = new FsHistoryProvider(
>       createTestConf().set("spark.history.fs.cleaner.maxAge", s"${maxAge}ms"), clock)
>     val log = newLogFile("inProgressApp1", None, inProgress = true)
>     writeFile(log, true, None,
>       SparkListenerApplicationStart(
>         "inProgressApp1", Some("inProgressApp1"), 3L, "test", Some("attempt1"))
>     )
>     clock.setTime(firstFileModifiedTime)
>     provider.checkForLogs()
>     writeFile(log, true, None,
>       SparkListenerApplicationStart(
>         "inProgressApp1", Some("inProgressApp1"), 3L, "test", Some("attempt1")),
>       SparkListenerJobStart(0, 1L, Nil, null)
>     )
>     clock.setTime(secondFileModifiedTime)
>     provider.checkForLogs()
>     clock.setTime(TimeUnit.DAYS.toMillis(10))
>     writeFile(log, true, None,
>       SparkListenerApplicationStart(
>         "inProgressApp1", Some("inProgressApp1"), 3L, "test", Some("attempt1")),
>       SparkListenerJobStart(0, 1L, Nil, null),
>       SparkListenerJobEnd(0, 1L, JobSucceeded)
>     )
>     provider.checkForLogs()
>     // This should not trigger any cleanup
>     updateAndCheck(provider) { list =>
>       list.size should be(1)
>     }
>   }
> {code}
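
For illustration, the retention check at issue boils down to a single predicate: a log should qualify for cleanup only when its last update is older than spark.history.fs.cleaner.maxAge. The sketch below is a hypothetical simplification, not the actual FsHistoryProvider internals; the shouldClean helper is an assumption, and the timestamps mirror the test above.

```scala
import java.util.concurrent.TimeUnit

object RetentionSketch {
  // Hypothetical predicate (not the real FsHistoryProvider code):
  // clean a log only when its last update has aged out of the window.
  def shouldClean(lastUpdatedMs: Long, nowMs: Long, maxAgeMs: Long): Boolean =
    nowMs - lastUpdatedMs > maxAgeMs

  def main(args: Array[String]): Unit = {
    val maxAge      = TimeUnit.DAYS.toMillis(7)  // spark.history.fs.cleaner.maxAge
    val lastUpdated = TimeUnit.DAYS.toMillis(6)  // second write in the test
    val now         = TimeUnit.DAYS.toMillis(10) // clock at the final check

    // Only 4 days have passed since the last update, which is within the
    // 7-day window, so the in-progress log must be kept.
    println(shouldClean(lastUpdated, now, maxAge)) // false
  }
}
```

Under this reading, the bug was that the in-progress log was removed even though its last update fell inside the retention window.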



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
