Ngone51 commented on a change in pull request #25943: 
[WIP][SPARK-29261][SQL][CORE] Support recover live entities from KVStore for 
(SQL)AppStatusListener
URL: https://github.com/apache/spark/pull/25943#discussion_r333610531
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/status/AppStatusListener.scala
 ##########
 @@ -103,6 +104,81 @@ private[spark] class AppStatusListener(
     }
   }
 
+  // visible for tests
+  private[spark] def recoverLiveEntities(): Unit = {
+    if (!live) {
+      kvstore.view(classOf[JobDataWrapper])
+        .asScala.filter(_.info.status == JobExecutionStatus.RUNNING)
+        .map(_.toLiveJob).foreach(job => liveJobs.put(job.jobId, job))
+
+      kvstore.view(classOf[StageDataWrapper]).asScala
+        .filter { stageData =>
+          stageData.info.status == v1.StageStatus.PENDING ||
+            stageData.info.status == v1.StageStatus.ACTIVE
+        }
+        .map { stageData =>
+          val stageId = stageData.info.stageId
+          val jobs = liveJobs.values.filter(_.stageIds.contains(stageId)).toSeq
+          stageData.toLiveStage(jobs)
+        }.foreach { stage =>
+          val stageId = stage.info.stageId
+          val stageAttempt = stage.info.attemptNumber()
+          liveStages.put((stageId, stageAttempt), stage)
+
+          kvstore.view(classOf[ExecutorStageSummaryWrapper])
+            .index("stage")
+            .first(Array(stageId, stageAttempt))
+            .last(Array(stageId, stageAttempt))
+            .asScala
+            .map(_.toLiveExecutorStageSummary)
+            .foreach { esummary =>
+              stage.executorSummaries.put(esummary.executorId, esummary)
+              if (esummary.isBlacklisted) {
+                stage.blackListedExecutors += esummary.executorId
+                liveExecutors(esummary.executorId).isBlacklisted = true
+                liveExecutors(esummary.executorId).blacklistedInStages += stageId
+              }
+            }
+
+          kvstore.view(classOf[TaskDataWrapper])
+            .parent(Array(stageId, stageAttempt))
+            .index(TaskIndexNames.STATUS)
+            .first(TaskState.RUNNING.toString)
 
 Review comment:
  Ignoring `LAUNCHING` is safe, because `status` in `TaskDataWrapper` actually comes from `LiveTask.TaskInfo.status`:
   
   https://github.com/apache/spark/blob/e1ea806b3075d279b5f08a29fe4c1ad6d3c4191a/core/src/main/scala/org/apache/spark/scheduler/TaskInfo.scala#L96-L112
   
   And `TaskInfo.status` never returns `LAUNCHING`.
   
   The only other "running" status is `GET RESULT`, but that also seems impossible here: `TaskInfo` in `LiveTask` is only updated when `SparkListenerTaskStart` and `SparkListenerTaskEnd` events come, and neither of these events changes a task's status to `GET RESULT`.
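   To illustrate the argument, here is a minimal, self-contained sketch of the status derivation in the linked `TaskInfo.scala`. The class and field names (`TaskStateSnapshot`, the boolean flags) are simplified stand-ins for illustration, not the actual Spark definitions; only the branch structure is meant to mirror `TaskInfo.status`:

```scala
// Sketch of why only RUNNING (and, in theory, GET RESULT) can appear as a
// "running" status: no branch ever yields "LAUNCHING", and "GET RESULT"
// requires a flag that task start/end handling never sets.
case class TaskStateSnapshot(
    running: Boolean = false,
    gettingResult: Boolean = false,
    failed: Boolean = false,
    killed: Boolean = false,
    successful: Boolean = false) {

  // Mirrors the branch order of TaskInfo.status.
  def status: String =
    if (running) {
      if (gettingResult) "GET RESULT" else "RUNNING"
    } else if (failed) {
      "FAILED"
    } else if (killed) {
      "KILLED"
    } else if (successful) {
      "SUCCESS"
    } else {
      "UNKNOWN"
    }
}

object StatusSketch extends App {
  // A task seen via SparkListenerTaskStart is simply running.
  assert(TaskStateSnapshot(running = true).status == "RUNNING")
  // GET RESULT would require gettingResult = true, which neither the
  // TaskStart nor the TaskEnd path sets on the live entity.
  assert(TaskStateSnapshot(running = true, gettingResult = true).status == "GET RESULT")
  // A finished task reported by SparkListenerTaskEnd.
  assert(TaskStateSnapshot(successful = true).status == "SUCCESS")
}
```

   So filtering the `TaskDataWrapper` view on `TaskState.RUNNING.toString` alone covers every in-flight task this listener can have recorded.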

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
