Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r885247661


##########
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
##########
@@ -1217,6 +1289,61 @@ private[spark] class TaskSetManager(
   def executorAdded(): Unit = {
     recomputeLocality()
   }
+
+  /**
+   * A class that identifies inefficient tasks as candidates for speculation; these
+   * tasks come from the set of tasks already deemed speculatable by the previous strategy.
+   */
+  private[scheduler] class InefficientTaskCalculator {
+    var taskProgressThreshold = 0.0
+    var updateSealed = false
+    private var lastComputeMs = -1L
+
+    def maybeRecompute(nowMs: Long): Unit = {
+      if (!updateSealed && (lastComputeMs <= 0 ||
+        nowMs > lastComputeMs + speculationTaskStatsCacheInterval)) {
+        var successRecords = 0L
+        var successRunTime = 0L
+        var numSuccessTasks = 0L
+        taskInfos.values.filter(_.status == "SUCCESS").foreach { taskInfo =>
+          successRecords += taskInfo.successRecords
+          successRunTime += taskInfo.successRunTime
+          numSuccessTasks += 1
+        }

Review Comment:
   I wonder: is it possible that we could have multiple successful attempts for a single partition? It seems like we have prevented this scenario with this check:
   ```
   https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala#L784-L785
   // Check if any other attempt succeeded before this and this attempt has not been handled
   if (successful(index) && killedByOtherAttempt.contains(tid)) {
   ```
   
   And even if it's possible, I think it's still fine, since we only compute the average here.
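   
   For illustration, here is a minimal, self-contained sketch of the averaging done in the hunk above (the `AttemptStats` type and its fields are hypothetical stand-ins, not the PR's actual `TaskInfo` fields). It shows that even if one partition contributed a duplicate successful attempt, the aggregate records-per-millisecond ratio would only shift slightly:
   ```scala
   // Hypothetical stand-in for per-attempt stats; not Spark's TaskInfo.
   case class AttemptStats(records: Long, runTimeMs: Long, status: String)
   
   // Average progress rate (records per ms) over successful attempts,
   // mirroring the successRecords / successRunTime aggregation above.
   def avgProgressRate(attempts: Seq[AttemptStats]): Double = {
     val succ = attempts.filter(_.status == "SUCCESS")
     if (succ.isEmpty) 0.0
     else succ.map(_.records).sum.toDouble / math.max(succ.map(_.runTimeMs).sum, 1L)
   }
   
   val base = Seq(
     AttemptStats(1000, 100, "SUCCESS"),
     AttemptStats(1200, 120, "SUCCESS"),
     AttemptStats(800, 90, "RUNNING"))
   // A duplicate successful attempt for the same partition barely moves the average.
   val withDup = base :+ AttemptStats(1100, 115, "SUCCESS")
   println(avgProgressRate(base))    // 10.0 records/ms
   println(avgProgressRate(withDup)) // ~9.85 records/ms
   ```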



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

