Repository: spark
Updated Branches:
  refs/heads/master de4feae3c -> a2166ecdd


[SPARK-24455][CORE] fix typo in TaskSchedulerImpl comment

Change runTasks to submitTasks in TaskSchedulerImpl.scala's comment.

Author: xueyu <xu...@yidian-inc.com>
Author: Xue Yu <278006...@qq.com>

Closes #21485 from xueyumusic/fixtypo1.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a2166ecd
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a2166ecd
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a2166ecd

Branch: refs/heads/master
Commit: a2166ecddaec030f78acaa66ce660d979a35079c
Parents: de4feae
Author: xueyu <xu...@yidian-inc.com>
Authored: Mon Jun 4 08:10:49 2018 +0700
Committer: hyukjinkwon <gurwls...@apache.org>
Committed: Mon Jun 4 08:10:49 2018 +0700

----------------------------------------------------------------------
 .../scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/a2166ecd/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
index 8e97b3d..598b62f 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
@@ -42,7 +42,7 @@ import org.apache.spark.util.{AccumulatorV2, ThreadUtils, Utils}
  * up to launch speculative tasks, etc.
  *
  * Clients should first call initialize() and start(), then submit task sets through the
- * runTasks method.
+ * submitTasks method.
  *
  * THREADING: [[SchedulerBackend]]s and task-submitting clients can call this class from multiple
  * threads, so it needs locks in public API methods to maintain its state. In addition, some
@@ -62,7 +62,7 @@ private[spark] class TaskSchedulerImpl(
     this(sc, sc.conf.get(config.MAX_TASK_FAILURES))
   }
 
-  // Lazily initializing blackListTrackOpt to avoid getting empty ExecutorAllocationClient,
+  // Lazily initializing blacklistTrackerOpt to avoid getting empty ExecutorAllocationClient,
   // because ExecutorAllocationClient is created after this TaskSchedulerImpl.
   private[scheduler] lazy val blacklistTrackerOpt = maybeCreateBlacklistTracker(sc)
 
@@ -228,7 +228,7 @@ private[spark] class TaskSchedulerImpl(
         // 1. The task set manager has been created and some tasks have been scheduled.
         //    In this case, send a kill signal to the executors to kill the task and then abort
         //    the stage.
-        // 2. The task set manager has been created but no tasks has been scheduled. In this case,
+        // 2. The task set manager has been created but no tasks have been scheduled. In this case,
         //    simply abort the stage.
         tsm.runningTasksSet.foreach { tid =>
             taskIdToExecutorId.get(tid).foreach(execId =>
@@ -694,7 +694,7 @@ private[spark] class TaskSchedulerImpl(
    *
    * After stage failure and retry, there may be multiple TaskSetManagers for the stage.
    * If an earlier attempt of a stage completes a task, we should ensure that the later attempts
-   * do not also submit those same tasks.  That also means that a task completion from an  earlier
+   * do not also submit those same tasks.  That also means that a task completion from an earlier
    * attempt can lead to the entire stage getting marked as successful.
    */
   private[scheduler] def markPartitionCompletedInAllTaskSets(stageId: Int, partitionId: Int) = {

