Github user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1225#discussion_r14253406
  
    --- Diff: core/src/main/scala/org/apache/spark/TaskEndReason.scala ---
    @@ -30,27 +30,69 @@ import org.apache.spark.storage.BlockManagerId
     @DeveloperApi
     sealed trait TaskEndReason
     
    +/**
    + * :: DeveloperApi ::
    + * Task succeeded.
    + */
     @DeveloperApi
     case object Success extends TaskEndReason
     
    +/**
    + * :: DeveloperApi ::
    + * Various possible reasons why a task failed.
    + */
    +@DeveloperApi
    +sealed trait TaskFailedReason extends TaskEndReason {
    +  /** Error message displayed in the web UI. */
    +  def toErrorString: String
    +}
    +
    +/**
    + * :: DeveloperApi ::
     + * A [[org.apache.spark.scheduler.ShuffleMapTask]] that completed successfully earlier, but we
     + * lost the executor before the stage completed. This means Spark needs to reschedule the task
    + * to be re-executed on a different executor.
    + */
     @DeveloperApi
     -case object Resubmitted extends TaskEndReason // Task was finished earlier but we've now lost it
     +case object Resubmitted extends TaskFailedReason {
     +  override def toErrorString: String = "Resubmitted (resubmitted due to lost executor)"
    +}
     
    +/**
    + * :: DeveloperApi ::
     + * Task failed to fetch shuffle data from a remote node. Probably means we have lost the remote
     + * executors the task is trying to fetch from, and thus needs to rerun the previous stage.
    --- End diff --
    
    super nit: needs -> need
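
    For context, the point of the `TaskFailedReason` trait in this patch is that UI code can match on the trait and call `toErrorString` instead of enumerating every failure case. A minimal self-contained sketch (the hierarchy below mirrors the names in the diff, but the consumer `uiMessage` is a hypothetical illustration, not code from the patch):

    ```scala
    // Simplified mirror of the sealed hierarchy introduced by the diff.
    sealed trait TaskEndReason
    case object Success extends TaskEndReason

    sealed trait TaskFailedReason extends TaskEndReason {
      /** Error message displayed in the web UI. */
      def toErrorString: String
    }

    case object Resubmitted extends TaskFailedReason {
      override def toErrorString: String =
        "Resubmitted (resubmitted due to lost executor)"
    }

    // Hypothetical UI-side consumer: one case covers all failure reasons,
    // so adding a new TaskFailedReason subtype needs no change here.
    def uiMessage(reason: TaskEndReason): String = reason match {
      case Success                  => "Success"
      case failed: TaskFailedReason => failed.toErrorString
    }
    ```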

