Github user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1498#discussion_r15498524
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
    @@ -691,25 +689,41 @@ class DAGScheduler(
         }
       }
     
    -
       /** Called when stage's parents are available and we can now do its task. */
       private def submitMissingTasks(stage: Stage, jobId: Int) {
         logDebug("submitMissingTasks(" + stage + ")")
         // Get our pending tasks and remember them in our pendingTasks entry
         stage.pendingTasks.clear()
         var tasks = ArrayBuffer[Task[_]]()
    +
    +    var broadcastRddBinary: Broadcast[Array[Byte]] = null
    +    try {
    +      broadcastRddBinary = stage.rdd.createBroadcastBinary()
    +    } catch {
    +      case e: NotSerializableException =>
    +        abortStage(stage, "Task not serializable: " + e.toString)
    +        runningStages -= stage
    --- End diff --
    
    I looked at this a little more, and it doesn't make sense right now to send a SparkListenerStageCompleted event here, because the SparkListenerStageSubmitted event hasn't happened yet. I'm not sure whether skipping the event is the desired behavior (is it helpful for people to see in the listener / UI that the stage failed because it was not serializable?), but if it is, then it seems like you should just change the test to not check for the failed stage in sparkListener.failedStages.
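
    To make the ordering problem concrete, here is a minimal, self-contained sketch (stand-in event types and a simple listenerBus buffer, not Spark's actual listener API) of the submit path as written in this diff:

        import java.io.NotSerializableException
        import scala.collection.mutable

        sealed trait ListenerEvent
        case class StageSubmitted(stageId: Int) extends ListenerEvent
        case class StageCompleted(stageId: Int, failed: Boolean) extends ListenerEvent

        object EventOrderingSketch {
          val listenerBus = mutable.Buffer[ListenerEvent]()

          // Mirrors the ordering in the diff: serialization is attempted before
          // StageSubmitted is posted, so posting StageCompleted from the catch
          // block would emit a completion event for a stage that listeners never
          // saw submitted.
          def submitMissingTasks(stageId: Int, serialize: () => Array[Byte]): Unit = {
            try {
              serialize()
            } catch {
              case _: NotSerializableException =>
                // abortStage(...) happens here in the real code; a
                // StageCompleted(stageId, failed = true) posted at this point
                // would have no matching StageSubmitted.
                return
            }
            listenerBus += StageSubmitted(stageId)
            // ... create tasks and hand them off to the task scheduler ...
          }
        }

    If showing the failed stage in the listener / UI is worth keeping, the alternative would be to post StageSubmitted before attempting serialization, so that the completion event has a matching submission; otherwise the test should just drop the sparkListener.failedStages check.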

