Github user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1498#discussion_r15498341
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
    @@ -691,25 +689,41 @@ class DAGScheduler(
         }
       }
     
    -
       /** Called when stage's parents are available and we can now do its task. */
       private def submitMissingTasks(stage: Stage, jobId: Int) {
         logDebug("submitMissingTasks(" + stage + ")")
         // Get our pending tasks and remember them in our pendingTasks entry
         stage.pendingTasks.clear()
         var tasks = ArrayBuffer[Task[_]]()
    +
    +    var broadcastRddBinary: Broadcast[Array[Byte]] = null
    +    try {
    +      broadcastRddBinary = stage.rdd.createBroadcastBinary()
    +    } catch {
    +      case e: NotSerializableException =>
    +        abortStage(stage, "Task not serializable: " + e.toString)
    +        runningStages -= stage
    --- End diff --
    
    As the code currently stands, you don't need to remove the stage from
    runningStages here, because it doesn't get added until below (line 738).
    BUT the test failure you're seeing happens precisely because the stage is
    not yet in runningStages: on line 1049, we only post the
    SparkListenerStageCompleted event if the stage was in runningStages. Since
    the stage hasn't been added to runningStages yet when you call abortStage,
    the listener event is never posted. You'll have to poke around to figure
    out the right way to fix this; I'm not sure whether it makes sense to just
    delete the check on line 1049 or if that check is necessary for other
    reasons.
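
    To make the ordering concrete, here is a minimal, self-contained Scala
    sketch of the bookkeeping described above. This is not the actual
    DAGScheduler code: the names Stage, runningStages, and
    markStageAsFinished mirror the real ones, but the bodies are simplified
    stand-ins.

        import scala.collection.mutable

        case class Stage(id: Int)

        object SchedulerSketch {
          private val runningStages = mutable.HashSet[Stage]()

          // Simplified analogue of the check around line 1049: the completion
          // event is only posted for stages still tracked in runningStages.
          def markStageAsFinished(stage: Stage): Unit = {
            if (runningStages.contains(stage)) {
              println(s"post SparkListenerStageCompleted for stage ${stage.id}")
              runningStages -= stage
            } else {
              // The failing test hits this branch: the stage was aborted before
              // it was ever added to runningStages, so no event is posted.
              println(s"stage ${stage.id} not in runningStages; event dropped")
            }
          }

          def main(args: Array[String]): Unit = {
            val stage = Stage(0)
            // submitMissingTasks aborts here, BEFORE `runningStages += stage`
            // (which only happens around line 738), so the listener never
            // sees a completion event for the aborted stage.
            markStageAsFinished(stage)

            // If the stage had been registered first, the event would be posted:
            runningStages += stage
            markStageAsFinished(stage)
          }
        }

    Running the sketch, the first call drops the event and the second posts
    it, which matches the failure mode in the test.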

