Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10934#discussion_r51341040
  
    --- Diff: streaming/src/test/scala/org/apache/spark/streaming/CheckpointSuite.scala ---
    @@ -821,6 +821,75 @@ class CheckpointSuite extends TestSuiteBase with DStreamCheckpointTester
         checkpointWriter.stop()
       }
     
    +  test("SPARK-6847: stack overflow when updateStateByKey is followed by a checkpointed dstream") {
    +    // In this test, there are two updateStateByKey operators. The RDD DAG is as follows:
    +    //
    +    //     batch 1            batch 2            batch 3     ...
    +    //
    +    // 1) input rdd          input rdd          input rdd
    +    //       |                  |                  |
    +    //       v                  v                  v
    +    // 2) cogroup rdd   ---> cogroup rdd   ---> cogroup rdd  ...
    +    //       |         /        |         /        |
    +    //       v        /         v        /         v
    +    // 3)  map rdd ---        map rdd ---        map rdd     ...
    +    //       |                  |                  |
    +    //       v                  v                  v
    +    // 4) cogroup rdd   ---> cogroup rdd   ---> cogroup rdd  ...
    +    //       |         /        |         /        |
    +    //       v        /         v        /         v
    +    // 5)  map rdd ---        map rdd ---        map rdd     ...
    +    //
    +    // Every batch depends on its previous batch, so "updateStateByKey" needs to checkpoint to
    +    // break the RDD chain. However, before SPARK-6847, when the state RDD (layer 5) of the second
    +    // "updateStateByKey" was checkpointed, it did not checkpoint the state RDD (layer 3) of the
    +    // first "updateStateByKey", even though "updateStateByKey" had already marked that state RDD
    +    // (layer 3) for checkpointing. Hence, the connections between layer 2 and layer 3 were never
    +    // broken, so the RDD chain grew without bound and eventually caused a StackOverflowError.
    +    //
    +    // SPARK-6847 therefore introduces "spark.checkpoint.checkpointAllMarkedAncestors" to force
    +    // checkpointing all marked RDDs in the DAG and resolve this issue. (In the example above, it
    +    // breaks the connections between layer 2 and layer 3.)
    +    ssc = new StreamingContext(master, framework, batchDuration)
    +    val batchCounter = new BatchCounter(ssc)
    +    ssc.checkpoint(checkpointDir)
    +    val inputDStream = new CheckpointInputDStream(ssc)
    +    val updateFunc = (values: Seq[Int], state: Option[Int]) => {
    +      Some(values.sum + state.getOrElse(0))
    +    }
    +    @volatile var shouldCheckpointAllMarkedRDDs = false
    +    @volatile var rddsCheckpointed = false
    +    inputDStream.map(i => (i, i))
    +      .updateStateByKey(updateFunc).checkpoint(batchDuration)
    +      .updateStateByKey(updateFunc).checkpoint(batchDuration)
    +      .foreachRDD { rdd =>
    +        /**
    +         * Find all RDDs that are marked for checkpointing in the specified RDD and its ancestors.
    +         */
    +        def findAllMarkedRDDs(rdd: RDD[_]): List[RDD[_]] = {
    +          val markedRDDs = rdd.dependencies.flatMap(dep => findAllMarkedRDDs(dep.rdd)).toList
    +          if (rdd.checkpointData.isDefined) {
    +            rdd :: markedRDDs
    +          } else {
    +            markedRDDs
    +          }
    +        }
    +
    +        shouldCheckpointAllMarkedRDDs =
    +          Option(rdd.sparkContext.getLocalProperty(RDD.CHECKPOINT_ALL_MARKED_ANCESTORS)).
    +            map(_.toBoolean).getOrElse(false)
    +
    +        val stateRDDs = findAllMarkedRDDs(rdd)
    +          rdd.count()
    +          // Check the two state RDDs are both checkpointed
    +          rddsCheckpointed = stateRDDs.size == 2 && stateRDDs.forall(_.isCheckpointed)
    +        }
    --- End diff --
    
    hm, indentation is weird here? `rdd.count()` and the lines after it look one level too deep -- shouldn't they align with `val stateRDDs`?
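    
    As an aside, to make the traversal concrete: here is a minimal self-contained sketch in plain Scala (no Spark) of the same pre-order collection that `findAllMarkedRDDs` does, using a hypothetical `Lineage` node standing in for an RDD and its dependencies, with conventional two-space indentation throughout.
    
    ```scala
    // Hypothetical stand-in for an RDD and its dependency lineage (not Spark's API).
    final case class Lineage(name: String, marked: Boolean, deps: List[Lineage])
    
    // Same shape as findAllMarkedRDDs in the test: recurse into the dependencies
    // first, then prepend this node if it is marked for checkpointing.
    def findAllMarked(node: Lineage): List[Lineage] = {
      val fromDeps = node.deps.flatMap(findAllMarked)
      if (node.marked) node :: fromDeps else fromDeps
    }
    
    // A small chain like the diagram in the test: the two "state" layers are marked.
    val layer3 = Lineage("map-1", marked = true, Nil)
    val layer4 = Lineage("cogroup-2", marked = false, List(layer3))
    val layer5 = Lineage("map-2", marked = true, List(layer4))
    
    val marked = findAllMarked(layer5).map(_.name)
    // marked == List("map-2", "map-1"): both marked state layers are found,
    // which is exactly what the stateRDDs.size == 2 assertion checks.
    ```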

