Github user dilipbiswal commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22337#discussion_r216114963
  
    --- Diff: 
external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaRDDSuite.scala
 ---
    @@ -44,20 +44,25 @@ class KafkaRDDSuite extends SparkFunSuite with 
BeforeAndAfterAll {
       private var sc: SparkContext = _
     
       override def beforeAll {
    +    super.beforeAll()
         sc = new SparkContext(sparkConf)
         kafkaTestUtils = new KafkaTestUtils
         kafkaTestUtils.setup()
       }
     
       override def afterAll {
    -    if (sc != null) {
    -      sc.stop
    -      sc = null
    -    }
    -
    -    if (kafkaTestUtils != null) {
    -      kafkaTestUtils.teardown()
    -      kafkaTestUtils = null
    +    try {
    +      if (sc != null) {
    +        sc.stop
    +        sc = null
    +      }
    +
    +      if (kafkaTestUtils != null) {
    +        kafkaTestUtils.teardown()
    +        kafkaTestUtils = null
    +      }
    +    } finally {
    +      super.afterAll()
    --- End diff --
    
    @holdenk Please correct me if I'm wrong; in my understanding, we do three distinct things in the afterAll() method:
    ```
    1) sc.stop 
    2) kafkaTestUtils.teardown() 
    3) super.afterAll()
    ```
    Currently, even if we fail in (1) or (2), we will always do (3), as it's done in the finally block. My question was: if we fail at (1), should we still do (2) and (3)? I don't know enough about the details of the test suite to say for sure whether this matters; I was just curious.
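
    To illustrate the alternative being asked about: if every step should run even when an earlier one throws, the steps can be nested in try/finally blocks rather than grouped in a single try. The sketch below is a minimal standalone demonstration with hypothetical stand-in functions (`stopSc`, `teardownKafka`, `superAfterAll` are not the real suite methods), where step (1) fails but (2) and (3) still execute in order:
    ```scala
    import scala.collection.mutable.ArrayBuffer

    // Records the order in which the teardown steps actually run.
    val ran = ArrayBuffer[String]()

    // Stand-ins for the three steps; stopSc simulates a failure in step (1).
    def stopSc(): Unit = { ran += "sc.stop"; throw new RuntimeException("stop failed") }
    def teardownKafka(): Unit = ran += "kafkaTestUtils.teardown"
    def superAfterAll(): Unit = ran += "super.afterAll"

    try {
      try {
        stopSc()
      } finally {
        // Runs even though stopSc threw.
        try {
          teardownKafka()
        } finally {
          // Runs regardless of the two steps above.
          superAfterAll()
        }
      }
    } catch {
      // The original exception from stopSc still propagates here.
      case e: RuntimeException => println(s"suppressed: ${e.getMessage}")
    }

    println(ran.mkString(", "))
    ```
    With this shape all three steps run and the first failure is still reported, at the cost of deeper nesting; whether that trade-off is worth it here is exactly the question above.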

