Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16174#discussion_r91144141
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -2350,6 +2350,16 @@ object SparkContext extends Logging {
         }
       }
     
    +  private[spark] def getActiveContext(): Option[SparkContext] = {
    +    SPARK_CONTEXT_CONSTRUCTOR_LOCK.synchronized {
    +      Option(activeContext.get())
    +    }
    +  }
    +
    +  private[spark] def stopActiveContext(): Unit = {
    --- End diff ---
    
    I don't know. I'm not a big fan of the approach you're taking here: calling 
this method before running tests. That feels like a sledgehammer for fixing flaky 
tests. I think it would be better for test code to be more careful about 
cleaning up after itself, kinda like most tests in spark-core use 
`LocalSparkContext` to more or less do that automatically, without the need for 
these methods.
    
    The `ReuseableSparkContext` trait you have is a step in that direction. If 
you make sure all the streaming tests that need it are using it, and keep this 
state within that trait, I think it would be a better change.


