Github user dongjoon-hyun commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21495#discussion_r193936286
  
    --- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala ---
    @@ -21,8 +21,22 @@ import scala.collection.mutable
     import scala.tools.nsc.Settings
     import scala.tools.nsc.interpreter._
     
    -class SparkILoopInterpreter(settings: Settings, out: JPrintWriter) extends IMain(settings, out) {
    -  self =>
    +class SparkILoopInterpreter(settings: Settings, out: JPrintWriter, initializeSpark: () => Unit)
    +    extends IMain(settings, out) { self =>
    +
    +  /**
    +   * We override `initializeSynchronous` to initialize Spark *after* `intp` is properly initialized
    +   * and *before* the REPL sees any files in the private `loadInitFiles` functions, so that
    +   * the Spark context is visible in those files.
    +   *
    +   * This is a bit of a hack, but there isn't another hook available to us at this point.
    +   *
    +   * See the discussion in the Scala community at https://github.com/scala/bug/issues/10913 for details.
    +   */
    +  override def initializeSynchronous(): Unit = {
    +    super.initializeSynchronous()
    +    initializeSpark()
    --- End diff --
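
    For context, the override in the diff is an instance of a common pattern: inject a setup callback into the constructor and invoke it from an overridden initialization hook, after the superclass has finished its own initialization. A minimal, self-contained sketch of that pattern (the class and method names below are illustrative stand-ins, not Spark's or the Scala compiler's actual API):

    ```scala
    // Base class with an initialization hook, standing in for IMain.
    class Interpreter {
      def initializeSynchronous(): Unit =
        println("base interpreter initialized")
    }

    // Subclass takes a setup callback (like `initializeSpark`) and runs it
    // only after the base initialization has completed, so anything the
    // callback creates is visible to code loaded afterwards.
    class SparkAwareInterpreter(initializeSpark: () => Unit) extends Interpreter {
      override def initializeSynchronous(): Unit = {
        super.initializeSynchronous() // base init must finish first
        initializeSpark()             // then run the extra setup
      }
    }

    object Demo extends App {
      new SparkAwareInterpreter(() => println("spark context created"))
        .initializeSynchronous()
    }
    ```

    The key design choice, as the doc comment notes, is ordering: the callback must run after `super.initializeSynchronous()` but before any init files are loaded, and overriding this hook is the only place that guarantees both.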
    
    Can we upgrade to the corresponding `jline` version together in this PR, @dbtsai?


---
