Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19643#discussion_r151307924
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -1838,12 +1852,21 @@ class SparkContext(config: SparkConf) extends Logging {
               case _ => path
             }
           }
    +
           if (key != null) {
             val timestamp = System.currentTimeMillis
             if (addedJars.putIfAbsent(key, timestamp).isEmpty) {
               logInfo(s"Added JAR $path at $key with timestamp $timestamp")
               postEnvironmentUpdate()
             }
    +
    +        if (addToCurrentClassLoader) {
    +          Utils.getContextOrSparkClassLoader match {
    +            case cl: MutableURLClassLoader => 
cl.addURL(Utils.resolveURI(path).toURL)
    --- End diff --
    
    I'm not sure whether this supports remote jars on HTTPS or Hadoop filesystems. On the executor side we handle this explicitly by downloading the jars to local disk and adding them to the classpath, but here it looks like we don't have such logic. I'm not sure how this `URLClassLoader` would communicate with Hadoop or HTTPS without certificates.
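
    For illustration, the executor-side pattern I mean is roughly the following, a hand-written sketch rather than the actual Spark code (the helper name and wiring are assumptions; HTTP(S) URLs would additionally need their own download path, since they are not Hadoop FileSystem schemes by default):

    ```scala
    import java.io.File
    import java.net.URI
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.util.MutableURLClassLoader

    // Download a remote jar (e.g. hdfs://, s3a://) to a local file first,
    // then add the local copy to the classloader. A plain URLClassLoader
    // has no stream handler for Hadoop schemes, so the transfer has to go
    // through the Hadoop FileSystem API.
    def addRemoteJarLocally(remoteJar: String, localDir: File,
                            loader: MutableURLClassLoader,
                            hadoopConf: Configuration): Unit = {
      val uri = new URI(remoteJar)
      val local = new File(localDir, new Path(uri).getName)
      val fs = FileSystem.get(uri, hadoopConf)
      fs.copyToLocalFile(new Path(uri), new Path(local.getAbsolutePath))
      loader.addURL(local.toURI.toURL)
    }
    ```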
    
    `addJar` just adds jars to the file server so that executors can fetch them from the driver and add them to their classpaths; it does not affect the driver's classpath. If we support adding jars to the current driver's classloader, how would we leverage these newly added jars?
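
    For example, since driver code compiled against the original classpath cannot reference classes from a jar added at runtime, the obvious way to use one would be reflection through the context classloader, something like this (`com.example.MyPlugin` is a made-up class name standing in for a class that only exists in the newly added jar):

    ```scala
    // Look the class up reflectively instead of via a compile-time reference.
    val loader = Thread.currentThread().getContextClassLoader
    val pluginClass = Class.forName("com.example.MyPlugin", true, loader)
    val plugin = pluginClass.getDeclaredConstructor().newInstance()
    ```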

