tgravescs commented on a change in pull request #32810:
URL: https://github.com/apache/spark/pull/32810#discussion_r650029785



##########
File path: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
##########
@@ -1464,6 +1464,14 @@ private object Client extends Logging {
     (mainUri ++ secondaryUris).toArray
   }
 
+  /**
+   * Returns a list of local, absolute URLs representing the user classpath.
+   *
+   * @param conf Spark configuration.
+   */
+  def getUserClasspathUrls(conf: SparkConf): Array[URL] =
+  getUserClasspath(conf).map(entry => new URL("file:" + new File(entry.getPath).getAbsolutePath))

Review comment:
   maybe I'm missing it but this seems wrong to me for the executors. On YARN you have to put the current directory on the classpath for jars specified without an absolute path, because they are downloaded into the distributed cache under ./
   
   https://github.com/apache/spark/pull/32810/files#diff-de769ef2877d691f385e30a3a8a8a5d151e70e4e5d2acb9f6a03cf920ccda0d4L198
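   To illustrate the concern, here is a hypothetical sketch (not the change proposed in this PR; the names `resolveUserClasspath` and `containerWorkingDir` are made up): entries without an absolute path are localized by YARN into each container's working directory, so on the executor side they would need to be resolved against that directory rather than against the client's CWD, which is what `new File(entry.getPath).getAbsolutePath` produces.

```scala
import java.io.File
import java.net.{URI, URL}

object UserClasspathSketch {
  // Hypothetical helper: resolve user classpath entries against a given
  // working directory instead of the client's CWD. Relative entries
  // (e.g. "foo.jar") are the ones YARN downloads into the distributed
  // cache under ./ in each container.
  def resolveUserClasspath(entries: Seq[URI], containerWorkingDir: File): Seq[URL] = {
    entries.map { entry =>
      val path = new File(entry.getPath)
      val resolved =
        if (path.isAbsolute) path                        // e.g. from local:/opt/libs/bar.jar
        else new File(containerWorkingDir, path.getPath) // e.g. ./foo.jar from the dist cache
      resolved.toURI.toURL
    }
  }

  def main(args: Array[String]): Unit = {
    // Illustrative only: on an executor the JVM working directory is the
    // YARN container directory, so "foo.jar" resolves underneath it while
    // the absolute entry is left untouched.
    val urls = resolveUserClasspath(
      Seq(new URI("foo.jar"), new URI("local:/opt/libs/bar.jar")),
      new File("."))
    urls.foreach(println)
  }
}
```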



