[ https://issues.apache.org/jira/browse/YARN-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17642974#comment-17642974 ]

ASF GitHub Bot commented on YARN-11386:
---------------------------------------

GauthamBanasandra opened a new pull request, #5183:
URL: https://github.com/apache/hadoop/pull/5183

   
   ### Description of PR
   While launching the AM, the client puts the classpath into the environment 
HashMap of the ContainerLaunchContext. To support cross-platform compatibility, 
the client must use some special notations. For example, <CPS> must be used as 
the classpath separator instead of ";" (Windows) or ":" (Linux). The NM is 
expected to resolve every <CPS> to the appropriate path separator for the OS 
platform prior to launching the container. In addition, the NM is also expected 
to expand all of the wildcard ( * ) occurrences in the classpath before 
launching the container, since the JVM doesn't understand the wildcard 
character.
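   
   For context, here is a minimal, hedged sketch of how a client typically 
builds such a cross-platform classpath (the class name and jar locations below 
are illustrative; `Apps.addToEnvironment` and 
`ApplicationConstants.CLASS_PATH_SEPARATOR` are the standard YARN helpers for 
this, though exact client usage may vary):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   import org.apache.hadoop.yarn.api.ApplicationConstants;
   import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;
   import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
   import org.apache.hadoop.yarn.util.Apps;
   
   public class CrossPlatformClasspathSketch {
     public static void main(String[] args) {
       Map<String, String> env = new HashMap<>();
   
       // Append entries using the <CPS> placeholder as the separator and
       // {{PWD}} as the cross-platform reference to the container's working
       // directory. Neither is resolved on the client side.
       Apps.addToEnvironment(env, Environment.CLASSPATH.name(),
           Environment.PWD.$$(), ApplicationConstants.CLASS_PATH_SEPARATOR);
       Apps.addToEnvironment(env, Environment.CLASSPATH.name(),
           Environment.PWD.$$() + "/__spark_libs__/*",
           ApplicationConstants.CLASS_PATH_SEPARATOR);
   
       // The unresolved value is what gets shipped to the NM:
       // {{PWD}}<CPS>{{PWD}}/__spark_libs__/*
       System.out.println(env.get(Environment.CLASSPATH.name()));
   
       // The environment map is then set on the ContainerLaunchContext
       // (other fields omitted here for brevity).
       ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
           null, env, null, null, null, null);
     }
   }
   ```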
   
   The issue here is that neither <CPS> resolution nor wildcard expansion is 
happening.
   
   I tested this with Spark. To run a Spark application on YARN, one needs to 
set the spark.yarn.jars config to point to the location where the Spark jars 
are present. The NM would then add these jars to the classpath before launching 
the Spark AM container. The resolution didn't happen - the generated manifest 
still contains the unresolved notations:
   
   ```
   Manifest-Version:
   1.0
   Class-Path:
   
file:/D:/tmp/hadoop-Gautham/nm-local-dir/usercache/Gautham/appcache/application_1670138957874_0001/container_1670138957874_0001_01_000001/%7B%7BPWD%7D%7D%3CCPS%3E%7B%7BPWD%7D%7D/__spark_conf__%3CCPS%3E%7B%7BPWD%7D%7D/__spark_libs__/*%3CCPS%3E/D:/projects/github/apache/spark/jars/*%3CCPS%3E%7B%7BPWD%7D%7D/__spark_conf__/__hadoop_conf__
   
file:/D:/tmp/hadoop-Gautham/nm-local-dir/usercache/Gautham/appcache/application_1670138957874_0001/container_1670138957874_0001_01_000001/__spark_conf__/
   
file:/D:/tmp/hadoop-Gautham/nm-local-dir/usercache/Gautham/appcache/application_1670138957874_0001/container_1670138957874_0001_01_000001/__app__.jar
   ```
   
   Note that `%3CCPS%3E` (the URL-encoded form of `<CPS>`) in the above code 
block should've been replaced by ";" or ":" as part of classpath resolution. 
Since this didn't happen, the NM failed to launch the Spark AM container -
   
   ```
   Could not find or load main class 
org.apache.spark.deploy.yarn.ApplicationMaster
   ```
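   
   For illustration, this is roughly the substitution that was expected to 
happen on the raw classpath value before the container launch (a simplified, 
hypothetical sketch using plain string replacement; the NM's real code path 
handles the placeholders and wildcard expansion differently):
   
   ```java
   import java.io.File;
   
   public class ExpectedResolutionSketch {
     public static void main(String[] args) {
       // Raw value shipped by the client (trimmed from the manifest above).
       String raw = "{{PWD}}<CPS>{{PWD}}/__spark_libs__/*"
           + "<CPS>D:/projects/github/apache/spark/jars/*";
   
       // Hypothetical container working directory, for illustration only.
       String pwd =
           "D:/tmp/hadoop-Gautham/nm-local-dir/.../container_..._01_000001";
   
       String resolved = raw
           .replace("{{PWD}}", pwd)               // expand the {{PWD}} placeholder
           .replace("<CPS>", File.pathSeparator); // ";" on Windows, ":" on Linux
   
       // The wildcard ( * ) entries would additionally need to be expanded to
       // the actual jar files before the classpath is handed to the JVM.
       System.out.println(resolved);
     }
   }
   ```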
   
   ### How was this patch tested?
   I tested locally by submitting a Spark job and ensured that it ran 
successfully.
   
   ### For code changes:
   
   - [x] Does the title or this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Fix issue with classpath resolution
> -----------------------------------
>
>                 Key: YARN-11386
>                 URL: https://issues.apache.org/jira/browse/YARN-11386
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 3.4.0
>         Environment: Windows 10
>            Reporter: Gautham Banasandra
>            Assignee: Gautham Banasandra
>            Priority: Critical
>
> While launching the AM, the client puts the classpath into the environment 
> HashMap of the ContainerLaunchContext. To support cross-platform 
> compatibility, the client must use some special notations. For example, 
> <CPS> must be used as the classpath separator instead of ";" (Windows) or 
> ":" (Linux). The NM is expected to resolve every <CPS> to the appropriate 
> path separator for the OS platform prior to launching the container. In 
> addition, the NM is also expected to expand all of the wildcard ( * ) 
> occurrences in the classpath before launching the container, since the JVM 
> doesn't understand the wildcard character.
> The issue here is that neither <CPS> resolution nor wildcard expansion is 
> happening.
> I tested this with Spark. To run a Spark application on YARN, one needs to 
> set the *spark.yarn.jars* config to point to the location where the Spark 
> jars are present. The NM would then add these jars to the classpath before 
> launching the Spark AM container. The resolution didn't happen - the 
> generated manifest still contains the unresolved notations:
> {code}
> Manifest-Version:
> 1.0
> Class-Path:
> file:/D:/tmp/hadoop-Gautham/nm-local-dir/usercache/Gautham/appcache/application_1670138957874_0001/container_1670138957874_0001_01_000001/%7B%7BPWD%7D%7D%3CCPS%3E%7B%7BPWD%7D%7D/__spark_conf__%3CCPS%3E%7B%7BPWD%7D%7D/__spark_libs__/*%3CCPS%3E/D:/projects/github/apache/spark/jars/*%3CCPS%3E%7B%7BPWD%7D%7D/__spark_conf__/__hadoop_conf__
> file:/D:/tmp/hadoop-Gautham/nm-local-dir/usercache/Gautham/appcache/application_1670138957874_0001/container_1670138957874_0001_01_000001/__spark_conf__/
> file:/D:/tmp/hadoop-Gautham/nm-local-dir/usercache/Gautham/appcache/application_1670138957874_0001/container_1670138957874_0001_01_000001/__app__.jar
> {code}
> Note that %3CCPS%3E (the URL-encoded form of <CPS>) in the above code block 
> should've been replaced by ";" or ":". Since this didn't happen, the NM 
> failed to launch the Spark AM container -
> {code}
> Could not find or load main class 
> org.apache.spark.deploy.yarn.ApplicationMaster
> {code}



