Github user zeodtr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/899#discussion_r13123900
  
    --- Diff: 
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
    @@ -473,15 +474,15 @@ object ClientBase {
           if (localPath != null) {
             val parentPath = new File(localPath).getParent()
             YarnSparkHadoopUtil.addToEnvironment(env, 
Environment.CLASSPATH.name, parentPath,
    -          File.pathSeparator)
    +          ApplicationConstants.CLASS_PATH_SEPARATOR)
    --- End diff --
    
    The value of ApplicationConstants.CLASS_PATH_SEPARATOR is the literal 
string "<CPS>" - neither ":" nor ";".
    The point is that, when ApplicationConstants.CLASS_PATH_SEPARATOR is 
used, the separator is chosen by the cluster (in my case, a Linux machine) 
rather than by the client (in my case, a Windows machine).
    That is, the server-side Hadoop module finds the "<CPS>" placeholder in 
the path string and replaces it with the real separator appropriate to its 
own OS.
    But the current Spark 1.0 code hardcodes the separator on the client 
side by using File.pathSeparator. As a result, in my case a Windows-style 
path string (containing ';', which confuses the Linux shell script 
interpreter) is sent to the Linux cluster.

