Daniel Darabos created SPARK-4831:
-------------------------------------

             Summary: Current directory always on classpath with spark-submit
                 Key: SPARK-4831
                 URL: https://issues.apache.org/jira/browse/SPARK-4831
             Project: Spark
          Issue Type: Bug
          Components: Deploy
    Affects Versions: 1.1.1, 1.2.0
            Reporter: Daniel Darabos
            Priority: Minor


We had a situation where we were launching an application with spark-submit, 
and a file (play.plugins) was on the classpath twice, which caused Play to 
attempt to register its plugins twice. Investigating how the file got on the 
classpath twice, we found it was present in one of our jars and also in the 
current working directory. But the copy in the current working directory 
should not be on the classpath: we never asked spark-submit to put the 
current directory there.

I think this is caused by a line in 
[compute-classpath.sh|https://github.com/apache/spark/blob/v1.2.0-rc2/bin/compute-classpath.sh#L28]:

{code}
CLASSPATH="$SPARK_CLASSPATH:$SPARK_SUBMIT_CLASSPATH"
{code}

If SPARK_CLASSPATH is empty, this produces a classpath with a leading colon, 
i.e. an empty entry, and the JVM interprets an empty classpath entry as the 
current working directory.
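A minimal shell sketch of the concatenation (the jar path is a hypothetical 
value, just for illustration):

{code}
# With SPARK_CLASSPATH unset or empty, the concatenation leaves a
# leading empty entry, which the JVM treats as the current directory.
SPARK_CLASSPATH=""
SPARK_SUBMIT_CLASSPATH="/opt/app/myapp.jar"   # hypothetical value
CLASSPATH="$SPARK_CLASSPATH:$SPARK_SUBMIT_CLASSPATH"
echo "$CLASSPATH"   # prints ":/opt/app/myapp.jar"
{code}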

We tried setting SPARK_CLASSPATH to a bogus placeholder value to avoid the 
empty entry, but that is [not 
allowed|https://github.com/apache/spark/blob/v1.2.0-rc2/core/src/main/scala/org/apache/spark/SparkConf.scala#L312].

What is the right solution? Only adding SPARK_CLASSPATH when it is non-empty? 
I think I can send a pull request for that. Thanks!
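One possible shape for that fix, as a sketch only (not the actual patch; the 
jar path is a hypothetical value):

{code}
SPARK_CLASSPATH=""
SPARK_SUBMIT_CLASSPATH="/opt/app/myapp.jar"   # hypothetical value
# Only prepend SPARK_CLASSPATH when it is non-empty, so no empty
# entry (and hence no implicit current directory) ends up on the
# classpath.
if [ -n "$SPARK_CLASSPATH" ]; then
  CLASSPATH="$SPARK_CLASSPATH:$SPARK_SUBMIT_CLASSPATH"
else
  CLASSPATH="$SPARK_SUBMIT_CLASSPATH"
fi
echo "$CLASSPATH"   # prints "/opt/app/myapp.jar"
{code}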



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
