In a bit more detail:
Upload the files with the 'hdfs dfs -copyFromLocal' command, then pass the
HDFS location of the files on the spark-submit command line.
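For example, a minimal sketch (the path /user/spark/deps below is just an
illustration; use any HDFS directory your job's user can write to):

  hdfs dfs -mkdir -p /user/spark/deps
  hdfs dfs -copyFromLocal /tmp/metrics.properties /user/spark/deps/
  hdfs dfs -copyFromLocal /tmp/datainsights-metrics-source-assembly-1.0.jar /user/spark/deps/

Then point --files and --jars at the hdfs:// URIs instead of local paths:

  spark-submit --verbose \
    --files hdfs:///user/spark/deps/metrics.properties \
    --jars hdfs:///user/spark/deps/datainsights-metrics-source-assembly-1.0.jar \
    ...

Each executor should then fetch both files from HDFS into its own working
directory before the job starts.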

Cheers

On Wed, Jan 13, 2016 at 8:05 AM, Ted Yu <yuzhih...@gmail.com> wrote:

> Can you place metrics.properties and
> datainsights-metrics-source-assembly-1.0.jar on HDFS?
>
> Cheers
>
> On Wed, Jan 13, 2016 at 8:01 AM, Byron Wang <open...@gmail.com> wrote:
>
>> I am using the following command to submit a Spark job; I hope to send the
>> jar and config files to each executor and load them there:
>>
>> spark-submit --verbose \
>>   --files /tmp/metrics.properties \
>>   --jars /tmp/datainsights-metrics-source-assembly-1.0.jar \
>>   --total-executor-cores 4 \
>>   --conf "spark.metrics.conf=metrics.properties" \
>>   --conf "spark.executor.extraClassPath=datainsights-metrics-source-assembly-1.0.jar" \
>>   --class org.microsoft.ofe.datainsights.StartServiceSignalPipeline \
>>   ./target/datainsights-1.0-jar-with-dependencies.jar
>>
>> --files and --jars are used to send files to the executors, and I found that
>> the files are sent to the executor's working directory, e.g.
>> 'worker/app-xxxxx-xxxx/0/'.
>>
>> But when the job is running, the executor always throws an exception saying
>> that it could not find the file 'metrics.properties' or the class contained
>> in 'datainsights-metrics-source-assembly-1.0.jar'. It seems that the job is
>> looking for the files in some directory other than the working directory.
>>
>> Do you know how to load files that are sent to the executors?
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-get-the-working-directory-in-executor-tp25962.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>
