Hi,
I am using spark-submit to submit my application jar to a YARN cluster. I
want to deliver a single jar file to my users, so I would like to avoid
having to also tell them, "please put that log4j.xml file somewhere and add
that path to the spark-submit command."
I thought it would be sufficient to package the log4j.xml inside my
application jar.
I think the standard practice is to include your log config file among
the files uploaded to YARN containers, and then set
-Dlog4j.configuration=yourfile.xml in
spark.{executor,driver}.extraJavaOptions?
http://spark.apache.org/docs/latest/running-on-yarn.html
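A sketch of that approach (the file name log4j.xml, the class name, and the jar name below are placeholders, not from the thread):

```shell
# Ship log4j.xml to every YARN container via --files; files listed there
# are copied into each container's working directory, so log4j can be
# pointed at the bare file name through extraJavaOptions.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --files log4j.xml \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j.xml" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.xml" \
  --class com.example.MyApp \
  my-app.jar
```

The same --conf pair works in client mode for the executors; in client mode the driver's own logging is configured locally instead.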
On Thu, Nov 20, 2014 at 9:20 AM,
How do I configure the files to be uploaded to YARN containers? So far, I’ve
only seen --conf spark.yarn.jar=hdfs://…, which allows me to specify the HDFS
location of the Spark JAR, but I’m not sure how to prescribe other files for
uploading (e.g., spark-env.sh).
mn
On Nov 20, 2014, at 4:08
Check the --files argument in the output of spark-submit -h.
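For example (file names here are illustrative), --files takes a comma-separated list, and each listed file is copied into the working directory of every YARN container:

```shell
# Distribute additional files alongside the application jar; each file
# in the --files list lands in the container's working directory on
# the driver (cluster mode) and on every executor.
spark-submit \
  --master yarn \
  --files log4j.xml,app.conf \
  --class com.example.MyApp \
  my-app.jar
```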
On Thu, Nov 20, 2014 at 7:51 AM, Matt Narrell matt.narr...@gmail.com wrote:
How do I configure the files to be uploaded to YARN containers? So far, I’ve
only seen --conf spark.yarn.jar=hdfs://…, which allows me to specify the
HDFS location of the Spark JAR.
Hi Tobias,
With the current YARN code, packaging the configuration in your app's
jar and adding the -Dlog4j.configuration=log4jConf.xml argument to
the extraJavaOptions configs should work.
That's not the recommended way to get it to work, though, since this
behavior may change in the future.
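A sketch of that jar-packaging approach (the resource name log4jConf.xml comes from the message above; the jar and class names are placeholders). It relies on log4j 1.x falling back to a classpath lookup when the value of log4j.configuration is not a valid URL, so a file bundled at the root of the app jar can be referenced by its bare name:

```shell
# Assumes log4jConf.xml was bundled at the root of my-app.jar at build
# time. log4j 1.x resolves a non-URL -Dlog4j.configuration value as a
# classpath resource, so the bundled file is picked up in each JVM.
spark-submit \
  --master yarn \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4jConf.xml" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4jConf.xml" \
  --class com.example.MyApp \
  my-app.jar
```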