Have you sent spark-env.sh to the slave nodes?
2014-03-11 6:47 GMT+08:00 Linlin linlin200...@gmail.com:
Hi,
I have a Java option (-Xss) specified in SPARK_JAVA_OPTS in
spark-env.sh. I noticed that after stopping and restarting the Spark cluster,
the master/worker daemon has the setting applied.
The properties in spark-env.sh are machine-specific, so you need to specify
them on your worker as well. I guess what you are asking about is
System.setProperty(); you can call it before you initialize your SparkContext.
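A minimal sketch of the System.setProperty() approach mentioned above, in the pre-1.0 style where Spark read its configuration from JVM system properties. The property name spark.executor.memory, its value, and the commented-out context creation are illustrative, not taken from the thread:

```java
// Sketch: set spark.* system properties before the SparkContext is created,
// so the context picks them up. The property name and value are illustrative.
public class SetSparkProps {
    public static void main(String[] args) {
        // Must run before the SparkContext is constructed.
        System.setProperty("spark.executor.memory", "2g");

        // A hypothetical context creation would follow here, e.g.:
        // JavaSparkContext sc = new JavaSparkContext("spark://master:7077", "myApp");

        System.out.println(System.getProperty("spark.executor.memory"));
    }
}
```

Setting a property after the context is initialized has no effect on that context, which is why the call order matters.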
Best Regards,
Chen Jingci
On Tue, Mar 11, 2014 at 6:47 AM, Linlin linlin200...@gmail.com wrote:
My cluster only has one node (master/worker).
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/SPARK-JAVA-OPTS-not-picked-up-by-the-application-tp2483p2506.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Thanks!
Since my worker is on the same node, and the -Xss JVM option sets the maximum
thread stack size, my worker does show this option now. Now I realize I
accidentally ran the app in local mode, as I didn't give the master URL
when initializing the SparkContext. For local mode, how do I set this option?
Thanks!
So SPARK_DAEMON_JAVA_OPTS is for the worker, and SPARK_JAVA_OPTS is for the
master? I only set SPARK_JAVA_OPTS in spark-env.sh, and the JVM option is
applied to both the master and worker daemons.
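For what it's worth, the split in the standalone-mode docs of that era is roughly: SPARK_DAEMON_JAVA_OPTS covers the standalone daemons themselves (both master and worker), while SPARK_JAVA_OPTS is picked up by applications launched from that machine. A sketch of a spark-env.sh with both (the -Xss value is illustrative):

```shell
# spark-env.sh sketch (the -Xss value is illustrative)
# Options for the standalone daemons themselves (master and worker):
export SPARK_DAEMON_JAVA_OPTS="-Xss4m"
# Options for applications (driver/executors) launched from this machine:
export SPARK_JAVA_OPTS="-Xss4m"
```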