Re: Running Spark shell on YARN

2014-08-16 Thread Eric Friedman
+1 for such a document.

Eric Friedman

On Aug 15, 2014, at 1:10 PM, Kevin Markey kevin.mar...@oracle.com wrote:
> Sandy and others: Is there a single source of Yarn/Hadoop properties that should be set or reset for running Spark on Yarn? We've sort of stumbled through one property…

Re: Running Spark shell on YARN

2014-08-16 Thread Soumya Simanta
I followed this thread http://apache-spark-user-list.1001560.n3.nabble.com/YARN-issues-with-resourcemanager-scheduler-address-td5201.html#a5258 to set SPARK_YARN_USER_ENV to HADOOP_CONF_DIR:

export SPARK_YARN_USER_ENV=CLASSPATH=$HADOOP_CONF_DIR

and used the following command to share conf…
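A rough sketch of that setup, assuming HADOOP_CONF_DIR points at the cluster's Hadoop config directory (the path below is illustrative, not taken from the thread):

    # Illustrative path; point this at your cluster's Hadoop config directory
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    # Propagate the Hadoop config location into the YARN containers' classpath
    export SPARK_YARN_USER_ENV=CLASSPATH=$HADOOP_CONF_DIR
    # Start the shell against YARN in client mode (Spark 1.x syntax)
    ./bin/spark-shell --master yarn-client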

Running Spark shell on YARN

2014-08-15 Thread Soumya Simanta
I've been using the standalone cluster all this time and it worked fine. Recently I've been using another Spark cluster that is based on YARN, and I have no experience with YARN. The YARN cluster has 10 nodes and a total memory of 480G. I'm having trouble starting the spark-shell with enough memory. I'm…

Re: Running Spark shell on YARN

2014-08-15 Thread Andrew Or
Hi Soumya, The driver's console output prints how much memory is actually granted to each executor, so from there you can verify what the executors are actually getting. You should use the '--executor-memory' argument in spark-shell. For instance, assuming each node has 48G of…
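A minimal sketch of such an invocation (the numbers are illustrative, following the 48G-per-node assumption above, and would need tuning against the cluster's actual YARN limits):

    # Request 10 executors of 40g each, leaving headroom under the
    # assumed 48G node capacity for YARN and OS overhead
    ./bin/spark-shell --master yarn-client \
      --num-executors 10 \
      --executor-memory 40g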

Re: Running Spark shell on YARN

2014-08-15 Thread Soumya Simanta
I just checked the YARN config and it looks like I need to change this value. Should it be upgraded to 48G (the max memory allocated to YARN) per node?

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>6144</value>
  <source>java.io.BufferedInputStream@2e7e1ee</source>
</property>

On Fri, Aug 15,…

Re: Running Spark shell on YARN

2014-08-15 Thread Sandy Ryza
We generally recommend setting yarn.scheduler.maximum-allocation-mb to the maximum node capacity.

-Sandy

On Fri, Aug 15, 2014 at 11:41 AM, Soumya Simanta soumya.sima...@gmail.com wrote:
> I just checked the YARN config and it looks like I need to change this value. Should it be upgraded to 48G (the…
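A hedged illustration of that recommendation in yarn-site.xml (49152 MB assumes the 48G nodes discussed earlier in the thread; substitute your actual node capacity):

    <property>
      <!-- Largest container a single request may ask for, in MB; set to the
           full node capacity so one executor can use an entire node -->
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>49152</value>
    </property>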

Re: Running Spark shell on YARN

2014-08-15 Thread Soumya Simanta
After changing the allocation I'm getting the following in my logs. No idea what this means.

14/08/15 15:44:33 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
  appMasterRpcPort: -1
  appStartTime: 1408131861372
  yarnAppState: ACCEPTED
14/08/15 15:44:34 INFO…

Re: Running Spark shell on YARN

2014-08-15 Thread Kevin Markey
Sandy and others: Is there a single source of Yarn/Hadoop properties that should be set or reset for running Spark on Yarn? We've sort of stumbled through one property after another, and (unless there's an update I've not yet seen) CDH5 Spark-related properties…