Yes, the export worked.
Thank you
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-0-on-yarn-cluster-problem-tp7560p17180.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
--
Did you `export` the environment variables? Also, are you running in client
mode or cluster mode? If it still doesn't work, you can try setting these
through the spark-submit command-line flags --num-executors, --executor-cores,
and --executor-memory.
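For reference, a full spark-submit invocation using those flags might look like
the sketch below. The application class and jar name are placeholders, and the
resource values simply mirror the spark-env.sh settings quoted later in the
thread; adjust to your job.

```
# Hypothetical spark-submit invocation (Spark 1.x syntax).
# --class com.example.MyApp and myapp.jar are placeholders; the resource
# flags override whatever spark-env.sh would otherwise provide.
./bin/spark-submit \
  --master yarn-client \
  --num-executors 5 \
  --executor-cores 1 \
  --executor-memory 3g \
  --class com.example.MyApp \
  myapp.jar
```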
2014-10-23 19:25 GMT-07:00 firemonk9 :
Hi,
I am facing the same problem. My spark-env.sh has the entries below, yet I see
each YARN container with only 1G, and YARN only spawns two workers.
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=3G
SPARK_EXECUTOR_INSTANCES=5
Please let me know if you are able to resolve this issue.
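As the reply above suggests, the usual fix is to `export` these variables:
spark-env.sh is sourced by the launch scripts, and only exported variables are
visible to the child processes they spawn. A minimal shell demonstration of the
difference (nothing Spark-specific here):

```shell
# Plain assignments stay local to the current shell; only exported
# variables reach child processes -- which is why spark-env.sh entries
# like SPARK_EXECUTOR_MEMORY need `export` in front of them.
FOO_PLAIN=1
export FOO_EXPORTED=2
sh -c 'echo "plain=${FOO_PLAIN:-unset} exported=${FOO_EXPORTED:-unset}"'
# prints: plain=unset exported=2
```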
Thank you
--
Vi
Hi Sophia, did you ever resolve this?
A common cause of YARN not giving resources to the job is that the
ResourceManager cannot communicate with the workers.
This itself has many possible causes. Do you have a full stack trace from
the logs?
Andrew
2014-06-13 0:46 GMT-07:00 Sophia :
With yarn-client mode, I submit a job from the client to YARN, and my
spark-env.sh file contains:
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_EXECUTOR_INSTANCES=4
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=2G
SPARK_YARN_APP_NAME="Spar
I built my new package like this:
"mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0-cdh5.0.1 -DskipTests clean
package"
Spark-shell is working now, but pyspark is still broken. I reported the
problem on a different thread. Please take a look if you can... desperately
need ideas.
Thanks.
-Simon
Okay, I'm guessing that our upstream "Hadoop2" package isn't new
enough to work with CDH5. We should probably clarify this on our
downloads page. Thanks for reporting this. What was the exact string you
used when building? Also, which CDH5 version are you building against?
On Mon, Jun 2, 2014 at 8:11
OK, rebuilding the assembly jar file with cdh5 works now...
Thanks..
-Simon
On Sun, Jun 1, 2014 at 9:37 PM, Xu (Simon) Chen wrote:
That helped a bit... Now I have a different failure: the startup process
is stuck in an infinite loop, repeatedly outputting the following message:
14/06/02 01:34:56 INFO cluster.YarnClientSchedulerBackend: Application
report from ASM:
appMasterRpcPort: -1
appStartTime: 1401672868277
yarnAppState: ACCEPTED
As a debugging step, does it work if you use a single resource manager
with the key "yarn.resourcemanager.address" instead of using two named
resource managers? I wonder if somehow the YARN client can't detect
this multi-master set-up.
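A sketch of what that debugging step might look like in yarn-site.xml; the
host and port are placeholders to be replaced with your actual
ResourceManager's address:

```
<!-- Debugging sketch: a single, unnamed ResourceManager address instead of
     the per-name keys (yarn.resourcemanager.address.rm1, ...);
     host/port below are placeholders. -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>controller-1.mycomp.com:23140</value>
</property>
```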
On Sun, Jun 1, 2014 at 12:49 PM, Xu (Simon) Chen wrote:
Note that everything works fine in spark 0.9, which is packaged in CDH5: I
can launch a spark-shell and interact with workers spawned on my yarn
cluster.
So in my /opt/hadoop/conf/yarn-site.xml, I have:
...
<property>
  <name>yarn.resourcemanager.address.rm1</name>
  <value>controller-1.mycomp.com:23140</value>
</property>
I would agree with your guess; it looks like the YARN library isn't
correctly finding your yarn-site.xml file. If you look in
yarn-site.xml, do you definitely see the resource manager
address/addresses?
Also, you can try running this command with
SPARK_PRINT_LAUNCH_COMMAND=1 to make sure the classpath
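For example, something like the following; the exact spark-shell flags are
illustrative:

```
# Print the exact java launch command (including the classpath) before the
# shell starts, to check that the directory containing yarn-site.xml is on it.
SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/spark-shell --master yarn-client
```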
Hi all,
I tried a couple of ways, but couldn't get it to work.
The following seems to be what the online document (
http://spark.apache.org/docs/latest/running-on-yarn.html) is suggesting:
SPARK_JAR=hdfs://test/user/spark/share/lib/spark-assembly-1.0.0-hadoop2.2.0.jar
YARN_CONF_DIR=/opt/hadoop/conf
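Putting those two settings together with a launch, one attempt might look
like the following; the paths are from the message above, but the spark-shell
invocation itself is an assumption about how the job was started:

```
# Assumed launch sequence for the settings above (Spark 1.0 era):
export SPARK_JAR=hdfs://test/user/spark/share/lib/spark-assembly-1.0.0-hadoop2.2.0.jar
export YARN_CONF_DIR=/opt/hadoop/conf
./bin/spark-shell --master yarn-client
```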