I see, I had this issue before. Are you using Java 8? The Java 8 JVM
requires more bootstrap heap memory.
Turning off the memory check is an unsafe way to avoid this issue. I think
it is better to increase the memory ratio instead, via:
yarn.nodemanager.vmem-pmem-ratio
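A sketch of the corresponding yarn-site.xml entry (the default ratio is
2.1; the value 4 below is only an example, not a recommendation):

```xml
<!-- yarn-site.xml: raise the virtual/physical memory ratio (default 2.1) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```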
I modified yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml,
and it works for both yarn-client mode and spark-shell.
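For reference, the yarn-site.xml entry that turns the check off (the
unsafe workaround described above) looks like:

```xml
<!-- yarn-site.xml: disable the virtual-memory check entirely (unsafe) -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```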
On Fri, Oct 21, 2016 at 10:59 AM, Li Li wrote:
> I found a warning in the nodemanager log. Is the virtual memory
> exceeded? How should I configure yarn to solve this problem?
>
> 2016-10-21 10:
I found a warning in the nodemanager log. Is the virtual memory exceeded?
How should I configure yarn to solve this problem?
2016-10-21 10:41:12,588 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Memory usage of ProcessTree 20299 for container-id
container_14770
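For context, that warning comes from YARN's virtual-memory check. A rough
sketch of the arithmetic the NodeManager applies (illustrative, not YARN
source code; the default yarn.nodemanager.vmem-pmem-ratio is 2.1):

```python
# Sketch of YARN's virtual-memory check (illustrative, not YARN source).
# The NodeManager kills a container when its virtual memory usage exceeds
# container_memory_mb * yarn.nodemanager.vmem-pmem-ratio (default 2.1).

def vmem_limit_mb(container_memory_mb: float, vmem_pmem_ratio: float = 2.1) -> float:
    """Virtual-memory ceiling YARN enforces for a container."""
    return container_memory_mb * vmem_pmem_ratio

def exceeds_vmem(vmem_used_mb: float, container_memory_mb: float,
                 vmem_pmem_ratio: float = 2.1) -> bool:
    """True if the container would be killed by the vmem check."""
    return vmem_used_mb > vmem_limit_mb(container_memory_mb, vmem_pmem_ratio)

# A 1024 MB container gets roughly a 2150 MB virtual-memory ceiling by
# default; Java 8's larger virtual footprint often trips this even when
# physical memory usage is fine.
print(round(vmem_limit_mb(1024), 1))  # 2150.4
print(exceeds_vmem(2300, 1024))       # True
```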
It is not that Spark has difficulty communicating with YARN; it simply
means the AM exited with FINISHED state.
I'm guessing it might be related to the container's memory constraints;
please check the yarn RM and NM logs to find out more details.
Thanks
Saisai
On Fri, Oct 21, 2016 at 8:14 AM, Xi Shen wrote:
16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
application has already exited with state FINISHED!
From this, I think Spark has difficulty communicating with YARN. You
should check your Spark log.
On Fri, Oct 21, 2016 at 8:06 AM Li Li wrote:
which log file should I check?
On Thu, Oct 20, 2016 at 10:02 PM, Saisai Shao wrote:
> Looks like ApplicationMaster is killed by SIGTERM.
>
> 16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>
> This container may be killed by the yarn NodeManager or other processes;
> you'd better check the yarn log to dig out more.
which log file should I check?
On Thu, Oct 20, 2016 at 11:32 PM, Amit Tank wrote:
> I recently started learning Spark, so I may be completely wrong here, but
> I was facing a similar problem with SparkPi on yarn. After changing to
> yarn-cluster mode it worked perfectly fine.
>
> Thank you,
> Amit
>
Yes, when I use yarn-cluster mode it works correctly. What's wrong with
yarn-client? The spark-shell also does not work, because it runs in client
mode. Any solution for this?
On Thu, Oct 20, 2016 at 11:32 PM, Amit Tank wrote:
> I recently started learning Spark, so I may be completely wrong here, but
> I was facing a similar problem with SparkPi on yarn. After changing to
> yarn-cluster mode it worked perfectly fine.
I recently started learning Spark, so I may be completely wrong here, but I
was facing a similar problem with SparkPi on yarn. After changing to
yarn-cluster mode it worked perfectly fine.
Thank you,
Amit
On Thursday, October 20, 2016, Saisai Shao wrote:
> Looks like ApplicationMaster is killed by SIGTERM.
Try setting the memory sizes explicitly. For example:
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn --deploy-mode cluster \
  --driver-memory 4g \
  --executor-memory 2g \
  --executor-cores 1 \
  ./examples/jars/spark-examples_2.11-2.0.0.2.5.2.0-47.jar
By default yarn pre
Looks like ApplicationMaster is killed by SIGTERM.
16/10/20 18:12:04 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL TERM
16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
This container may be killed by the yarn NodeManager or other processes;
you'd better check the yarn log to dig out more.
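To dig into the yarn logs, the standard YARN CLI can pull a finished
application's aggregated container logs (requires log aggregation to be
enabled; the application id below is a placeholder, substitute the real
one):

```shell
# List recent applications to find the application id.
yarn application -list -appStates ALL

# Fetch all container logs for one application, AM log included.
yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX
```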
I am setting up a small yarn/spark cluster. The hadoop/yarn version is
2.7.3, and I can run the wordcount map-reduce job correctly on yarn.
I am using spark-2.0.1-bin-hadoop2.7 with the command:
~/spark-2.0.1-bin-hadoop2.7$ ./bin/spark-submit --class
org.apache.spark.examples.SparkPi --master yarn-client
exam
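A side note on the command above: in Spark 2.0+ the `yarn-client` master
string is deprecated in favor of an explicit deploy mode. An equivalent
invocation would be (the examples jar name below is assumed from the 2.0.1
binary layout; adjust to your installation):

```shell
# Equivalent of "--master yarn-client" in Spark 2.0+:
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn --deploy-mode client \
  ./examples/jars/spark-examples_2.11-2.0.1.jar
```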