Hi Vinod,
I found the issue: the yarn.nodemanager.resource.memory-mb value was too
low. I set it back to the default value and the job runs fine now.
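For anyone hitting the same problem: the property goes in yarn-site.xml on each NodeManager. This is a sketch; the 8192 MB value below is the default shipped in yarn-default.xml (adjust to your node's actual RAM, leaving headroom for the OS and the DataNode):

```xml
<!-- yarn-site.xml on each NodeManager -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- Total memory the NodeManager may hand out to containers.
       If this is smaller than the minimum container allocation,
       the node cannot run any containers at all. -->
  <value>8192</value>
</property>
```

After changing it, restart the NodeManagers so the ResourceManager picks up the new capacity.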
Thanks!
- André
On Thu, Aug 29, 2013 at 7:36 PM, Vinod Kumar Vavilapalli vino...@apache.org wrote:
Hi,
I am in the middle of setting up a Hadoop 2 cluster. I am using the Hadoop
2.1-beta tarball.
My cluster has 1 master node running the HDFS NameNode, the ResourceManager,
and the Job History Server. Next to that I have 3 nodes acting as
DataNodes and NodeManagers.
In order to test if
This usually means there are no available resources as seen by the
ResourceManager. Do you see Active Nodes on the RM web UI first page? If not,
you'll have to check the NodeManager logs to see if they crashed for some
reason.
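Besides the web UI, the active-node count is also exposed by the ResourceManager's REST API at /ws/v1/cluster/metrics. A minimal sketch of checking it programmatically; the default web port 8088 and the helper names here are assumptions for illustration:

```python
import json
from urllib.request import urlopen


def active_nodes(metrics):
    # Extract the active-node count from a /ws/v1/cluster/metrics payload.
    return metrics["clusterMetrics"]["activeNodes"]


def fetch_active_nodes(rm_host, port=8088):
    # Query the ResourceManager REST API (hypothetical host; default web port).
    url = "http://%s:%d/ws/v1/cluster/metrics" % (rm_host, port)
    with urlopen(url) as resp:
        return active_nodes(json.load(resp))


# Example payload shaped like the RM's response, for a 3-node cluster:
sample = {"clusterMetrics": {"activeNodes": 3, "lostNodes": 0}}
print(active_nodes(sample))  # → 3
```

If activeNodes is 0, submitted jobs will sit unscheduled, which matches the symptom described above.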
Thanks,
Vinod Kumar Vavilapalli
Hortonworks Inc.