Found the cause:
https://issues.apache.org/jira/browse/MAPREDUCE-4250
https://issues.apache.org/jira/browse/HADOOP-8393
hadoop-config.sh and yarn-config.sh determine the correct home and
config dirs, but don't export them. Applying the patches to those
files fixes the MRAppMaster issue.
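Roughly, the fix amounts to exporting the variables once they are resolved. A simplified sketch of the idea (the actual patches are attached to the JIRAs above and handle more cases):

  # in hadoop-config.sh / yarn-config.sh, after the dirs are resolved
  export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-$HADOOP_PREFIX/etc/hadoop}
  export YARN_CONF_DIR=${YARN_CONF_DIR:-$HADOOP_CONF_DIR}

Without the exports, child processes such as the MRAppMaster launch without these settings in their environment.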
-Trevor
I figured out why this was hanging in the single-node case: the node
manager was failing to start because the resource manager was already
listening on the same port. I had followed Cloudera's example YARN
setup, which incorrectly (or at least unwisely) uses port 8040 for
yarn.resourcemanager.address.
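Port 8040 is what the NodeManager's localizer binds by default (yarn.nodemanager.localizer.address defaults to 0.0.0.0:8040), so the two daemons collide on a single node. Moving the RM to a free port avoids the clash; something along these lines, with the hostname as a placeholder and 8032 as the conventional RM client port:

  <!-- yarn-site.xml -->
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master.example.com:8032</value>
  </property>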
Would someone please give me some troubleshooting tips for TestDFSIO
hanging on a new 0.23.1-cdh4b2 cluster? I've tried both a 5-machine
cluster and just running everything on a single node. It's my first
time configuring YARN, so maybe I've misconfigured something. I don't
see anything suspicious.
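For reference, the benchmark is invoked along these lines (an approximation, not the exact command from the report; the tests jar name varies by distribution, and -fileSize here is in MB):

  hadoop jar hadoop-mapreduce-client-jobclient-*-tests.jar \
      TestDFSIO -write -nrFiles 10 -fileSize 1000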
Please don't cross-post; your question is about HBase, not MapReduce
itself, so I put mapreduce-user@ in BCC.
0.20.3 is, relative to the age of the project, as old as my
grandmother, so you should consider upgrading to 0.90 or 0.92, which
are both pretty stable.
I'm curious about the shell's behavior
Folks,
I thought I'd drop a note and let folks know that I've scheduled a Hadoop
YARN/MapReduce meetup during Hadoop Summit, June 2012.
The agenda is:
# YARN - State of the art
# YARN futures
- Preemption
- Resource Isolation
- Multi-resource scheduling
# Implementing new YARN frameworks
On 10.5.2012 15:29, Robert Evans wrote:
Yes, adding more resources to the scheduling request would be the
ideal solution to the problem, but sadly that is not a trivial change.
The best option would be to have custom resources.
Example: in the node config, define that this node has:
1 x GPU
1 x Gbit network
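Purely to illustrate the idea (these property names are invented for the example and are not an existing YARN feature), such a node-level declaration could look like:

  <!-- hypothetical yarn-site.xml entries; names made up for illustration -->
  <property>
    <name>yarn.nodemanager.resource.gpu.count</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.network.gbit</name>
    <value>1</value>
  </property>

The scheduler could then match container requests asking for a GPU against nodes advertising one.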
Hello,
I keep getting a memory error; these are my configurations and their
respective errors:
A few questions: why is physical memory set to 1.0 GB when I actually
have 47 GB on these machines?
Why is virtual memory also limited to 2.1 GB, even when I define the
heap to be higher?
Trying to get some clarity.
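The numbers line up with YARN's defaults: MRv2 containers default to 1024 MB (mapreduce.map.memory.mb / mapreduce.reduce.memory.mb), and the virtual-memory cap is the container size times yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1; hence 1.0 GB physical and 2.1 GB virtual regardless of the machine's 47 GB. A minimal sketch of the usual adjustment, with example values that should be tuned to the workload:

  <!-- mapred-site.xml: example values only -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx3072m</value>
  </property>

  <!-- yarn-site.xml: raise only if the job legitimately needs more vmem -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>

The heap (-Xmx) must stay below the container size, since YARN kills containers whose physical memory use exceeds mapreduce.map.memory.mb.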