Hi,
To get to the point: does the number of replicas of a block increase the
memory requirement on the NameNode, and by how much?
The calculation in this paper from Yahoo!,
https://www.usenix.org/legacy/publications/login/2010-04/openpdfs/shvachko.pdf,
assumes 200 bytes per metadata object,
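To make the question concrete, here is a rough back-of-the-envelope sketch of that kind of estimate. The 200 bytes per file/block object comes from the paper; the per-replica overhead below is an assumption for illustration only (each replica adds a reference in the NameNode's block map, but the exact byte cost depends on the Hadoop version and JVM):

```python
# Rough NameNode heap estimate. BYTES_PER_OBJECT follows the ~200-byte
# figure cited in the Shvachko paper; BYTES_PER_REPLICA is an assumed,
# illustrative cost per replica reference in the block map, not a
# figure from the paper.
BYTES_PER_OBJECT = 200
BYTES_PER_REPLICA = 16  # assumption, for illustration

def namenode_heap_bytes(num_files, num_blocks, replication):
    inodes = num_files * BYTES_PER_OBJECT       # one object per file
    blocks = num_blocks * BYTES_PER_OBJECT      # one object per block
    replicas = num_blocks * replication * BYTES_PER_REPLICA
    return inodes + blocks + replicas

# Hypothetical example: 100M files, ~1.5 blocks per file on average
files, blocks = 100_000_000, 150_000_000
for r in (1, 3):
    gib = namenode_heap_bytes(files, blocks, r) / 2**30
    print(f"replication={r}: ~{gib:.1f} GiB")
```

The point of the sketch is that the dominant terms (file and block objects) do not grow with the replication factor; only the comparatively small per-replica term does.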
Almost forgot to include the final failure:
16/03/21 18:50:44 INFO mapreduce.Job: Job job_1453754997414_337405
failed with state FAILED due to: Task failed
task_1453754997414_337405_m_07
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/03/21 18:50:44 INFO mapreduce.Job: Counters:
I'm trying to copy data between two clusters running the following Hadoop version:
hadoop version
Hadoop 2.0.0-cdh4.1.3
Subversion
file:///data/1/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.1.3/src/hadoop-common-project/hadoop-common
-r dbc7a60f9a798ef63afb7f5b723dc9c02d5321e1
Compiled by jenkins
Hi,
I was able to narrow down the issue further: the way I was setting up the
Kerberos principals was different, and I have corrected it now.
Now both the server and the client have the same UGI and are authenticated
with Kerberos (hasKerberosCredentials() returns true). But on the server
side, I