Guys,
I am learning that the NN doesn't persistently store block locations, only file
names, their permissions, and the list of blocks that make up each file. It is
said that the locations come from the DataNodes when the NN starts.
So, how does it work?
Say we have only one file, A.txt, in our HDFS, split into 4 blocks.
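The split described above can be illustrated with a toy sketch (the names and the block-to-node assignments are made up for illustration, not Hadoop's real classes): the NN persists only the file-to-blocks mapping, while the block-to-DataNode mapping starts empty and is rebuilt in memory from the block reports each DN sends at startup.

```python
# Toy model of NameNode metadata (hypothetical names, not Hadoop's real classes).
# Persistent metadata: file name -> list of block IDs (what fsimage stores).
fsimage = {"/A.txt": ["blk_1", "blk_2", "blk_3", "blk_4"]}

# Block locations are NOT persisted; this map starts empty on NN startup.
block_locations = {}  # block ID -> set of DataNode names

def receive_block_report(datanode, blocks_held):
    """Called once per DataNode at startup (and periodically afterwards)."""
    for blk in blocks_held:
        block_locations.setdefault(blk, set()).add(datanode)

# Each DN reports the blocks it actually has on its local disks.
receive_block_report("DN1", ["blk_1", "blk_2"])
receive_block_report("DN2", ["blk_2", "blk_3", "blk_4"])
receive_block_report("DN3", ["blk_1", "blk_3", "blk_4"])

# Now the NN can answer "where is /A.txt?" by joining the two maps.
for blk in fsimage["/A.txt"]:
    print(blk, sorted(block_locations.get(blk, [])))
```

Until the reports arrive, the NN knows the file exists and which blocks it has, but not where any block lives; that is why locations are said to "come from the DataNodes" at startup.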
Hi,
Am 19.11.2012 um 15:27 schrieb Kartashov, Andy andy.kartas...@mpac.ca:
I am learning that the NN doesn't persistently store block locations, only file
names, their permissions, and the list of blocks that make up each file. It is
said that the locations come from the DataNodes when the NN starts.
So, how does it work?
(another one of the replicated blocks) when and
only when the task initially running (say on DN1) failed
Thanks,
From: Kai Voigt [mailto:k...@123.org]
Sent: Monday, November 19, 2012 10:01 AM
To: user@hadoop.apache.org
Subject: Re: a question on NameNode
Hi,
Am 19.11.2012 um 16:14 schrieb Kartashov, Andy andy.kartas...@mpac.ca:
Does MapReduce run tasks on redundant blocks?
Say you have only 1 block of data replicated 3 times, one copy on each of
three DataNodes: block 1 on DN1, block 1 (replica #1) on DN2, block 1 (replica #2)
on DN3
the task initially running (say on DN1) failed
Thanks,
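The scheduling behaviour being described (one map task per input block, with a replica on another node used only if the first attempt fails) can be sketched like this; the function names and the simulated failure are made up for illustration and this is not the actual JobTracker code:

```python
# Hypothetical sketch of per-block task scheduling with retry on failure.
# Replication buys fault tolerance and locality choices, not extra tasks:
# one block of input -> one map task, regardless of replication factor.

def run_task_on(node, block):
    """Stand-in for launching a map task; returns True on success."""
    return node != "DN1"  # pretend DN1 fails, to exercise the retry path

def schedule_block(block, replica_nodes):
    """Try replica hosts in order; move on only when an attempt fails."""
    for node in replica_nodes:
        if run_task_on(node, block):
            return node  # task succeeded here; the other replicas are unused
    raise RuntimeError(f"all attempts failed for {block}")

# block 1 lives on DN1, DN2 and DN3, yet only ONE task attempt runs at a time.
winner = schedule_block("block_1", ["DN1", "DN2", "DN3"])
print("task for block_1 completed on", winner)
```

So the replicas on DN2 and DN3 are touched only because the DN1 attempt failed; had DN1 succeeded, no task would ever run against the other copies.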
Am 19.11.2012 um 15:43 schrieb Kartashov, Andy andy.kartas...@mpac.ca:
So, what
) and will attempt to start (another one of the replicated
blocks) when and only when the task initially running (say on DN1) failed
Thanks,
Can we get rid of ZK completely? Since the JNs are like a simplified version of
ZK, it should be possible to use them for election.
I think it is pretty easy:
- the JN exposes the latest heartbeat information via RPC (the active NN
heartbeats the JNs every 1 second)
- the zkfc decides whether the current active NN is
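The election idea proposed above could look roughly like this; it is a toy sketch with a made-up timeout and quorum rule (not Hadoop code): the failover controller reads the active NN's last heartbeat timestamp from each JournalNode and declares the NN dead only once a majority agrees the heartbeat is stale.

```python
# Hypothetical sketch of JN-based liveness checking for failover.
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds; an assumed threshold, not a Hadoop default

def active_nn_is_healthy(jn_last_heartbeats, now=None):
    """jn_last_heartbeats: last NN heartbeat time seen by each JN (epoch secs)."""
    now = now if now is not None else time.time()
    fresh = sum(1 for t in jn_last_heartbeats if now - t <= HEARTBEAT_TIMEOUT)
    # Require a majority quorum, mirroring the JNs' own write quorum.
    return fresh > len(jn_last_heartbeats) // 2

now = 100.0
print(active_nn_is_healthy([99.5, 99.0, 96.0], now))   # all 3 JNs saw a fresh beat
print(active_nn_is_healthy([90.0, 91.0, 99.5], now))   # only 1 of 3 is fresh
```

Using a quorum rather than any single JN's view avoids declaring a healthy NN dead just because one JN lost connectivity, which is the same reason the JNs use a write quorum for edits.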
Hi Liang,
Answers inline below.
On Sun, Oct 14, 2012 at 8:01 PM, 谢良 xieli...@xiaomi.com wrote:
Hi Todd and other HA experts,
I have two questions:
1) Why is the zkfc a separate process? I mean, what was the primary design
consideration for not integrating the zkfc features into the namenode itself
The namenode eventually came up. Here's a summary of the logging:
2011-12-17 01:37:35,648 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files = 16978046
2011-12-17 01:43:24,023 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files under construction = 1
2011-12-17
Hi,
Since you're using CDH2, I am moving this to CDH-USER. You can subscribe here:
http://groups.google.com/a/cloudera.org/group/cdh-user
BCC'd common-user
On Sat, Dec 17, 2011 at 2:01 AM, Meng Mao meng...@gmail.com wrote:
Maybe this is a bad sign -- the edits.new was created before the master
The problem with the checkpoint/2NN is that it happily runs and gives no
outward indication that it is unable to connect.
Because you have a large edits file, your startup will complete; however, with
a file that size it could take hours. It logs nothing while this is going on, but
as long as the CPU is working
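As a rough illustration of why a big edits log makes startup look hung, here is a back-of-envelope estimate; every number below is an assumption for illustration, not a measured value:

```python
# Back-of-envelope estimate of edits replay time at NN startup.
# All three inputs are assumptions for illustration, not measurements.
edits_bytes = 2 * 1024**3          # suppose a 2 GB edits file
avg_txn_bytes = 200                # assumed average size of one edit record
replay_txns_per_sec = 2_000        # assumed single-threaded replay rate

txns = edits_bytes // avg_txn_bytes
seconds = txns / replay_txns_per_sec
print(f"~{txns:,} transactions, ~{seconds / 3600:.1f} hours to replay")
```

Tens of millions of transactions replayed serially and silently is consistent with a startup that "logs nothing" for hours while the CPU stays busy.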
Our CDH2 production grid just crashed with some sort of master node failure.
When I went in there, JobTracker was missing and NameNode was up.
Trying to ls on HDFS met with no connection.
We decided to go for a restart. This is in the namenode log right now:
2011-12-17 01:37:35,568 INFO
None of the worker nodes' datanode logs have logged anything after the
initial startup announcement:
STARTUP_MSG: host = prod1-worker075/10.2.19.75
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.1+169.56
STARTUP_MSG: build = -r 8e662cb065be1c4bc61c55e6bff161e09c1d36f3;
compiled by