I get it. Thanks!
Sent from my iPhone
On 2011-9-13, at 19:20, Ted Dunning tdunn...@maprtech.com wrote:
2011/9/13 kang hua kanghua...@msn.com
Hi Master:
Can you explain this in more detail: "The only way to avoid this is to make
the data much more cacheable and to have a viable cache coherency strategy."?
kanghua
Date: Mon, 5 Sep 2011 21:52:53 -0700
Subject: Re: Regarding design of HDFS
From: dhr...@gmail.com
To: hdfs-user@hadoop.apache.org
My answers inline.
1. Why does the namenode store the blockmap (block to datanode mapping) in main
memory for all the files, even those that are not used?
The only way to avoid this is to make the data much more cacheable and to
have a viable cache coherency strategy. Cache coherency at the metadata
level is difficult. Cache coherency at the block level is also difficult
(but not as difficult) because many blocks get moved for balancing.
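To make the coherency point concrete, here is a deliberately simplified, hypothetical Java sketch (the class and method names are invented for illustration, not Hadoop's real ones): if a client caches block locations, the cached entry goes stale the moment the balancer moves a block, so every move must also invalidate caches — that invalidation protocol is the hard part.

```java
import java.util.*;

// Hypothetical sketch, not Hadoop code: an authoritative block map
// (the namenode's view) plus one client-side cache of block locations.
public class StaleCacheDemo {
    static Map<Long, List<String>> authoritative = new HashMap<>(); // namenode's view
    static Map<Long, List<String>> clientCache = new HashMap<>();   // client's cached view

    static List<String> locate(long blockId) {
        // The client consults its cache first, falling back to the namenode's map.
        return clientCache.computeIfAbsent(blockId,
                id -> new ArrayList<>(authoritative.get(id)));
    }

    static void balancerMove(long blockId, String from, String to) {
        List<String> locs = authoritative.get(blockId);
        locs.remove(from);
        locs.add(to);
        // Without this invalidation, the client keeps reading from 'from',
        // which no longer holds the block -- the coherency problem in a nutshell.
        clientCache.remove(blockId);
    }

    public static void main(String[] args) {
        authoritative.put(42L, new ArrayList<>(List.of("dn1", "dn2", "dn3")));
        System.out.println(locate(42L));  // [dn1, dn2, dn3]
        balancerMove(42L, "dn1", "dn4");
        System.out.println(locate(42L));  // [dn2, dn3, dn4]
    }
}
```

With one client and one map this looks trivial; with thousands of clients and many blocks moving for balance, keeping every cached copy coherent is exactly the difficulty the answer describes.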
thanks a lot
On Thu, Aug 25, 2011 at 1:34 PM, Sesha Kumar sesha...@gmail.com wrote:
Hi all,
I am trying to get a good understanding of how Hadoop works, for my
undergraduate project. I have the following questions/doubts :
1. Why does namenode store the blockmap (block to datanode mapping) in the
main memory for all the files, even those that are not used?
My answers inline.
1. Why does namenode store the blockmap (block to datanode mapping) in the
main memory for all the files, even those that are not used?
The block to datanode mapping is needed for two reasons: when a client wants
to read a file, the namenode has to tell the client the datanodes that hold
each of the file's blocks; and when a datanode fails, the namenode has to
know which blocks were on it so they can be re-replicated elsewhere.
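A minimal, hypothetical Java sketch of such a block-to-datanode map (invented names, nothing like Hadoop's actual classes) can show both uses side by side: the lookup a client read needs, and the scan over the whole mapping when a datanode dies. The failure-handling half is my reading of why the full mapping is kept resident; the thread's quoted answer is truncated at that point.

```java
import java.util.*;

// Illustrative sketch only: a block-to-datanode map with the two
// operations a namenode-like service needs it for.
public class BlockMapSketch {
    // blockId -> datanodes currently holding a replica of that block
    private final Map<Long, Set<String>> blockToNodes = new HashMap<>();

    void addReplica(long blockId, String datanode) {
        blockToNodes.computeIfAbsent(blockId, k -> new HashSet<>()).add(datanode);
    }

    // Use 1: a client asks where it can read a block from.
    Set<String> locationsOf(long blockId) {
        return blockToNodes.getOrDefault(blockId, Set.of());
    }

    // Use 2: a datanode dies; every block that lost a replica must be
    // found so it can be re-replicated. Note this touches the entire map,
    // including blocks of files nobody is currently reading -- which is
    // why the mapping is kept for all files, not just active ones.
    List<Long> blocksNeedingRereplication(String deadNode) {
        List<Long> affected = new ArrayList<>();
        for (Map.Entry<Long, Set<String>> e : blockToNodes.entrySet()) {
            if (e.getValue().remove(deadNode)) {
                affected.add(e.getKey());
            }
        }
        return affected;
    }
}
```

The failure scan is the key design pressure: it must cover every block in the cluster, so keeping the whole mapping in main memory keeps that scan fast.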
In order to get an answer to that sort of question, you first must
show that you did your own homework, e.g. write down what you think the
answer is based on your observations and readings; then I'm sure
someone will be happy to help you.
J-D
On Thu, Aug 25, 2011 at 1:04 AM, Sesha Kumar