Re: Block Location Information

2011-12-25 Thread Hemanth Makkapati
Hi Harsh,

Thank you!
This is exactly what I wanted.

Happy Holidays!

Regards,
Hemanth Makkapati

On Sat, Dec 24, 2011 at 10:11 PM, Harsh J ha...@cloudera.com wrote:

 You need:
 http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileSystem.html#getFileBlockLocations(org.apache.hadoop.fs.FileStatus, long, long)
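
 [Editor's note: a minimal sketch of how that call might be used. The class name and file path below are made up for illustration, and it assumes an HDFS cluster reachable through the usual configuration files.]

 import java.util.Arrays;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class ListBlockLocations {
   public static void main(String[] args) throws Exception {
     // Picks up fs.default.name / fs.defaultFS from the usual config files.
     Configuration conf = new Configuration();
     FileSystem fs = FileSystem.get(conf);

     // Hypothetical path; replace with a real HDFS file.
     Path file = new Path("/user/hemanth/sample.txt");
     FileStatus status = fs.getFileStatus(file);

     // One BlockLocation per block, covering the whole file (offset 0 to file length).
     BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

     for (BlockLocation block : blocks) {
       System.out.println("offset=" + block.getOffset()
           + " length=" + block.getLength()
           + " hosts=" + Arrays.toString(block.getHosts()));
     }
     fs.close();
   }
 }

 Running hadoop fsck <path> -files -blocks -locations from the command line reports similar block-to-datanode information.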

 On Sat, Dec 24, 2011 at 11:40 PM, Hemanth Makkapati ma...@vt.edu wrote:
  Hi,
 
  In HDFS, how do I find out what are all the blocks that belong to a
  particular file and where each one of these blocks (incl. the replicas)
 is
  located?
 
  Thank you.
 
  Regards,
  Hemanth Makkapati



 --
 Harsh J



Re: Re: DN limit

2011-12-25 Thread bourne1900
Hi,
The block replication factor is 1.
The NN web UI shows 150 million blocks.




Bourne

From: Harsh J
Sent: Saturday, December 24, 2011, 2:09 PM
To: common-user
Subject: Re: Re: DN limit
Bourne,

Do your 14 million files each take up a single block, or do they span multiple
blocks? What does the block count come up as in the live nodes list of the NN
web UI?

2011/12/23 bourne1900 bourne1...@yahoo.cn:
 Sorry, a more detailed description:
 I want to know how many files a datanode can hold, so there is only one
 datanode in my cluster.
 When the datanode holds 14 million files, the cluster stops working; the
 datanode has used all of its memory (32 GB), while the namenode's memory is fine.




 Bourne

 Sender: Adrian Liu
 Date: Friday, December 23, 2011, 10:47 AM
 To: common-user@hadoop.apache.org
 Subject: Re: DN limit
 In my understanding, the maximum number of files stored in HDFS should be
 roughly (memory of the namenode) / sizeof(inode struct). This maximum number
 of HDFS files should be no smaller than the maximum number of files a
 datanode can hold.

 Please feel free to correct me, as I'm just beginning to learn Hadoop.
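
 [Editor's note: for a rough sense of what that formula gives, a back-of-envelope sketch in Java follows. The 32 GB heap and the ~150 bytes per namespace object (file, directory, or block) are assumed example figures, not measurements from this thread.]

 public class NameNodeCapacityEstimate {
   public static void main(String[] args) {
     // Assumed example values, not measurements from this cluster.
     long namenodeHeapBytes = 32L * 1024 * 1024 * 1024; // 32 GB of NameNode heap
     long bytesPerObject = 150;  // rough rule-of-thumb cost of one namespace object

     // Each file costs at least one file object plus one block object;
     // multi-block files cost proportionally more.
     long objectsPerFile = 2;

     long maxFiles = namenodeHeapBytes / (bytesPerObject * objectsPerFile);
     System.out.println("Rough upper bound on files: " + maxFiles);
     // Prints roughly 114 million for these assumptions.
   }
 }

 Note that the datanode also keeps an in-memory map of its block replicas, so a datanode holding tens of millions of blocks runs into its own heap limit, which may be what the 32 GB datanode memory exhaustion described above reflects.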

 On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:

 Hi all,
 How many files can a datanode hold?
 In my test case, when a datanode holds 14 million files, the cluster stops
 working.




 Bourne

 Adrian Liu
 adri...@yahoo-inc.com



-- 
Harsh J