[jira] [Created] (HDFS-7596) NameNode should prune dead storages from storageMap

2015-01-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-7596: Summary: NameNode should prune dead storages from storageMap Key: HDFS-7596 URL: https://issues.apache.org/jira/browse/HDFS-7596 Project: Hadoop HDFS Issue Typ

Re: HDFS 2.6.0 upgrade ends with missing blocks

2015-01-08 Thread dlmarion
Colin, thanks for the response; understanding the details is important, and I think some general guidelines would be great. Since my initial email the system administrators have told me that the drives are not actually full; the filesystems by default keep 5% in reserve. We can lower the reserve by
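On ext3/ext4, the reserved-block percentage mentioned above can be inspected with `tune2fs -l <device>` and lowered with `tune2fs -m <pct> <device>`. As a rough sketch of how much space the default 5% reserve ties up on a large data volume (the 4 TB figure is an illustrative assumption, not a value from this thread):

```shell
# Illustrative arithmetic only: how much space the ext reserved-block
# default holds back. The volume size is a made-up example value.
total_gb=4000      # hypothetical 4 TB data volume
reserve_pct=5      # ext3/ext4 default reserve
reserved_gb=$(( total_gb * reserve_pct / 100 ))
echo "${reserved_gb} GB held in reserve at ${reserve_pct}%"
```

Lowering the reserve with `tune2fs -m 1` on a dedicated data volume would free roughly 4% of each disk; the root filesystem's reserve is there to keep the system usable when full and should generally be left alone.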

Re: HDFS 2.6.0 upgrade ends with missing blocks

2015-01-08 Thread Colin P. McCabe
Hi dlmarion, In general, any upgrade process we do will consume disk space, because it creates hardlinks and a new "current" directory, and so forth. So upgrading when disk space is very low is a bad idea in any scenario. It's certainly a good idea to free up some space before doing the upgrad
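A minimal pre-upgrade headroom check along the lines Colin describes might look like the following; the data-directory variable and the 10 GB threshold are placeholder assumptions for illustration, not values from this thread:

```shell
# Check free space on a DataNode volume before upgrading; the upgrade
# creates hardlinks and a new "current" directory, so it needs headroom
# even though block files themselves are not copied.
data_dir="${HDFS_DATA_DIR:-/}"        # placeholder: point at the real data volume
min_free_kb=$(( 10 * 1024 * 1024 ))   # example threshold: 10 GB, in KB
free_kb=$(df -k "$data_dir" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt "$min_free_kb" ]; then
  echo "WARNING: less than 10 GB free on $data_dir; free space before upgrading"
fi
```

Running a check like this on every DataNode before kicking off the upgrade avoids discovering a full volume halfway through.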

[jira] [Created] (HDFS-7595) Remove hftp

2015-01-08 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7595: Summary: Remove hftp Key: HDFS-7595 URL: https://issues.apache.org/jira/browse/HDFS-7595 Project: Hadoop HDFS Issue Type: Improvement Affects Version

Jenkins build is back to normal : Hadoop-Hdfs-trunk #1999

2015-01-08 Thread Apache Jenkins Server
See

[jira] [Created] (HDFS-7594) Add isFileClosed and IsInSafeMode APIs in o.a.h.hdfs.client.HdfsAdmin

2015-01-08 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-7594: Summary: Add isFileClosed and IsInSafeMode APIs in o.a.h.hdfs.client.HdfsAdmin Key: HDFS-7594 URL: https://issues.apache.org/jira/browse/HDFS-7594 Project:

[jira] [Created] (HDFS-7593) Data Not replicate in ssd drive

2015-01-08 Thread ViVek Raghuwanshi (JIRA)
ViVek Raghuwanshi created HDFS-7593: Summary: Data Not replicate in ssd drive Key: HDFS-7593 URL: https://issues.apache.org/jira/browse/HDFS-7593 Project: Hadoop HDFS Issue Type: Test

Re: Getting a list of all StorageInfo

2015-01-08 Thread Youssef Hatem
Hi Chris, thanks a lot for taking the time to answer my question. Skimming through BlockPlacementPolicyDefault helped me a lot; I managed to get DatanodeDescriptor(s) by using the Host2NodesMap object. The DatanodeDescriptor contains the storage info I was looking for. Thanks again for your help, I