Arpit Agarwal created HDFS-7596:
---
Summary: NameNode should prune dead storages from storageMap
Key: HDFS-7596
URL: https://issues.apache.org/jira/browse/HDFS-7596
Project: Hadoop HDFS
Issue
Hi Chris,
thanks a lot for taking the time to answer my question. Skimming through
BlockPlacementPolicyDefault helped me a lot; I managed to get
DatanodeDescriptor(s) by using Host2NodesMap object. The DatanodeDescriptor
contains storage info which I was looking for.
Thanks again for your help,
Uma Maheswara Rao G created HDFS-7594:
-
Summary: Add isFileClosed and isInSafeMode APIs in
o.a.h.hdfs.client.HdfsAdmin
Key: HDFS-7594
URL: https://issues.apache.org/jira/browse/HDFS-7594
Project:
Allen Wittenauer created HDFS-7595:
--
Summary: Remove hftp
Key: HDFS-7595
URL: https://issues.apache.org/jira/browse/HDFS-7595
Project: Hadoop HDFS
Issue Type: Improvement
Affects
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1999/changes
Colin,
Thanks for the response; understanding the details is important, and I think
some general guidelines would be great. Since my initial email, the system
administrators have told me that the drives are not actually full; the filesystems
by default keep 5% in reserve. We can lower the reserve by
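(For reference, on ext2/3/4 filesystems the root-only reserved-block percentage mentioned above can typically be lowered with tune2fs; the device name below is hypothetical, and this is a sketch rather than a recommendation for any particular value:)

```shell
# Lower the reserved-block percentage from the ext default of 5% to 1%
# on a hypothetical device; requires root.
tune2fs -m 1 /dev/sdb1

# Confirm the new setting:
tune2fs -l /dev/sdb1 | grep -i 'Reserved block count'
```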
Hi dlmarion,
In general, any upgrade process we do will consume disk space, because
it's creating hardlinks and a new current directory, and so forth.
So upgrading when disk space is very low is a bad idea in any
scenario. It's certainly a good idea to free up some space before
doing the
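(As a concrete, non-HDFS illustration of the point above: the hardlink step of an upgrade is cheap on data space, because a hardlink shares the original file's blocks rather than copying them; the space pressure comes from anything newly written into the fresh current directory. The paths below are made up for the demo:)

```shell
# Sketch: a hardlinked "block file" shares its data blocks with the original.
tmp=$(mktemp -d)
mkdir -p "$tmp/previous" "$tmp/current"

# A fake 64 KiB block file standing in for pre-upgrade data.
head -c 65536 /dev/zero > "$tmp/previous/blk_0001"

# Hardlink it into the new layout: no new data blocks are allocated.
ln "$tmp/previous/blk_0001" "$tmp/current/blk_0001"

# Same inode, link count 2 -- the data exists once on disk.
stat -c '%i %h' "$tmp/previous/blk_0001"
stat -c '%i %h' "$tmp/current/blk_0001"

rm -rf "$tmp"
```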