Hi, I would like to know how much memory our data takes on the namenode per
block, file and directory — for example, the metadata size of a file.
When I store some files in HDFS, how can I find out how much namenode
memory they take?
Are there any tools or commands to measure the memory used on the namenode?
You can manually go into the directory configured for hadoop.tmp.dir in
core-site.xml and run ls -l to see the disk usage details; it will contain
fsimage, edits, fstime and VERSION.
Or use basic commands like:
hadoop fs -du
hadoop fsck
On Wed, Apr 24, 2013 at 7:56 AM, 自己 wrote:
> Hi, I would
Every file, directory and block in HDFS is represented as an object in the
namenode's memory, and each object consumes about 150 bytes on average.
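As a rough back-of-envelope check based on the 150-bytes-per-object figure above, you can estimate namenode heap usage like this — the file, directory and block counts in the example are hypothetical, and the real footprint also depends on name lengths and replication:

```python
# Rough namenode memory estimate: every file, directory and block in HDFS
# is one object in namenode memory, at roughly 150 bytes each on average.
BYTES_PER_OBJECT = 150

def estimate_namenode_memory(num_files, num_dirs, num_blocks):
    """Return an approximate number of bytes of namenode heap consumed."""
    objects = num_files + num_dirs + num_blocks
    return objects * BYTES_PER_OBJECT

# Hypothetical cluster: 1M files, 100k directories, ~1.5 blocks per file.
files, dirs = 1_000_000, 100_000
blocks = int(files * 1.5)
approx = estimate_namenode_memory(files, dirs, blocks)
print(f"~{approx / (1024 ** 2):.0f} MiB of namenode memory")  # ~372 MiB
```

This is only an order-of-magnitude sketch; the actual per-object cost varies with JVM version and file name sizes.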
On Wed, Apr 24, 2013 at 12:30 PM, Mahesh Balija
wrote:
> Can you manually go into the directory configured for hadoop.tmp.dir under
>