Hi Arko. It sounds like you may be running a DataNode on the NameNode, which is
not recommended practice. Normally, the only files the NameNode stores are the
fsimage and edits files; it does not store any actual HDFS block data. If you
must run a DataNode on the NameNode host, try turning down the logging level in
conf/log4j.properties:

#hadoop.root.logger=INFO,console
#hadoop.root.logger=WARN,console
hadoop.root.logger=ERROR,console

Depending on the logging information you require, of course.
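If you would rather not run a DataNode on the master at all, one common approach (a sketch below, assuming a Hadoop 1.x-era layout with scripts under $HADOOP_HOME/bin; "master" is a hypothetical hostname, substitute your own) is to stop the daemon and take the master out of conf/slaves so the start scripts do not relaunch it:

```shell
# On the master node: stop the local DataNode daemon
# (assumes the Hadoop 1.x daemon script in $HADOOP_HOME/bin)
$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode

# Remove the master's hostname from conf/slaves so that
# start-dfs.sh / start-all.sh will not start a DataNode there again.
# "master" is a placeholder hostname -- use your machine's actual name.
grep -v '^master$' "$HADOOP_HOME/conf/slaves" > /tmp/slaves.new \
  && mv /tmp/slaves.new "$HADOOP_HOME/conf/slaves"
```

Note that blocks already stored on that DataNode do not migrate off by themselves; if you need them replicated elsewhere first, decommission the node properly (list it in the file named by dfs.hosts.exclude, then run `hadoop dfsadmin -refreshNodes`) before stopping it.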

-Chuck


-----Original Message-----
From: Arko Provo Mukherjee [mailto:arkoprovomukher...@gmail.com] 
Sent: Wednesday, February 13, 2013 11:32 AM
To: hdfs-u...@hadoop.apache.org
Subject: Managing space in Master Node

Hello Gurus,

I am managing a Hadoop Cluster to run some experiments.

The issue I am continuously facing is that the Master Node runs out of disk
space due to logs and data files.

I can monitor and delete log files. However, I cannot delete the HDFS data.

Thus, is there a way to force Hadoop not to store any HDFS data on the Master
Node?

Then I can use my master to handle the metadata only and store the logs.

Thanks & regards
Arko
