Thanks! I have already requested downtime and will make the changes soon!

Warm regards
Arko
On Thu, Feb 14, 2013 at 3:31 AM, <ramon....@accenture.com> wrote:
> Hi Arko,
>
> The only thing you need to do is not run the TaskTracker and DataNode
> daemons on your master machine. Ensure the master is not listed in the
> slaves file in Hadoop's config directory when you start the system. I'm
> assuming you are on the open-source release; for other distributions,
> look at their documentation on how to remove those daemons from your
> master node.
>
>
> -----Original Message-----
> From: Arko Provo Mukherjee [mailto:arkoprovomukher...@gmail.com]
> Sent: Wednesday, 13 February 2013 20:32
> To: hdfs-u...@hadoop.apache.org
> Subject: Managing space in Master Node
>
> Hello Gurus,
>
> I am managing a Hadoop cluster to run some experiments.
>
> The issue I keep facing is that the master node runs out of disk space
> due to log files and HDFS data files.
>
> I can monitor and delete the log files, but I cannot delete the HDFS data.
>
> So, is there a way to force Hadoop not to store any HDFS data on the
> master node?
>
> Then I could use the master to handle only the metadata and to store the logs.
>
> Thanks & regards
> Arko
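For the archives, a minimal sketch of the change Ramon describes, assuming the
stock Apache Hadoop 1.x layout; the hostnames master/slave1/slave2 below are
placeholders for your own:

    # conf/slaves -- list only the worker hosts; the master must NOT appear here
    slave1
    slave2

    # If DataNode/TaskTracker are already running on the master, stop them there,
    # then restart the cluster so those daemons only come up on the slaves.
    $ bin/hadoop-daemon.sh stop datanode
    $ bin/hadoop-daemon.sh stop tasktracker
    $ bin/stop-all.sh && bin/start-all.sh

Note that blocks already written to the master's DataNode will still occupy its
disk until that node is decommissioned or the data is re-replicated elsewhere.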