Just don't run the DN daemon on that machine; dedicate it to the NN.
Remove the NN machine's hostname from the 'slaves' file so the start
scripts no longer launch a DataNode there.
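
Something like this should do it (a rough sketch, assuming a Hadoop 1.x
tarball install with HADOOP_HOME set; adjust paths for your setup):

# edit conf/slaves on the NN and delete the line with the NN's hostname,
# then stop the DataNode that is still running on that machine:
$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode

# blocks that lived on that DataNode become under-replicated; HDFS should
# re-replicate them onto the remaining nodes (assuming your replication
# factor and free space allow it). Check with:
$HADOOP_HOME/bin/hadoop fsck /

# dfs.data.dir itself does not need to change, and the old block data
# under it is not removed automatically -- once fsck reports HEALTHY you
# can delete that directory by hand to reclaim the space.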

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Thu, Feb 14, 2013 at 1:31 AM, Arko Provo Mukherjee <
arkoprovomukher...@gmail.com> wrote:

> Hi,
>
> Thanks for the help!
>
> However, I am still unsure about how to "turn off" the datanode
> feature on the NN.
>
> I checked the "hdfs-site.xml" file, and dfs.data.dir points to
> a directory.
>
> Should I just comment out that property? What would happen to the
> data currently on the Master? Will it get removed automatically?
>
> Thanks & regards
> Arko
>
>
>
> On Wed, Feb 13, 2013 at 1:55 PM, Mohammad Tariq <donta...@gmail.com>
> wrote:
> > You can set the logging level as Charles described, but turning logs
> > off is never a good idea. Logs are really helpful in problem diagnosis,
> > which you will eventually need.
> >
> > Warm Regards,
> > Tariq
> > https://mtariq.jux.com/
> > cloudfront.blogspot.com
> >
> >
> > On Thu, Feb 14, 2013 at 1:22 AM, Arko Provo Mukherjee
> > <arkoprovomukher...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> Yeah, my NameNode is also doubling as a DataNode.
> >>
> >> I would like to "turn off" this feature.
> >>
> >> Request help regarding the same.
> >>
> >> Thanks & regards
> >> Arko
> >>
> >> On Wed, Feb 13, 2013 at 1:38 PM, Charles Baker <cba...@sdl.com> wrote:
> >> > Hi Arko. Sounds like you may be running a DataNode on the NameNode,
> >> > which is not recommended practice. Normally, the only files the NN
> >> > stores are the image and edits files. It does not store any actual
> >> > HDFS data. If you must run a DN on the NN, try turning down the
> >> > logging in /conf/log4j.properties:
> >> >
> >> > #hadoop.root.logger=INFO,console
> >> > #hadoop.root.logger=WARN,console
> >> > hadoop.root.logger=ERROR,console
> >> >
> >> > Depending on the logging information you require, of course.
> >> >
> >> > -Chuck
> >> >
> >> >
> >> > -----Original Message-----
> >> > From: Arko Provo Mukherjee [mailto:arkoprovomukher...@gmail.com]
> >> > Sent: Wednesday, February 13, 2013 11:32 AM
> >> > To: hdfs-user@hadoop.apache.org
> >> > Subject: Managing space in Master Node
> >> >
> >> > Hello Gurus,
> >> >
> >> > I am managing a Hadoop Cluster to run some experiments.
> >> >
> >> > The issue I am continuously facing is that the Master Node runs out
> >> > of disk space due to logs and data files.
> >> >
> >> > I can monitor and delete log files. However, I cannot delete the HDFS
> >> > data.
> >> >
> >> > Thus, is there a way to force Hadoop not to save any HDFS data in
> >> > the Master Node?
> >> >
> >> > Then I can use my master to handle the metadata only and store the
> >> > logs.
> >> >
> >> > Thanks & regards
> >> > Arko
> >> >
> >
> >
>
