I confirmed that Hadoop 2 does use $HADOOP_CONF_DIR/masters to start secondary
namenodes.
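For anyone hitting this later, a minimal sketch of what that looks like (the hostname is a placeholder; depending on the 2.x minor version, the http-address property below may also be what start-dfs.sh consults to locate the SNN):
$HADOOP_CONF_DIR/masters:
  snn-host.example.com
hdfs-site.xml:
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>snn-host.example.com:50090</value>
  </property>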
-- Forwarded message --
From: craig w codecr...@gmail.com
Date: Wed, Jun 18, 2014 at 2:12 PM
Subject: Hadoop 2.x -- how to configure secondary namenode?
To: user@hadoop.apache.org
I have an old Hadoop install that I'm looking to update to Hadoop 2. In the
old setup, I have a hadoop_home/conf/masters file that specifies the
secondary namenode.
Looking through the Hadoop 2 documentation I can't find any mention of a
masters file, or how to set up a secondary namenode.
Any pointers would be appreciated.
Hi,
I am setting up a 2-node Hadoop cluster (1.2.1).
After formatting the FS and starting the namenode, datanode and
secondarynamenode, I am getting the below warning in the SecondaryNameNode logs.
WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint
Period :3600 secs (60 min)
Please advise.
You can ignore this on a 2-node cluster.
This value is the time the secondary namenode waits between two periodic
checkpoints.
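If you ever want to tune it, a minimal sketch for 1.x (the values shown here are just the defaults, not recommendations):
<property>
  <name>fs.checkpoint.period</name>
  <!-- seconds between two periodic checkpoints -->
  <value>3600</value>
</property>
<property>
  <name>fs.checkpoint.size</name>
  <!-- edits size (bytes) that forces an early checkpoint -->
  <value>67108864</value>
</property>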
On Thu, Mar 6, 2014 at 4:10 PM, Vimal Jain vkj...@gmail.com wrote:
Hi,
I am setting up 2 node hadoop cluster ( 1.2.1)
After formatting the FS and starting namenode
Hi,
I have a doubt about the processing steps of the NameNode.
Reference: Hadoop: The Definitive Guide, 3rd Ed., by Tom White
On page 340 (Ch 10: HDFS - The filesystem image and edit log)
Text from book:
When a filesystem client performs a write operation (such as creating or
moving a file), it
Conceptually, you can think of the namenode as similar to a journaling file
system. For each write, it updates the in-memory data structure, persists
the operation to stable storage (i.e., calls sync to flush the edit log
buffer), and then responds to the client.
Note that all writes are persisted to the edit log before the client gets a response.
It just happened without changing anything in the cluster. The secondary
namenode has been working fine until today, when I noticed that the secondary
namenode log file stops at:
2013-12-11 13:17:41,083 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 3941631662 saved in 61 seconds.
2013
It looks like it cannot copy the new checkpoint to the NameNode. Can you
copy-paste what jstack says?
$ sudo -u hdfs jstack snn-pid
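In case it helps, one way to find the SNN pid first (assuming the daemon runs as the hdfs user):
$ sudo -u hdfs jps | grep SecondaryNameNode
$ sudo -u hdfs jstack <pid-printed-by-jps>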
2013/12/11 Patai Sangbutsarakum silvianhad...@gmail.com
It just happened without changing anything in the cluster. The secondary
namenode has been working fine until today, when I noticed that the secondary
namenode log file stops at:
2013-12-11 13:17:41,083 INFO
org.apache.hadoop.hdfs.server.common.Storage: Image file of size 3941631662
saved in 61 seconds.
2013
On 03-10-2013 10:52, Furkan Bıçak wrote:
On 03-10-2013 10:44, Furkan Bıçak wrote:
Hi Everyone,
I am starting my hadoop cluster manually from Java, which works fine
for a while. When the secondary NameNode tries to do a merge, it gives
me the following exception:
---
13/10/03
Hi Everyone,
I am starting my hadoop cluster manually from Java, which works fine
for a while. When the secondary NameNode tries to do a merge, it gives me
the following exception:
---
13/10/03 10:29:38 ERROR namenode.SecondaryNameNode: Throwable Exception
Did you upgrade your cluster?
Regards
Jitendra
On Thu, Oct 3, 2013 at 1:22 PM, Furkan Bıçak bicak...@gmail.com wrote:
On 03-10-2013 10:44, Furkan Bıçak wrote:
Hi Everyone,
I am starting my hadoop cluster manually from Java which works fine until
some time. When secondary NameNode tries
any VERSION file for Secondary NameNode. Just
to mention, I am using Pseudo-Distributed Mode.
Thanks.
On 03-10-2013 11:23, Jitendra Yadav wrote:
Hi,
There is some layoutVersion value issue in your VERSION file. Can you
please share the VERSION file content from the NN and SNN?
${dfs.name.dir
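For reference, a quick way to compare the two, assuming the default directory layout (the paths are whatever dfs.name.dir and fs.checkpoint.dir point to on your nodes):
$ cat <dfs.name.dir>/current/VERSION        # on the NN
$ cat <fs.checkpoint.dir>/current/VERSION   # on the SNN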
the secondary namenode is started, it tries the merge and then gives
that error.
Thanks,
Frkn.
On 03-10-2013 12:10, Jitendra Yadav wrote:
Can you restart your cluster using the below scripts and check the logs?
# stop-all.sh
# start-all.sh
Regards
Jitendra
On Thu, Oct 3, 2013 at 2:09 PM, Furkan Bıçak bicak
and stopping the cluster with
scripts. I am having this problem when starting the cluster from Java.
Also, when I start the cluster from Java, I can run map-reduce jobs and they
successfully finish. However, after some time (about 5 minutes), when the
secondary namenode is started, it tries the merge and then gives that error.
Refer to https://issues.apache.org/jira/browse/HDFS-2827.
When you do this operation: hadoop fs -mv /a/b / , this issue may reappear.
2012/5/10 Alex Levin ale...@gmail.com
Hi,
I have an issue with crashing secondary namenode due to a simple move
operation
Appreciate any ideas
Hi All,
What is the property name in Hadoop 1.0.4 to change the secondary namenode location?
Currently the default in my machine is /tmp/hadoop-hadoop/dfs/namesecondary,
I would like to change it to /data/namesecondary
Best regards,
Henry
Hi Henry
You can change the secondary name node storage location by overriding the
property 'fs.checkpoint.dir' in your core-site.xml
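For example, a minimal sketch using the path from the question (restart the secondary namenode afterwards so it picks the new directory up):
<property>
  <name>fs.checkpoint.dir</name>
  <value>/data/namesecondary</value>
</property>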
On Wed, Apr 17, 2013 at 2:35 PM, Henry Hung ythu...@winbond.com wrote:
Hi All,
What is the property name of Hadoop 1.0.4 to change secondary
Thank you very much, Bryan!
It is now clear to me that in development mode I won't start a secondary
namenode.
But in production it's better to have it.
Thanks!
Regards,
Ivan
2012/12/17 Bryan Beaudreault bbeaudrea...@hubspot.com
You don't need a secondary name node. It creates snapshots
is OK.
Outside of that... it's pretty much a good idea.
-Just saying...
On Dec 17, 2012, at 11:23 AM, Ivan Ryndin iryn...@gmail.com wrote:
Thank you very much, Bryan!
It is now clear for me, that in development mode I'll not start secondary
namenode.
But in production it's better to have
I agree with Michael. Skipping the SNN daemon is really a bad idea when you
are dealing with something real.
Best Regards,
Tariq
+91-9741563634
On Tue, Dec 18, 2012 at 12:22 AM, Patai Sangbutsarakum
patai.sangbutsara...@turn.com wrote:
is it necessary to run secondary namenode when starting
Hi
When I start my cluster with start-dfs.sh, the secondary namenodes are
started on the slave machines. I set conf/masters to a different single
machine (along with setting dfs.http.address to the
nameserver:50070), but it is apparently ignored.
hadoop version: 1.0.3
1 machine with JT
Hi,
I have an issue with crashing secondary namenode due to a simple move
operation
Appreciate any ideas on the resolution ...
Details below:
I was moving old backups to a separate folder, exact command:
sudo -u hdfs hadoop fs -mv /hbase-bak /backup/
and shortly after the command
Forgot to add that Hadoop distribution is cdh3u3 ...
Thanks
-- Alex
On Wed, May 9, 2012 at 1:58 PM, Alex Levin ale...@gmail.com wrote:
Hi,
I have an issue with crashing secondary namenode due to a simple move
operation
Appreciate any ideas on the resolution ...
Details below:
I
of HDFS you
use, you may run into a corruption issue which has only recently been
fixed.
My question is: should all datanodes know where the secondary namenode is
running, or should only the namenode know where the secondary namenode is
running?
The SNN works on a pull mechanism, so it's the converse: the SNN needs to know
where the namenode is so it can pull from it; the datanodes do not need to know
where the SNN runs.
Hey people,
How can we set up another machine in the cluster as the Secondary Namenode
in hadoop 0.20.205?
Can a DN also act as SNN, any pros and cons of having this configuration ?
Thanks,
Praveenesh
Hey Praveenesh,
You can also start the secondary namenode by just running ./hadoop
secondarynamenode.
A DN cannot act as the secondary namenode. The basic work of the secondary
namenode is to do checkpointing and keep the edits in sync with the Namenode
up to the last checkpointing period.
need to configure the correct namenode http address for the
secondaryNN as well, so that it can connect to the NN for checkpointing operations.
http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html#Secondary+NameNode
You can configure the secondary namenode's IP in the masters file; start-dfs.sh itself
will start it there.
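Concretely, a minimal sketch (hostname is a placeholder): on the node where you run start-dfs.sh, list the secondary namenode host in conf/masters, one host per line:
  snn-host.example.com
start-dfs.sh then connects to each host listed there and starts the SecondaryNameNode daemon.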
given SNN's
interactions.
On 26-Dec-2011, at 7:53 PM, Uma Maheswara Rao G wrote:
Hey Praveenesh,
You can start secondary namenode also by just giving the option ./hadoop
secondarynamenode
DN can not act as seconday namenode. The basic work for seconday namenode
is to do checkpointing
</property>
...
...
<property>
  <name>fs.checkpoint.dir</name>
  <value>/hadoop/namesecondary</value>
</property>
and here is from secondary namenode
[namesecondary]$ pwd
/hadoop/namesecondary
[namesecondary]$ ls
current image in_use.lock
hope this makes sense
P
Hi all,
I was wondering if there are any (technical) issues with running two
secondary namenodes on two separate servers rather than running just
one. Since basically everything stands or falls with a consistent
snapshot of the namenode fsimage, I was considering running two secondary
namenodes
Jorn,
If you've configured the Name Node fsimage and edit log replication to
both NFS and the Secondary Name Node and regularly back up the fsimage and
edit logs, you would do better investing time in understanding exactly
how the Name Node builds up its internal database and how it applies
its edit
Subject: Re: Running more than one secondary namenode
Jorn,
Speaking beyond what Chris said: A very bad idea. You'll end up with a
corrupted FS if you do that right now:
https://issues.apache.org/jira/browse/HDFS-2305 (Fixed in a future
release, however)
On Wed, Oct 12, 2011 at 1:20 PM, Jorn Argelo - Ephorus
jorn.arg...@ephorus.com wrote:
Hi all,
current and
previous.checkpoint directories
this is from hdfs-site.xml
<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/name,/hadoop/backup</value>
</property>
...
...
<property>
  <name>fs.checkpoint.dir</name>
  <value>/hadoop/namesecondary</value>
</property>
and here is from secondary namenode
the1plum...@gmail.com
Date: Tuesday, October 11, 2011 2:31 am
Subject: Re: Secondary namenode fsimage concept
To: common-user@hadoop.apache.org
hey Patrick,
I wanted to configure my cluster to write namenode metadata to
multiple directories as well:
<property>
:58 PM, patrick sang silvianhad...@gmail.com wrote:
I would say your namenode writes metadata to the local fs (where your secondary
namenode will pull files from), and an NFS mount.
<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/name,/hadoop/nfs_server_name</value>
</property>
my 0.02$
P
On Thu
:31 am
Subject: Re: Secondary namenode fsimage concept
To: common-user@hadoop.apache.org
hey Patrick,
I wanted to configure my cluster to write namenode metadata to
multiple directories as well:
<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/var/name,/mnt/hadoop/var/name</value>
</property>
transaction log, where changes will not
be applied to the datafile, but appended to a log.
To prevent the edits file from growing infinitely, the secondary namenode
periodically pulls these two files, and the namenode starts writing changes to
a new edits file. Then, the secondary namenode merges the fsimage and edits
into a new fsimage and sends it back to the namenode.
Hi Kai,
In the second part I meant:
does the secondary namenode also hold the FSImage file, or are the two
files (FSImage and EditLog) transferred from the namenode at checkpoint
time?
Thanks
Shanmuganathan
On Thu, 06 Oct 2011 11:37:50 +0530 Kai Voigt <k...@123.org>
Hi,
the secondary namenode only fetches the two files when a checkpoint is
needed.
Kai
On 06.10.2011 at 08:45, shanmuganathan.r wrote:
Hi Kai,
In the Second part I meant
Is the secondary namenode also contain the FSImage file or the two
files (FSImage and EditLog
namenode only fetches the two files when a checkpoint is
needed.
Kai
On 06.10.2011 at 08:45, shanmuganathan.r wrote:
> Hi Kai,
>
> In the Second part I meant
>
>
> Is the secondary namenode also contain the FSImage file or the two
> files (FSImage and EditLog) are transferred
Hi,
yes, the secondary namenode is actually a badly named piece of software, as
it's not a namenode at all. And it's going to be renamed to checkpoint node.
To prevent metadata loss when your namenode fails, you should write the
namenode files to a local RAID and also a networked storage (NFS
I would say your namenode writes metadata to the local fs (where your secondary
namenode will pull files from), and an NFS mount.
<property>
  <name>dfs.name.dir</name>
  <value>/hadoop/name,/hadoop/nfs_server_name</value>
</property>
my 0.02$
P
On Thu, Oct 6, 2011 at 12:04 AM, shanmuganathan.r
shanmuganatha
Hi All,
I have a doubt about the Hadoop secondary namenode concept. Please
correct me if the following statements are wrong.
The namenode hosts the fsimage and edit log files. The secondary namenode hosts
the fsimage file only. At the time of a checkpoint the edit log file is
transferred
SNN = checkpoint node.
I think multiple checkpoint nodes are supported in 0.21.
On Wed, May 25, 2011 at 9:53 PM, ccxixicc ccxix...@foxmail.com wrote:
Hi,all
I'm testing Hadoop with two secondary namenodes (I'm using hadoop-0.20.2),
and I get some errors. So I did a few searches. But first I got
Hi,
As of now, primary namenode and secondary namenode are running on the
same machine in our configuration.
As both are RAM heavy processes, we want to move secondary namenode to
another machine in the cluster.
What does this move take?
Please refer me to some article which
page 312 of Tom White's Hadoop: The Definitive Guide mentions that the
Offline Image Viewer supplied with 0.21.0 can be used to test the integrity
of any backups taken from the Secondary Namenode (previous.checkpoint)
directory.
How does this work in practice?
I've tested the tool on a valid fsimage file
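In case it's useful, a rough sketch of invoking the tool against a checkpointed image (paths are just examples; the oiv subcommand ships with the hdfs script from 0.21 onwards):
$ hdfs oiv -i /path/to/previous.checkpoint/fsimage -o fsimage.txt
$ less fsimage.txt
If the viewer can walk the whole image without complaining, that is at least a basic sanity check of the backup.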
We are running hadoop-0.20.1. I did not set this cluster up, and the
person who did is unavailable, so I apologize for any of the following
that is unclear.
We would like to (re)start a secondary namenode, and I am looking for
guidance on how to do so.
We have a secondary namenode, but it has
From: suresh srinivas [mailto:srini30...@gmail.com]
Sent: January 6, 2011 15:27
To: hdfs-user@hadoop.apache.org
Subject: Re: Secondary Namenode doCheckpoint, FileNotFoundException
Can you add namenode log around this time
I'm getting the below exception on my secondary namenode.
As far as I can tell the edits file isn't being reconciled as it should be (i.e.
edits.NEW continues to grow) on the namenode.
I've searched around but haven't turned up any answers so far. Anyone have any
ideas?
Hadoop 0.20.2
NN host below
Date: Wed, 18 Aug 2010 13:08:03 +0530
From: adarsh.sha...@orkash.com
To: core-u...@hadoop.apache.org
Subject: Configure Secondary Namenode
I am not able to find any command or parameter in core-default.xml to
configure the secondary namenode on a separate machine.
I have a 4-node cluster
/secondarynamenode list machine name in it.
Best,
Xiujin Yang.
I am not able to find any command or parameter in core-default.xml to
configure the secondary namenode on a separate machine.
I have a 4-node cluster with the jobtracker, master, and secondary namenode on one
machine, and the remaining 3 are slaves.
Can anyone please tell me.
Thanks in advance.
hi Sharma,
I think you must have added files like masters and slaves under conf;
actually, the node you add to masters is the secondary namenode.
2010-08-18
shangan
From: Adarsh Sharma
Sent: 2010-08-18 15:36:44
To: core-user
Cc:
Subject: Configure Secondary Namenode
I am not able
Hey Zhang,
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Fatal Error : All
storage directories are inaccessible.
Are the directories specified by dfs.namenode.[name|edits].dir
accessible? Perhaps they're NFS mounts that are flaking out?
Thanks,
Eli
message is just the symptom. It is not the cause of the failure.
Zhang
-Original Message-
From: Eli Collins [mailto:e...@cloudera.com]
Sent: Friday, December 04, 2009 2:03 PM
To: common-user@hadoop.apache.org
Subject: Re: Namenode crashes while rolling edit log from secondary
namenode
Hey
Hi,
Just one week since upgrading to 0.20.1, I've been hit twice by NN
crashes. The symptoms were the same. The NN log says:
2009-12-01 12:04:00,420 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from
10.63.118.5
2009-12-01 12:04:00,420 INFO
That is the entire log message.
The only way to get the actual error is via tcpdump and packet reassembly.
I can't find any getImage log messages on the namenode; they would be in the
jetty error log, but I think that is set to /dev/null.
On Tue, Oct 13, 2009 at 10:57 PM, Raghu Angadi
Aaron Kimball wrote:
Quite possible. :\
- A
This is a bit odd.. we made this change yesterday and we're seeing this
in the 2NN
log:
2009-10-07 19:16:21,225 WARN org.apache.hadoop.dfs.Storage: Checkpoint
directory /data/hadoop/tmp/dfs/namesecondary is added.
2009-10-07 19:16:21,285 INFO
Quite possible. :\
- A
On Thu, Oct 1, 2009 at 5:17 PM, Mayuran Yogarajah
mayuran.yogara...@casalemedia.com wrote:
Aaron Kimball wrote:
If you want to run the 2NN on a different node than the NN, then you need
to
set dfs.http.address on the 2NN to point to the namenode's http server
If you want to run the 2NN on a different node than the NN, then you need to
set dfs.http.address on the 2NN to point to the namenode's http server
address. See
http://www.cloudera.com/blog/2009/02/10/multi-host-secondarynamenode-configuration/
- Aaron
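A minimal sketch of that setting on the 2NN host (the hostname is a placeholder):
<property>
  <name>dfs.http.address</name>
  <value>namenode.example.com:50070</value>
</property>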
On Mon, Sep 28, 2009 at 2:17 PM, Todd
Aaron Kimball wrote:
If you want to run the 2NN on a different node than the NN, then you need to
set dfs.http.address on the 2NN to point to the namenode's http server
address. See
http://www.cloudera.com/blog/2009/02/10/multi-host-secondarynamenode-configuration/
- Aaron
Uhh this wasn't
We've got the namenode image being written to a second machine via
NFS so we have that backed up. That said, do we still need a secondary
namenode, or is it OK to have the cluster going without one?
thanks
Hi Mayuran,
Yes, you need to run a secondary namenode.
The secondary namenode is *not* a backup mechanism. It is an important part
of the HDFS metadata system, and is responsible for periodically
checkpointing the filesystem namespace into a single file.
Without the secondary namenode running, the edit log keeps growing and the
namenode takes much longer to restart.
Hey Todd,
Note that you do not need to run the 2NN on a separate machine *if* you have
enough RAM for two entire copies of your filesystem namespace. For small
clusters you should be fine to run the two daemons on one machine.
Just wanted to confirm.. to set up the secondary NN I just need
On Mon, Sep 28, 2009 at 10:44 AM, Mayuran Yogarajah
mayuran.yogara...@casalemedia.com wrote:
Hey Todd,
Note that you do not need to run the 2NN on a separate machine *if* you
have
enough RAM for two entire copies of your filesystem namespace. For small
clusters you should be fine to run
Hey Todd,
I don't personally like to use the slaves/masters files for managing which
daemons run on which nodes. But, if you'd like to, it looks like you should
put it in the masters file, not the slaves file. Look at how start-dfs.sh
works to understand how those files are used.
-Todd
I am trying to migrate from a 32-bit JVM to a 64-bit JVM for the namenode only.
*setup*
NN - 64 bit
Secondary namenode (instance 1) - 64 bit
Secondary namenode (instance 2) - 32 bit
datanode- 32 bit
From the mailing list I deduced that NN-64 bit and Datanode -32 bit
On 11/25/08 3:58 PM, Sagar Naik [EMAIL PROTECTED] wrote:
I am trying to migrate from 32 bit jvm and 64 bit for namenode only.
*setup*
NN - 64 bit
Secondary namenode (instance 1) - 64 bit
Secondary namenode (instance 2) - 32 bit
datanode- 32 bit
I might be wrong, but my assumption is that running the SNN in either 64- or 32-bit
shouldn't matter.
But I am curious how two instances of the secondary namenode are set up: will both of
them talk to the same NN and run in parallel?
What are the advantages here?
Wondering if there are chances of image corruption
Wondering if there are chances of image corruption
lohit wrote:
I might be wrong, but my assumption is running SN either in 64/32 shouldn't matter.
But I am curious how two instances of Secondary namenode is setup, will both of them talk to same NN and running in parallel?
what are the advantages here.
I just have multiple entries in the masters file
Thanks,
Lohit
- Original Message
From: Sagar Naik [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, November 25, 2008 4:32:26 PM
Subject: Re: 64 bit namenode and secondary namenode 32 bit datanode
lohit wrote:
I might be wrong, but my assumption is running SN either
want to
allocate more than 3GB of heap space to the namenode and secondary namenode. In
that case, you will have to run the namenode and secondary namenode using a
64-bit JVM.
dhruba
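If it helps, the heap for those two daemons is typically set in conf/hadoop-env.sh; a rough sketch (the -Xmx values here are only examples):
export HADOOP_NAMENODE_OPTS="-Xmx4g ${HADOOP_NAMENODE_OPTS}"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx4g ${HADOOP_SECONDARYNAMENODE_OPTS}"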
On Tue, Nov 25, 2008 at 4:39 PM, lohit [EMAIL PROTECTED] wrote:
Well, if I think about it, image corruption might
to configure the secondary NameNode
on a remote server.
In order to do so, I checked the hadoop-default.xml config file and I found the
following:
<property>
  <name>dfs.secondary.http.address</name>
  <value>0.0.0.0:50090</value>
  <description>
    The secondary namenode http server address and port.
    If the port
Hi All!
Since the NameNode is a single point of failure for the HDFS cluster, I
would like to configure the secondary NameNode
on a remote server.
I have the same question. By default, when I run start-dfs.sh, hadoop will
start primary and secondary namenode on the same machine.
How can I start primary namenode on one machine, and secondary namenode on
another? Thanks.
tp.
On Thu, Aug 14, 2008 at 9:23 AM, lohit [EMAIL PROTECTED] wrote
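One way, if you just want the daemon on a specific box (the conf/masters approach mentioned elsewhere in the thread also works): log in to that machine and run
$ bin/hadoop-daemon.sh start secondarynamenode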
Hi All,
I am facing the same confusion regarding setting up the secondary namenode
server on a separate machine. It would be very helpful if anyone could point
us in the right direction.
Pratyush
Brian Karlak wrote:
Hello All --
We have a 20-node cluster running 0.17.1. Currently, the secondary
Hi Brian,
I just tested this functionality in a small cluster at my disposal. It
seems that you need not specify the namenode hostname in the
conf/masters file. Just specifying the secondary namenode hostname in
the conf/masters file will do the job for you. To put it formally,
- if your
Hello All --
We have a 20-node cluster running 0.17.1. Currently, the secondary
namenode process is running on the same machine as the primary
namenode. We would like to move it to a separate machine in the
cluster, as recommended. However, I cannot seem to find much in the
way
for? Unless
there are plans to make this interface work, this config parameter should go
away, and so should the listening thread, shouldn't they?
Thanks,
-Yuri
On Friday 04 April 2008 03:30:46 pm dhruba Borthakur wrote:
Your configuration is good. The secondary Namenode does not publish a
web
The secondary Namenode uses the HTTP interface to pull the fsimage from
the primary. Similarly, the primary Namenode uses the
dfs.secondary.http.address to pull the checkpointed-fsimage back from
the secondary to the primary. So, the definition of
dfs.secondary.http.address is needed.
However
However, the servlet
On Tuesday 08 April 2008 11:54:35 am Konstantin Shvachko wrote:
If you have anything in mind that can be displayed on the UI please let us
know. You can also find a jira for the issue, it would be good if this
discussion is reflected in it.
Well, I guess we could have interface to browse the
Unfortunately we do not have an api for the secondary nn that would allow
browsing the checkpoint.
I agree it would be nice to have one.
Thanks for filing the issue.
--Konstantin
Yuri Pradkin wrote:
On Tuesday 08 April 2008 11:54:35 am Konstantin Shvachko wrote:
If you have anything in mind
Your configuration is good. The secondary Namenode does not publish a
web interface. The null pointer message in the secondary Namenode log
is a harmless bug but should be fixed. It would be nice if you could open
a JIRA for it.
Thanks,
Dhruba
-Original Message-
From: Yuri Pradkin [mailto
Hi,
I'm running Hadoop (latest snapshot) on several machines and in our setup
namenode
and secondarynamenode are on different systems. I see from the logs that the
secondary
namenode regularly checkpoints the fs from the primary namenode.
But when I go to the secondary namenode HTTP