Re: Setting up another machine as secondary node
...Cannot lock storage /tmp/hadoop-ithurs/dfs/namesecondary. The directory is already locked.
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:510)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:363)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:273)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.doImportCheckpoint(FSImage.java:504)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:344)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
2009-05-26 14:43:48,464 INFO org.apache.hadoop.ipc.Server: Stopping server on 4
2009-05-26 14:43:48,466 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Cannot lock storage /tmp/hadoop-ithurs/dfs/namesecondary. The directory is already locked.
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:510)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:363)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:273)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.doImportCheckpoint(FSImage.java:504)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:344)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
2009-05-26 14:43:48,468 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at germapp/192.168.0.1
************************************************************/

any pointers/suggestions?

Thanks,
Raakhi
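For what it's worth, a "Cannot lock storage ... already locked" error usually means two daemons (or a stale one that never exited and still holds the in_use.lock file) are pointing at the same storage directory - here the default under /tmp. One thing to try, sketched as an assumption rather than a known fix for this cluster, is giving the secondary's checkpoint data its own directory via fs.checkpoint.dir in hadoop-site.xml (the path below is a placeholder):

```xml
<!-- hadoop-site.xml on the secondary namenode host; the path is a placeholder -->
<property>
  <name>fs.checkpoint.dir</name>
  <!-- keep checkpoint data out of /tmp and out of the namenode's dfs.name.dir -->
  <value>/var/hadoop/dfs/namesecondary</value>
</property>
```

It's also worth running jps on the box first to check that no leftover NameNode or SecondaryNameNode process is still holding the lock.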
Re: Setting up another machine as secondary node
See this regarding instructions on configuring a 2NN on a separate machine from the NN:

http://www.cloudera.com/blog/2009/02/10/multi-host-secondarynamenode-configuration/

- Aaron
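For 0.18/0.19-era Hadoop, the linked post mostly comes down to telling the secondary where to reach the namenode's HTTP interface, since the 2NN pulls the fsimage and edits over HTTP. A minimal sketch of the 2NN host's hadoop-site.xml, assuming a placeholder hostname and the default port:

```xml
<!-- hadoop-site.xml on the secondary namenode host; namenode.example.com is a placeholder -->
<property>
  <name>dfs.http.address</name>
  <!-- the NAMENODE's HTTP address, not the 2NN's own -->
  <value>namenode.example.com:50070</value>
</property>
```

Without this override the 2NN inherits the default (0.0.0.0:50070) and tries to checkpoint against itself.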
RE: Setting up another machine as secondary node
Before 0.19, fsimage/edits were in the same directory, so whenever the secondary finished checkpointing it copied back the fsimage while the namenode kept writing to the edits file.

Usually we observed some latency on the namenode side during that time.

HADOOP-3948 would probably help in 0.19 or later.

Koji
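If I'm reading that change correctly, later releases let you point the image and the edits log at different directories, which should reduce the copy-back contention Koji describes. A hedged sketch - the keys below exist in 0.19-era configs, but the paths are placeholders:

```xml
<property>
  <name>dfs.name.dir</name>
  <value>/var/hadoop/dfs/name</value>    <!-- fsimage lives here -->
</property>
<property>
  <name>dfs.name.edits.dir</name>
  <value>/var/hadoop/dfs/edits</value>   <!-- edits log, ideally on a separate disk -->
</property>
```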
Re: Setting up another machine as secondary node
Hey Koji,

It's an expensive operation - for the secondary namenode, not the namenode itself, right? I don't particularly care if I stress out a dedicated node that doesn't have to respond to queries ;)

Locally we checkpoint+backup fairly frequently (not 5 minutes ... maybe less than the default hour) due to sheer paranoia of losing metadata.

Brian
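A checkpoint+backup habit like the one Brian describes could be scripted along these lines. Everything here (paths, retention count, the faked checkpoint) is an assumption for illustration, not something from his setup; the demo writes under /tmp so the sketch is runnable as-is:

```shell
#!/bin/sh
# Sketch: copy the secondary namenode's latest checkpoint into a
# timestamped backup directory, keeping only the newest few copies.
# Paths are hypothetical; point CHECKPOINT_DIR at your fs.checkpoint.dir.
CHECKPOINT_DIR=/tmp/demo-namesecondary
BACKUP_ROOT=/tmp/demo-hdfs-meta-backup
KEEP=5

# Demo setup: fake a checkpoint so the sketch runs standalone.
mkdir -p "$CHECKPOINT_DIR/current"
echo "fake fsimage" > "$CHECKPOINT_DIR/current/fsimage"

stamp=$(date +%Y%m%d-%H%M%S)
dest="$BACKUP_ROOT/$stamp"
mkdir -p "$dest"
cp -r "$CHECKPOINT_DIR/current" "$dest/"

# Prune: keep only the $KEEP newest timestamped backups.
ls -1d "$BACKUP_ROOT"/*/ | sort | head -n -"$KEEP" | xargs -r rm -rf
echo "backed up to $dest"
```

Run from cron at whatever interval matches your paranoia; in a real deployment you'd copy to a different machine rather than another local directory.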
RE: Setting up another machine as secondary node
> The secondary namenode takes a snapshot
> at 5 minute (configurable) intervals,

This is a bit too aggressive. Checkpointing is still an expensive operation. I'd say every hour or even every day.

Isn't the default 3600 seconds?

Koji
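The default Koji mentions comes from hadoop-default.xml; to stretch (or shrink) the interval you would override it in hadoop-site.xml. The values below are the shipped defaults as I recall them:

```xml
<property>
  <name>fs.checkpoint.period</name>
  <value>3600</value>     <!-- seconds between checkpoints -->
</property>
<property>
  <name>fs.checkpoint.size</name>
  <value>67108864</value> <!-- also checkpoint once the edits log reaches 64 MB -->
</property>
```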
Re: Setting up another machine as secondary node
Any machine put in the conf/masters file becomes a secondary namenode.

At some point there was confusion on the safety of more than one machine, which I believe was settled, as many are safe.

The secondary namenode takes a snapshot at 5 minute (configurable) intervals, rebuilds the fsimage, and sends that back to the namenode. There is some performance advantage to having it on the local machine, and some safety advantage to having it on an alternate machine. Could someone who remembers speak up on single vs. multiple secondary namenodes?

--
Alpha Chapters of my book on Hadoop are available at http://www.apress.com/book/view/9781430219422
www.prohadoopbook.com - a community for Hadoop Professionals
Re: Setting up another machine as secondary node
First of all, the secondary namenode is not what you might think a secondary is - it's not a failover device. It does make a copy of the filesystem metadata periodically, and it integrates the edits into the image. It does *not* provide failover.

Second, you specify its IP address in hadoop-site.xml. This is where you can override the defaults set in hadoop-default.xml.

dbr
Setting up another machine as secondary node
Hi,
I want to set up a cluster of 5 nodes in such a way that

node1 - master
node2 - secondary namenode
node3 - slave
node4 - slave
node5 - slave

How do we go about that? There is no property in hadoop-env where I can set the IP address for the secondary namenode.

If I set node-1 and node-2 in masters, then when we start dfs both machines run the namenode and secondary namenode processes, but I think only node1 is active, and my namenode failover operation fails.

Any suggestions?

Regards,
Rakhi
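For a layout like the one above, the usual approach in this era of Hadoop is the conf/masters and conf/slaves files: start-dfs.sh starts the namenode on the machine you run it from, a secondary namenode on every host listed in masters, and datanodes on every host in slaves. A sketch, assuming the hostnames above resolve:

```
# conf/masters - hosts that run a secondary namenode
node2

# conf/slaves - hosts that run the datanode (and tasktracker) daemons
node3
node4
node5
```

As David notes earlier in the thread, this still isn't failover - node2 only checkpoints the metadata; it never takes over for node1.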