As far as I know, setting up a backup namenode dir is enough.
I haven't used Hadoop in a production environment, so I can't tell you
what the right way to reboot the server would be.
On Thu, Dec 23, 2010 at 6:50 PM, Bjoern Schiessle bjo...@schiessle.org wrote:
Hi,
On Thu, 23 Dec 2010 09:30:17
Hi,
If you want to reboot the server:
1. stop mapred
2. stop dfs
then reboot.
When you want to restart Hadoop again,
first start dfs, then mapred.
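Assuming a tarball-style install where the control scripts live under $HADOOP_HOME/bin (a CDH package install uses service init scripts instead), the order above can be sketched as:

```shell
# Assumed layout: $HADOOP_HOME/bin holds the 0.20-era control scripts.
HADOOP_HOME=${HADOOP_HOME:-/usr/lib/hadoop}

graceful_reboot() {
  "$HADOOP_HOME/bin/stop-mapred.sh"   # 1. stop JobTracker/TaskTrackers
  "$HADOOP_HOME/bin/stop-dfs.sh"      # 2. stop NameNode/DataNodes
  sudo reboot                         # 3. reboot the machine
}

restart_hadoop() {
  "$HADOOP_HOME/bin/start-dfs.sh"     # start HDFS first...
  "$HADOOP_HOME/bin/start-mapred.sh"  # ...then MapReduce
}
```

The scripts are only invoked when you call the functions, so nothing runs at definition time.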
--
*Regards*,
Rahul Patodi
Software Engineer,
Impetus Infotech (India) Pvt Ltd,
www.impetus.com
Mob:09907074413
On Thu, Dec 23, 2010 at 6:15 PM, li
All this aside, you really shouldn't have to safely stop all the Hadoop
services when you reboot any of your servers. Hadoop should be able to
survive a crash of any of the daemons. Any circumstance in which Hadoop
currently corrupts the edits log or fsimage is a serious bug, and should be
On Thu, Dec 23, 2010 at 2:50 AM, Bjoern Schiessle bjo...@schiessle.org wrote:
1. I have set up a second dfs.name.dir which is stored at another
computer (mounted by sshfs)
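(For context: dfs.name.dir takes a comma-separated list of directories in hdfs-site.xml, and the NameNode writes its image and edits to every directory listed. The paths below are placeholders, not CDH defaults.)

```xml
<!-- hdfs-site.xml: example only; paths are placeholders. -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/dfs/name,/mnt/remote/dfs/name</value>
</property>
```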
I would strongly discourage the use of sshfs for the name dir. For one, it's
slow, and for two, I've seen it have some
On Thu, Dec 23, 2010 at 12:47 PM, Jakob Homan jgho...@gmail.com wrote:
Please move discussions of CDH issues to Cloudera's lists. Thanks.
Hi Jakob,
These bugs are clearly not CDH-specific. NameNode corruption bugs, and best
practices with regard to the storage of NN metadata, are clearly
Hi,
On Thu, 23 Dec 2010 09:15:41 -0800 Aaron T. Myers wrote:
All this aside, you really shouldn't have to safely stop all the
Hadoop services when you reboot any of your servers. Hadoop should be
able to survive a crash of any of the daemons. Any circumstance in
which Hadoop currently
On Thu, 23 Dec 2010 12:02:51 -0800 Todd Lipcon wrote:
On Thu, Dec 23, 2010 at 2:50 AM, Bjoern Schiessle
bjo...@schiessle.org wrote:
1. I have set up a second dfs.name.dir which is stored at another
computer (mounted by sshfs)
I would strongly discourage the use of sshfs for the name
Hi,
After a Kernel update and a reboot the namenode doesn't start. I run the
Cloudera cdh3 Hadoop distribution. I have already searched for a solution.
It looks like I'm not the only one with such a problem. Sadly I could only
find descriptions of similar problems, but no solutions...
This is
I can't help, but with hindsight it's advisable to snapshot your
namenodes, as HDFS dies with them.
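One low-tech way to take such a snapshot, assuming the NameNode is stopped and using placeholder paths:

```shell
# backup_name_dir: tar up the NameNode metadata directory.
# Run this only while the NameNode is stopped, so fsimage/edits
# are not being written. Paths are examples, not CDH defaults.
backup_name_dir() {
  local name_dir=$1 backup_dir=$2 stamp
  stamp=$(date +%Y%m%d-%H%M%S)
  mkdir -p "$backup_dir"
  tar -czf "$backup_dir/name-$stamp.tar.gz" \
      -C "$(dirname "$name_dir")" "$(basename "$name_dir")"
}

# Example:
# backup_name_dir /var/lib/hadoop/dfs/name /backup/namenode
```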
On 22 December 2010 15:03, Bjoern Schiessle bjo...@schiessle.org wrote:
Hi,
After a Kernel update and a reboot the namenode doesn't start. I run the
Cloudera cdh3 Hadoop distribution. I have
It seems the exception occurs while the NameNode loads the editlog.
Make sure the editlog file exists, or you can debug the application to see
what's wrong.
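As a rough sketch of that check (the current/{VERSION,fsimage,edits} layout is what I'd expect for 0.20-era Hadoop; adjust the file list for other versions):

```shell
# check_name_dir: report any missing NameNode metadata files.
# Returns non-zero if something is missing.
check_name_dir() {
  local name_dir=$1 missing=0 f
  for f in VERSION fsimage edits; do
    if [ ! -e "$name_dir/current/$f" ]; then
      echo "missing: $name_dir/current/$f"
      missing=1
    fi
  done
  return $missing
}

# Example:
# check_name_dir /var/lib/hadoop/dfs/name
```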
On Thu, Dec 23, 2010 at 2:01 AM, daniel sikar dsi...@gmail.com wrote:
I can't help but with hindsight - it's advisable to snapshot your