Re: DISCUSSION: Cut a hadoop-0.20.0-append release from the tip of branch-0.20-append branch?

2010-12-23 Thread Owen O'Malley
On Wed, Dec 22, 2010 at 11:07 PM, Roy T. Fielding field...@gbiv.com wrote: "Features are not release version tags. If there is a security bug found then we would have to release a new version of the append version, and a round of severe trout slapping would result." Yeah, it isn't a perfect

Re: namenode doesn't start after reboot

2010-12-23 Thread li ping
As far as I know, setting up a backup namenode dir is enough. I haven't used Hadoop in a production environment, so I can't tell you what the right way to reboot the server would be. On Thu, Dec 23, 2010 at 6:50 PM, Bjoern Schiessle bjo...@schiessle.org wrote: Hi, On Thu, 23 Dec 2010 09:30:17

Re: namenode doesn't start after reboot

2010-12-23 Thread rahul patodi
Hi, if you want to reboot the server: 1. stop mapred 2. stop dfs, then reboot. When you want to restart Hadoop, first start dfs, then mapred. -- *Regards*, Rahul Patodi Software Engineer, Impetus Infotech (India) Pvt Ltd, www.impetus.com Mob:09907074413 On Thu, Dec 23, 2010 at 6:15 PM, li

Re: namenode doesn't start after reboot

2010-12-23 Thread Aaron T. Myers
All this aside, you really shouldn't have to safely stop all the Hadoop services when you reboot any of your servers. Hadoop should be able to survive a crash of any of the daemons. Any circumstance in which Hadoop currently corrupts the edits log or fsimage is a serious bug, and should be

Re: DISCUSSION: Cut a hadoop-0.20.0-append release from the tip of branch-0.20-append branch?

2010-12-23 Thread Stack
On Thu, Dec 23, 2010 at 12:00 AM, Owen O'Malley omal...@apache.org wrote: "If I remember right, there were also protocol changes in the append branch, which was another reason we didn't want to put it directly into the 0.20 branch." That is indeed the case, Owen. St.Ack

Re: DISCUSSION: Cut a hadoop-0.20.0-append release from the tip of branch-0.20-append branch?

2010-12-23 Thread M. C. Srivas
[ Sorry if this is belaboring the obvious ] There are two append solutions floating around, and they are incompatible with each other. Thus, the two branches will forever remain incompatible with each other, regardless of how they are numbered (0.22, 0.23, 0.20.3, etc.). Unless both are

Re: DISCUSSION: Cut a hadoop-0.20.0-append release from the tip of branch-0.20-append branch?

2010-12-23 Thread Todd Lipcon
On Thu, Dec 23, 2010 at 10:15 AM, M. C. Srivas mcsri...@gmail.com wrote: Regardless, there will still be 2 incompatible branches. And that is only the beginning. Some future features will be done only on branch 1 (since company 1 uses that), and other features on branch 2 (by company 2,

Re: namenode doesn't start after reboot

2010-12-23 Thread Todd Lipcon
On Thu, Dec 23, 2010 at 2:50 AM, Bjoern Schiessle bjo...@schiessle.org wrote: 1. I have set up a second dfs.name.dir which is stored at another computer (mounted by sshfs) I would strongly discourage the use of sshfs for the name dir. For one, it's slow, and for two, I've seen it have some

Re: namenode doesn't start after reboot

2010-12-23 Thread Todd Lipcon
On Thu, Dec 23, 2010 at 12:47 PM, Jakob Homan jgho...@gmail.com wrote: "Please move discussions of CDH issues to Cloudera's lists. Thanks." Hi Jakob, These bugs are clearly not CDH-specific. NameNode corruption bugs, and best practices with regard to the storage of NN metadata, are clearly

Re: namenode doesn't start after reboot

2010-12-23 Thread Bjoern Schiessle
Hi, On Thu, 23 Dec 2010 09:15:41 -0800 Aaron T. Myers wrote: All this aside, you really shouldn't have to safely stop all the Hadoop services when you reboot any of your servers. Hadoop should be able to survive a crash of any of the daemons. Any circumstance in which Hadoop currently

Re: namenode doesn't start after reboot

2010-12-23 Thread Bjoern Schiessle
On Thu, 23 Dec 2010 12:02:51 -0800 Todd Lipcon wrote: On Thu, Dec 23, 2010 at 2:50 AM, Bjoern Schiessle bjo...@schiessle.org wrote: 1. I have set up a second dfs.name.dir which is stored at another computer (mounted by sshfs) I would strongly discourage the use of sshfs for the name

Re: DISCUSSION: Cut a hadoop-0.20.0-append release from the tip of branch-0.20-append branch?

2010-12-23 Thread Konstantin Shvachko
I also think building 0.20-append will be a major distraction from moving 0.22 forward, with all the great new features, including the new append implementation, sitting on the bench because we are delaying the release. It seems to be beneficial for the entire community to focus on 0.22 rather than

Re: DISCUSSION: Cut a hadoop-0.20.0-append release from the tip of branch-0.20-append branch?

2010-12-23 Thread Jeff Hammerbacher
After reading through the reasoning on both sides of this issue, I agree with Ian, Konstantin, and Jakob. Nigel has already volunteered to run the 0.22 release process; let's put our energy there. Stack, the energy you would have put into the 0.20-append release could help ensure the 0.22 release

Re: DISCUSSION: Cut a hadoop-0.20.0-append release from the tip of branch-0.20-append branch?

2010-12-23 Thread Konstantin Boudnik
On Thu, Dec 23, 2010 at 14:18, Konstantin Shvachko shv.had...@gmail.com wrote: I also think building 0.20-append will be a major distraction from moving 0.22 forward with all the great new features, including the new append implementation, sitting on the bench because we are delaying the

Re: DISCUSSION: Cut a hadoop-0.20.0-append release from the tip of branch-0.20-append branch?

2010-12-23 Thread Stack
The intent of the proposed release off the branch-0.20-append was never to derail, “hurt”, or distract from the Hadoop 0.22 effort. The HBase crew are up for helping out with testing and debugging, and the intent is to run atop the 0.22 version of append as well as 0.20’s append. A release off the