-- Forwarded message --
From: M. C. Srivas mcsri...@gmail.com
Date: Sun, Dec 11, 2011 at 10:47 PM
Subject: Re: HDFS Backup nodes
To: common-user@hadoop.apache.org
You are out of luck if you don't want to use NFS, and yet want redundancy
for the NN. Even the new NN HA work being done by the community will
require NFS ... and the NFS itself
On Wed, Dec 14, 2011 at 10:09 AM, Scott Carey wrote:
On 12/13/11 11:28 PM, Konstantin Boudnik c...@apache.org wrote:
On Tue, Dec 13, 2011 at 11:00 PM, M. C. Srivas wrote:
Suresh,
As of today, there is no option except to use NFS. And as you yourself
mention, the first HA prototype when it comes out will require NFS.
On Wed, Dec 14, 2011 at 10:00 AM, Scott Carey sc...@richrelevance.com wrote:
As of today, there is no option except to use NFS. And as you yourself
mention, the first HA prototype when it comes out will require NFS.
How will it 'require' NFS? Won't any 'remote, high availability storage'
be usable for HA?
2. We also have a short-term goal to enable edit logs going to HDFS itself.
The work is in progress.
Regards,
Suresh
What happens if the nfs server fails or isn't reachable? Does hdfs lock
up? Does it gracefully ignore the nfs copy?
Thanks,
randy
- Original Message -
From: Joey Echeverriaj...@cloudera.com
To: common-user@hadoop.apache.org
Sent: Wednesday, December 7, 2011 6:07:58 AM
Subject: Re: HDFS Backup nodes
You should also configure the Namenode to use an NFS mount for one of
its storage directories. That will give the most up-to-date backup of
the metadata in case of total node failure.
-Joey
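For concreteness, a minimal sketch of what that advice looks like in hdfs-site.xml: in the 0.20.x line the property is dfs.name.dir (later renamed dfs.namenode.name.dir), and the Namenode writes its image and edit log to every listed directory, so the NFS copy stays as current as the local one. Both paths below are placeholders, not recommendations.

```xml
<!-- hdfs-site.xml sketch: the first entry is a hypothetical local
     disk, the second a hypothetical NFS mount; adjust both to your
     own layout. Directories are comma-separated. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/mnt/nfs/namenode</value>
</property>
```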
On Tue, Dec 13, 2011 at 10:42 PM, M. C. Srivas mcsri...@gmail.com wrote:
Any simple file meta-data test will cause the NN to spiral to death with
infinite GC. For example, try creating many, many files, or even simply
stat-ing a bunch of files continuously.
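As a rough illustration of the kind of load being described, here is a hedged sketch that creates a batch of files and then stats them in a loop. A local temp directory stands in for HDFS so the loop runs anywhere; against a real cluster the same pressure would go through the Namenode (e.g. FileSystem.create() and getFileStatus() calls). The function name and counts are illustrative only.

```python
import os
import shutil
import tempfile

# Hypothetical metadata stress loop: "create many many files", then
# "stat a bunch of files continuously". Uses a local temp directory as
# a stand-in for HDFS; this exercises the same create/stat pattern,
# not a real Namenode.
def metadata_stress(n_files=1000, passes=3):
    d = tempfile.mkdtemp()
    try:
        paths = []
        for i in range(n_files):          # create many files
            p = os.path.join(d, "f%d" % i)
            open(p, "w").close()
            paths.append(p)
        stats = 0
        for _ in range(passes):           # stat them continuously
            for p in paths:
                os.stat(p)
                stats += 1
        return len(paths), stats
    finally:
        shutil.rmtree(d)

print(metadata_stress(100, 2))
```

On a real cluster every such create/stat is a Namenode RPC that touches the in-memory namespace, which is why a tight loop like this can drive NN heap pressure.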
Sure. If I run dd if=/dev/zero of=foo my
AFAIK the backup node was introduced in version 0.21 onwards.
From: praveenesh kumar [praveen...@gmail.com]
Sent: Wednesday, December 07, 2011 12:40 PM
To: common-user@hadoop.apache.org
Subject: HDFS Backup nodes
Does hadoop 0.20.205 support configuring HDFS backup nodes?
This means we are still relying on the Secondary NameNode ideology for
the Namenode's backup.
Is OS-mirroring of the Namenode a good alternative to keep it alive all
the time?
Thanks,
Praveenesh
I moved away from NFS and I'm using DRBD instead. Not having
any problems anymore whatsoever.
YMMV.
Jorn
-Original Message-
From: Joey Echeverria [mailto:j...@cloudera.com]
Sent: Wednesday, December 7, 2011 12:08 PM
To: common-user@hadoop.apache.org
Subject: Re: HDFS Backup nodes