Hi Bharath / Harsh,

How about this facebook-hadoop repository:

https://github.com/facebook/hadoop-20

or

https://github.com/gnawux/hadoop-cmri/tree/master/bin

or

http://de-de.facebook.com/note.php?note_id=106157472002

Have you tried any of these? I don't understand Hadoop very deeply yet,
so I'm hoping you could give me a suggestion about the links above.

Thanks.

On Thu, Jan 5, 2012 at 1:45 AM, Bharath Mundlapudi <mundlap...@gmail.com> wrote:

> Hi Martinus,
>
> As Harsh mentioned, NameNode HA is under development.
>
> A couple of things you can do for a hot-cold setup:
>
> 1. Use multiple directories for ${dfs.name.dir}
> 2. Place ${dfs.name.dir} on a RAID 1 mirror (see the sketch below)
> 3. Use an NFS mount as one of the ${dfs.name.dir} directories
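>
> For #2, a minimal sketch of building the mirror on Linux with mdadm
> (the device names and mount point below are examples, not taken from
> your cluster):
>
>   # assemble two spare partitions into a RAID 1 mirror
>   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
>   # create a filesystem and mount it where a ${dfs.name.dir} entry will live
>   mkfs.ext3 /dev/md0
>   mkdir -p /data/nn-mirror
>   mount /dev/md0 /data/nn-mirror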
>
>
> -Bharath
>
>
>
>
> On Wed, Jan 4, 2012 at 1:19 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> Martinus,
>>
>> A High-Availability NameNode is being worked on, and an initial version
>> will be out soon. Check out the
>> https://issues.apache.org/jira/browse/HDFS-1623 JIRA for its current
>> state and discussions.
>>
>> You can also clone the Hadoop repo and switch to the 'HDFS-1623' branch
>> to give it a whirl, although it is still under active development.
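>>
>> For example (the clone URL below is the Apache read-only git mirror;
>> substitute whichever mirror you prefer):
>>
>>   $ git clone git://git.apache.org/hadoop-common.git
>>   $ cd hadoop-common
>>   $ git checkout HDFS-1623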
>>
>> For now, we recommend using multiple ${dfs.name.dir} directories
>> (across mounts), preferably with one of them on a reliable-enough NFS
>> mount.
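>>
>> As a minimal sketch, hdfs-site.xml would carry a comma-separated list
>> of name directories (the paths below, including the NFS mount, are
>> examples only):
>>
>>   <property>
>>     <name>dfs.name.dir</name>
>>     <value>/data/1/dfs/nn,/data/2/dfs/nn,/mnt/nfs/dfs/nn</value>
>>   </property>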
>>
>> On Wed, Jan 4, 2012 at 2:26 PM, Martinus Martinus <martinus...@gmail.com>
>> wrote:
>> > Hi Bharath,
>> >
>> > Thanks for your answer. I remember Hadoop has a single point of
>> > failure, which is its NameNode. Is there a way to make my Hadoop
>> > cluster fault tolerant even when the master node (the NameNode) fails?
>> >
>> >
>> > Thanks and Happy New Year 2012.
>> >
>> > On Tue, Jan 3, 2012 at 2:20 AM, Bharath Mundlapudi
>> > <mundlap...@gmail.com> wrote:
>> >>
>> >> You might want to check the DataNode logs. Go to the 3 remaining
>> >> nodes whose DataNodes didn't start and restart the DataNode on each.
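>> >>
>> >> For each of those nodes, a quick sketch (the log path assumes a
>> >> standard $HADOOP_HOME layout; adjust for your install):
>> >>
>> >>   $ tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
>> >>   $ $HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
>> >>   $ $HADOOP_HOME/bin/hadoop-daemon.sh start datanode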
>> >>
>> >> -Bharath
>> >>
>> >>
>> >> On Sun, Jan 1, 2012 at 7:23 PM, Martinus Martinus
>> >> <martinus...@gmail.com> wrote:
>> >>>
>> >>> Hi,
>> >>>
>> >>> I have set up a Hadoop cluster with 4 nodes. I ran start-all.sh
>> >>> and checked every node; a TaskTracker and a DataNode are running
>> >>> on each. But when I run hadoop dfsadmin -report, it says this:
>> >>>
>> >>> Configured Capacity: 30352158720 (28.27 GB)
>> >>> Present Capacity: 3756392448 (3.5 GB)
>> >>> DFS Remaining: 3756355584 (3.5 GB)
>> >>> DFS Used: 36864 (36 KB)
>> >>> DFS Used%: 0%
>> >>> Under replicated blocks: 1
>> >>> Blocks with corrupt replicas: 0
>> >>> Missing blocks: 0
>> >>>
>> >>> -------------------------------------------------
>> >>> Datanodes available: 1 (1 total, 0 dead)
>> >>>
>> >>> Name: 192.168.1.1:50010
>> >>> Decommission Status : Normal
>> >>> Configured Capacity: 30352158720 (28.27 GB)
>> >>> DFS Used: 36864 (36 KB)
>> >>> Non DFS Used: 26595766272 (24.77 GB)
>> >>> DFS Remaining: 3756355584(3.5 GB)
>> >>> DFS Used%: 0%
>> >>> DFS Remaining%: 12.38%
>> >>> Last contact: Mon Jan 02 11:19:44 CST 2012
>> >>>
>> >>> Why is only 1 node available in total? How can I fix this problem?
>> >>>
>> >>> Thanks.
>> >>
>> >>
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>
