Both Hadoop 1 and Hadoop 2 work. If you are starting from scratch, you should
probably start with Hadoop 2.
Note that if you want to use Hadoop 2.2.x you need to change the protobuf
dependency in HBase's pom.xml to 2.5.
(There's a certain irony here: the protocol library we use to get version
independence is itself the source of a version incompatibility.)
Check out HBase's importtsv.
(http://hbase.apache.org/book/ops_mgt.html#importtsv)
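For example, a typical invocation looks like the following (the table name
`mytable`, the column family `cf`, and the input path are placeholders; adapt
them to your data):

```
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1,cf:col2 \
  mytable /path/to/input
```

The `-Dimporttsv.columns` option maps each TSV column to either the row key or
a column in the target table.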
-- Lars
----- Original Message -----
From: iwannaplay games funnlearnfork...@gmail.com
To: hdfs-user@hadoop.apache.org
Cc:
Sent: Thursday, July 19, 2012 3:33 AM
Subject: Re: Loading data in hdfs
Thanks Tariq
I was thinking the same.
In an ideal world, I guess, the namenodes would be quorum based.
Clients would be aware of all the namenodes and fire updates to all of them in
parallel, and an update would not return until N namenodes had confirmed it.
To make it easier one could initially require that
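The quorum scheme sketched above can be illustrated with a small simulation.
This is only a toy model under my own assumptions (an in-process `FakeNameNode`
class standing in for real namenode RPC endpoints), not anything HDFS actually
implements:

```python
import concurrent.futures

class FakeNameNode:
    """Stand-in for a namenode that just records applied updates."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def apply(self, update):
        self.log.append(update)
        return True  # ack the update

def quorum_write(namenodes, update, n_required):
    """Fire `update` at all namenodes in parallel; report success once
    n_required of them have confirmed it."""
    acks = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(namenodes)) as pool:
        futures = [pool.submit(nn.apply, update) for nn in namenodes]
        for fut in concurrent.futures.as_completed(futures):
            if fut.result():
                acks += 1
                if acks >= n_required:
                    return True
    return acks >= n_required

nns = [FakeNameNode(f"nn{i}") for i in range(3)]
ok = quorum_write(nns, "mkdir /data", n_required=2)
```

With three namenodes and N=2, the client considers the write durable as soon as
any two of them confirm, which is the usual majority-quorum trade-off between
latency and fault tolerance.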