Also, you might want to look at HBASE-3880, which is committed but not
released yet. It allows you to specify a custom Mapper class when running
ImportTsv. It seems like a similar patch to make the input format pluggable
would be needed in your case, though.
On Tue, Jun 14, 2011 at 9:53 AM, Todd L
Hi,
Unfortunately I don't think importtsv will work in "local job runner"
mode. Try running it on an MR cluster (it could be pseudo-distributed).
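For reference, the usual two-step bulk-load invocation against a (pseudo-)distributed cluster looks roughly like the sketch below. The jar version, table name, column mapping, and paths are placeholders, not exact commands:

```shell
# Sketch only: jar version, table name, column spec, and paths are examples.
# Step 1: run importtsv against the cluster, writing HFiles instead of Puts.
hadoop jar hbase-VERSION.jar importtsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1,cf:col2 \
  -Dimporttsv.bulk.output=/tmp/hfiles \
  mytable /user/me/input-tsv

# Step 2: move the generated HFiles into the live table.
hadoop jar hbase-VERSION.jar completebulkload /tmp/hfiles mytable
```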
-Todd
On Tue, Jun 14, 2011 at 2:01 AM, King JKing wrote:
Thanks for your reply.
I just tested importtsv and got this warning:
java.lang.IllegalArgumentException: Can't read partitions file
at
org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
at org.apache.hadoop.util.ReflectionUtils.setConf(Reflecti
On Mon, Jun 13, 2011 at 8:17 PM, King JKing wrote:
> Dear all,
>
> I want to import data from Cassandra to HBase.
>
>
That's what we like to hear! ;-)
> I think the approach may be:
> customize ImportTsv.java to read the Cassandra data files (*.dbf), convert
> them to HBase data files, and use the completebulkload tool
Dear all,
I want to import data from Cassandra to HBase.
I think the approach may be:
customize ImportTsv.java to read the Cassandra data files (*.dbf), convert
them to HBase data files, and use the completebulkload tool.
Could you give me some advice?
Thanks a lot for your support.
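To make the conversion step above concrete, here is a minimal, hypothetical sketch (the class name, escaping rules, and column layout are assumptions, not anything from ImportTsv itself): it formats one exported row, a key plus column values, as a tab-separated line that the stock importtsv mapper could then consume with a matching -Dimporttsv.columns spec.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: turn one exported Cassandra row into a TSV line
// for the stock importtsv mapper (e.g. with
// -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2). Tabs and newlines
// inside values are escaped, since importtsv splits strictly on tabs.
public class TsvLineFormatter {

    public static String toTsvLine(String rowKey, List<String> values) {
        StringBuilder sb = new StringBuilder(escape(rowKey));
        for (String v : values) {
            sb.append('\t').append(escape(v));
        }
        return sb.toString();
    }

    // Escape backslash first so escaped tabs/newlines stay unambiguous.
    private static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n");
    }

    public static void main(String[] args) {
        System.out.println(toTsvLine("row1", Arrays.asList("a", "b")));
    }
}
```

The alternative to HBASE-3880's pluggable mapper, then, is a small pre-processing job like this that emits plain TSV, so the unmodified importtsv and completebulkload tools can be used as-is.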