val colfam = Bytes.toBytes("cf")
val qual = Bytes.toBytes("c1")
val value = Bytes.toBytes("val_xxx")

val kv = new KeyValue(rowkeyBytes, colfam, qual, value)
List(kv)
}
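
For context, here is a minimal end-to-end sketch of the pattern this thread is working toward, using the 0.98-era HBase API that the thread's code matches. The table name `t1`, the output path, and the sample rows are assumptions for illustration; `configureIncrementalLoad` is the standard way to set up compression and the total-order partitioner settings that `HFileOutputFormat` expects:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.KeyValue
import org.apache.hadoop.hbase.client.HTable
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.{SparkConf, SparkContext}

object BulkLoadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("bulkload-sketch"))
    val conf = HBaseConfiguration.create()

    // Let HBase configure the output format (compression, partitioning)
    // for the target table. "t1" is an assumed table name.
    val job = Job.getInstance(conf)
    job.setMapOutputKeyClass(classOf[ImmutableBytesWritable])
    job.setMapOutputValueClass(classOf[KeyValue])
    HFileOutputFormat.configureIncrementalLoad(job, new HTable(conf, "t1"))

    // Each record is a (row key, KeyValue) pair. HFileOutputFormat
    // requires its input sorted by row key, hence sortByKey.
    val rdd = sc.parallelize(Seq("row1", "row2"))
      .map { row =>
        val rowkeyBytes = Bytes.toBytes(row)
        val kv = new KeyValue(rowkeyBytes,
          Bytes.toBytes("cf"), Bytes.toBytes("c1"), Bytes.toBytes("val_xxx"))
        (new ImmutableBytesWritable(rowkeyBytes), kv)
      }
      .sortByKey()

    rdd.saveAsNewAPIHadoopFile("/tmp/hfiles",          // assumed HDFS path
      classOf[ImmutableBytesWritable], classOf[KeyValue],
      classOf[HFileOutputFormat], job.getConfiguration)
  }
}
```

After the HFiles are written, they still have to be handed to the region servers, typically with the `completebulkload` tool (`LoadIncrementalHFiles`).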
Thanks,
Sun
fightf...@163.com
From: Jim Green
Date: 2015-01-28 04:44
To: Ted Yu
CC: user
Subject: Re: Bulk loading into hbase using saveAsNewAPIHadoopFile
I used below code, and it still failed with the same error.
Anyone has experience on bulk loading using scala?
Thanks.
import org.apache.spark._
import org.apache.spark.rdd.NewHadoopRDD
import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor}
import org.apache.hadoop.hbase.client.HBas
Thanks Ted. Could you give me a simple example to load one row data in
hbase? How should I generate the KeyValue?
I tried multiple times, and still can not figure it out.
On Tue, Jan 27, 2015 at 12:10 PM, Ted Yu wrote:
Here is the method signature used by HFileOutputFormat :
public void write(ImmutableBytesWritable row, KeyValue kv)
Meaning, KeyValue is expected, not Put.
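
To illustrate Ted's point: with `TableOutputFormat` you emit `Put`s, but `HFileOutputFormat` writes one cell at a time, so each record must already be an `(ImmutableBytesWritable, KeyValue)` pair, one `KeyValue` per cell. A small sketch (the column family and qualifiers here are assumptions):

```scala
import org.apache.hadoop.hbase.KeyValue
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.util.Bytes

// A row with two columns becomes two (row key, KeyValue) pairs.
// KeyValues within a row must be emitted in sorted column order.
def toKeyValues(row: String): Seq[(ImmutableBytesWritable, KeyValue)] = {
  val rk = Bytes.toBytes(row)
  val wk = new ImmutableBytesWritable(rk)
  Seq("c1", "c2").map { qual =>   // assumed, already-sorted qualifiers
    (wk, new KeyValue(rk, Bytes.toBytes("cf"), Bytes.toBytes(qual),
      Bytes.toBytes("val_xxx")))
  }
}
```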
On Tue, Jan 27, 2015 at 10:54 AM, Jim Green wrote:
> Hi Team,
>
> I need some help writing Scala code to bulk load some data into hbase.