Re: CompleteBulkLoad Error

2018-01-11 Thread Yung-An He
Ankit and Ashish, thanks for the reply.

I saw the completebulkload command
`org.apache.hadoop.hbase.tool.LoadIncrementalHFiles`
in the HBase book <http://hbase.apache.org/book.html#completebulkload> on
the website and ran it as the official documentation describes. But that
class name only applies to HBase 2.0.
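
For anyone who runs into the same error, here is a quick side-by-side as I
understand it (the output path and table name below are just the ones from my
earlier mail):

# HBase 1.x: the bulk-load tool lives in the mapreduce package
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://hbase-master:9000/tmp/bktableoutput bktable

# HBase 2.0: the tool moved to the tool package, which is what the current book shows
hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles hdfs://hbase-master:9000/tmp/bktableoutput bktable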

Perhaps someone else has the same situation as me.
If there were official reference guides for each individual version, the
information would be clearer.


Regards,
Yung-An

2018-01-11 15:06 GMT+08:00 ashish singhi <ashish.sin...@huawei.com>:

> Hi,
>
> The class name you are passing is wrong; it is
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.
> So the command will be:
> hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
> hdfs://hbase-master:9000/tmp/bktableoutput bktable
>
> Regards,
> Ashish
>
> -----Original Message-----
> From: Yung-An He [mailto:mathst...@gmail.com]
> Sent: Thursday, January 11, 2018 12:19 PM
> To: user@hbase.apache.org
> Subject: CompleteBulkLoad Error
>
> Hi,
>
> I import data from files into an HBase table via the ImportTsv command as below:
>
> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
> -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2
> -Dimporttsv.skip.bad.lines=false
> '-Dimporttsv.separator=,'
> -Dimporttsv.bulk.output=hdfs://hbase-master:9000/tmp/bktableoutput
> bktable hdfs://hbase-master:9000/tmp/importsv
>
> and the MR job runs successfully. When I execute the completebulkload
> command as below:
>
> hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles
> hdfs://hbase-master:9000/tmp/bktableoutput bktable
>
> and it throws the exception:
> Error: Could not find or load main class
> org.apache.hadoop.hbase.tool.LoadIncrementalHFiles
>
> I tried another command:
> HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
> ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-server-1.2.6.jar
> completebulkload hdfs://hbase-master:9000/tmp/bktableoutput bktable
>
> and it succeeds.
>
> Does anyone have any idea?
>
>
> Here is the information about the HBase cluster:
>
> * HBase version 1.2.6
> * Hadoop version 2.7.5
> * 5 worker nodes.
>


CompleteBulkLoad Error

2018-01-10 Thread Yung-An He
Hi,

I import data from files into an HBase table via the ImportTsv command as below:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
-Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2 -Dimporttsv.skip.bad.lines=false
'-Dimporttsv.separator=,'
-Dimporttsv.bulk.output=hdfs://hbase-master:9000/tmp/bktableoutput bktable
hdfs://hbase-master:9000/tmp/importsv

and the MR job runs successfully. When I execute the completebulkload
command as below:

hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles
hdfs://hbase-master:9000/tmp/bktableoutput bktable

and it throws the exception:
Error: Could not find or load main class
org.apache.hadoop.hbase.tool.LoadIncrementalHFiles

I tried another command:
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-server-1.2.6.jar
completebulkload hdfs://hbase-master:9000/tmp/bktableoutput bktable

and it succeeds.

Does anyone have any idea?


Here is the information about the HBase cluster:

* HBase version 1.2.6
* Hadoop version 2.7.5
* 5 worker nodes.


Re: Hbase Question

2017-11-30 Thread Yung-An He
Hi,

No matter how many versions of the HBase classes are in your jars, the
classloader will choose the first one it finds on the classpath.
Perhaps you could consider OSGi (a kind of module system that can isolate
conflicting dependencies).
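
To illustrate the ordering point, a minimal sketch (the jar names and
com.example.SyncApp are placeholders, not anything from your setup):

# the JVM resolves duplicate class names from whichever jar appears first on -cp,
# so only one hbase client version is actually loaded
java -cp hbase-0.94.27.jar:hbase-client-1.2.1.jar:sync-app.jar com.example.SyncApp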

2017-11-17 18:57 GMT+08:00 apple :

> Hi:
>  I want to synchronize data between HBase 0.9 and HBase 1.2.
> What's more, I have found several ways to do it.
> They are:
> 1. replication (needs modification)
> 2. syncing the HLogs before they are deleted to HDFS .oldlogs (needs modification)
> 3. the client writes data to two HBase clusters
>


> 4. the client writes data to Kafka, and consumers write it to two HBase clusters
>
This is a good choice for your scenario.


> But I think the biggest question is how one Java client can use two
> hbase-client jars; they must conflict. How can I do that?
>


Re: hbase data migration from one cluster to another cluster on different versions

2017-10-26 Thread Yung-An He
Hi Manjeet,

I am sorry that I misunderstood your question.

The HBase book (http://hbase.apache.org/book.html#_upgrade_paths) says:
"You must stop your cluster, install the 1.x.x software, run the migration
described at
Executing the 0.96 Upgrade
<http://hbase.apache.org/book.html#executing.the.0.96.upgrade>
(substituting 1.x.x. wherever we make mention of 0.96.x in the section
below),
and then restart. Be sure to upgrade your ZooKeeper if it is a version less
than the required 3.4.x."

This is what I meant by "the environment ready for the upgrade".
Since the HBase 1.2.1 cluster is a brand-new cluster, that is a different
situation from yours.

If the contents of /data/ExportedFiles have already been put into HDFS on the
HBase 1.2.1 cluster, try the command below instead of yours:

sudo -u hdfs hbase -Dhbase.import.version=0.94
org.apache.hadoop.hbase.mapreduce.Import test_table /data/ExportedFiles
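
If the exported files are still only on the old cluster, a distcp along these
lines might work. The namenode hostnames and ports below are placeholders, and
reading the source over webhdfs (if it is enabled there) usually avoids the RPC
incompatibility between Hadoop 1.x and 2.x:

# run this on the destination (Hadoop 2.x / HBase 1.2.1) cluster
hadoop distcp webhdfs://old-cluster-nn:50070/data/ExportedFiles hdfs://new-cluster-nn:8020/data/ExportedFiles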

Best Regards.


2017-10-26 13:27 GMT+08:00 Manjeet Singh <manjeet.chand...@gmail.com>:

> Furthermore, to clarify why I used the scp command:
>
> I copied the source cluster files to the destination cluster using the scp
> command and put them into the destination cluster's HDFS. (This is because
> the two clusters run different versions of Hadoop: the source cluster has
> Hadoop 1.2.1 and the destination has Hadoop 2.0.) First I copied the HDFS
> files to the local Linux filesystem, then used scp to push them to the
> destination cluster.
>
> Thanks
> Manjeet Singh
>
> On Thu, Oct 26, 2017 at 10:26 AM, Manjeet Singh <
> manjeet.chand...@gmail.com>
> wrote:
>
> > Hi Yung,
> >
> > First, thanks for the reply.
> > The link you provided is for upgrading the HBase version, and my problem
> > statement is different.
> > The problem is that I am trying to export HBase data from one cluster to
> > another cluster on the same network, but with different HBase versions:
> > 0.94.27 on the source cluster and 1.2.1 on the destination cluster.
> > So this link should be the reference:
> > http://hbase.apache.org/0.94/book/ops_mgt.html#export
> >
> >
> > For the second point, which I forgot to mention in my mail, I did copy the
> > contents of /data/ExportedFiles to the destination cluster (which has HBase
> > 1.2.1), but not with distcp; I used the scp command instead.
> > When I try to import the data I get the error below:
> >
> > 17/10/23 16:13:50 INFO mapreduce.Job: Task Id : attempt_1505781444745_0070_m_03_0, Status : FAILED
> > Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read 121347
> > at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2306)
> > at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:78)
> > at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
> > at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> > at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> > at java.security.AccessController.doPrivileged(Native Method)
> > at javax.security.auth.Subject.doAs(Subject.java:422)
> > at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> >
> >
> >
> > Can you please elaborate more on "Is the environment ready for the
> > upgrade?"
> >
> > Thanks
> > Manjeet Singh
> >
> >
> >
> > On Thu, Oct 26, 2017 at 8:32 AM, Yung-An He <mathst...@gmail.com> wrote:
> >
> >> Hi,
> >>
> >> Have you seen the reference guide
> >> <http://hbase.apache.org/book.html#_upgrade_paths> to make sure that
> the
> >> environment is ready for the upgrade?
> >> Perhaps you could try to copy the contents of /data/ExportedFiles to the
> >> HBase 1.2.1 cluster using distcp before import data instead of using
> >> "hdfs://:8020/data/ExportedFiles" directly.
> >> Then create the table on the HBase 1.2.1 cluster using HBase Shell.
> Column
> >> families must be identical to the table on the old one.
> >> Finally, import data from /data/ExportedFiles on the HBase 1.2.1 cluster.

Re: hbase data migration from one cluster to another cluster on different versions

2017-10-25 Thread Yung-An He
Hi,

Have you seen the reference guide
<http://hbase.apache.org/book.html#_upgrade_paths> to make sure that the
environment is ready for the upgrade?
Perhaps you could try copying the contents of /data/ExportedFiles to the
HBase 1.2.1 cluster using distcp before importing the data, instead of
pointing the import at "hdfs://:8020/data/ExportedFiles" directly.
Then create the table on the HBase 1.2.1 cluster using the HBase shell; the
column families must be identical to those of the table on the old cluster.
Finally, import data from /data/ExportedFiles on the HBase 1.2.1 cluster.
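
A small sketch of the create-and-import steps (the column family 'cf' here is
just a placeholder; match it to the families of the source table):

# create the target table with the same column families as on the 0.94 cluster
echo "create 'test_table', 'cf'" | hbase shell

# import, telling the Import tool the data was written by 0.94
hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import test_table /data/ExportedFiles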


Best Regards.

2017-10-24 1:27 GMT+08:00 Manjeet Singh :

> Hi All,
>
> Can anyone help?
>
> Adding a few more details from my investigation: I have moved all the files
> to the destination cluster's HDFS and run the command below:
>
> sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Import test_table
> hdfs://:8020/data/ExportedFiles
>
> I am getting the error below:
>
> 17/10/23 16:13:50 INFO mapreduce.Job: Task Id : attempt_1505781444745_0070_m_03_0, Status : FAILED
> Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read 121347
> at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2306)
> at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:78)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
> at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
>
>
>
>
> Can anyone suggest how to migrate the data?
>
> Thanks
> Manjeet Singh
>
>
>
>
>
> Hi All,
>
> I have a query regarding HBase data migration from one cluster to another
> cluster on the same network, but with different HBase versions: 0.94.27 on
> the source cluster and 1.2.1 on the destination cluster.
>
> I used the command below to take a backup of the HBase table on the source
> cluster:
>  ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild
> /data/backupData/
>
> The files below were generated by the above command:
>
>
> drwxr-xr-x 3 root root4096 Dec  9  2016 _logs
> -rw-r--r-- 1 root root   788227695 Dec 16  2016 part-m-0
> -rw-r--r-- 1 root root  1098757026 Dec 16  2016 part-m-1
> -rw-r--r-- 1 root root   906973626 Dec 16  2016 part-m-2
> -rw-r--r-- 1 root root  1981769314 Dec 16  2016 part-m-3
> -rw-r--r-- 1 root root  2099785782 Dec 16  2016 part-m-4
> -rw-r--r-- 1 root root  4118835540 Dec 16  2016 part-m-5
> -rw-r--r-- 1 root root 14217981341 Dec 16  2016 part-m-6
> -rw-r--r-- 1 root root   0 Dec 16  2016 _SUCCESS
>
>
> In order to restore these files, I am assuming I have to move them to the
> destination cluster and run the command below:
>
> hbase org.apache.hadoop.hbase.mapreduce.Import 
> /data/backupData/
>
> Please suggest whether I am on the correct path, and whether anyone has
> another option. I tried this with test data, but the above command took a
> very long time and failed at the end:
>
> 17/10/23 11:54:21 INFO mapred.JobClient:  map 0% reduce 0%
> 17/10/23 12:04:24 INFO mapred.JobClient: Task Id : attempt_201710131340_0355_m_02_0, Status : FAILED
> Task attempt_201710131340_0355_m_02_0 failed to report status for 600 seconds. Killing!
>
>
> Thanks
> Manjeet Singh
>
>
>
>
>
>
> --
> luv all
>


Re: Multitenancy in HBase

2017-10-05 Thread Yung-An He
Hi

HBASE-6721 (https://issues.apache.org/jira/browse/HBASE-6721) is an issue
about multitenancy in HBase.
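
In case it helps, a rough sketch of what the RegionServer-group isolation from
that issue looks like in releases that include it, assuming the rsgroup
coprocessor and balancer are configured (the group, server, and table names
below are made up):

hbase shell <<'EOF'
add_rsgroup 'tenant_a'
move_servers_rsgroup 'tenant_a', ['rs1.example.com:16020']
move_tables_rsgroup 'tenant_a', ['tenant_a_table']
EOF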

I hope it helps you.

2017-09-30 2:57 GMT+08:00 Sambhaji Sawant :

> Hello HBase Experts
> Can you please suggest how I can use multitenancy in HBase? I have been
> searching but cannot find proper information. Please post some information
> related to the subject.
>


Re: HBase Book: C/C++ Apache HBase Client, the repository of Facebook has been removed

2017-09-04 Thread Yung-An He
Thank you, Ted.

Maybe someone else has the same situation as me.

Should the hyperlink to the Facebook repository in the C/C++ Apache HBase Client
<http://hbase.apache.org/book.html#c> section be re-pointed to HBASE-14850?
Or should something else be done to make it correct?


2017-09-05 11:41 GMT+08:00 Yung-An He <mathst...@gmail.com>:

> Sorry, I meant fixing the HBase Book.
>
> 2017-09-05 11:29 GMT+08:00 Yung-An He <mathst...@gmail.com>:
>
>> Hi
>>
>> I found that the Facebook repository linked in the C/C++ Apache HBase Client
>> <http://hbase.apache.org/book.html#c> section of the HBase Book has been
>> removed.
>>
>> Is there any plan to fix this issue?
>>
>
>


Re: HBase Book: C/C++ Apache HBase Client, the repository of Facebook has been removed

2017-09-04 Thread Yung-An He
Sorry, I meant fixing the HBase Book.

2017-09-05 11:29 GMT+08:00 Yung-An He <mathst...@gmail.com>:

> Hi
>
> I found that the Facebook repository linked in the C/C++ Apache HBase Client
> <http://hbase.apache.org/book.html#c> section of the HBase Book has been
> removed.
>
> Is there any plan to fix this issue?
>


HBase Book: C/C++ Apache HBase Client, the repository of Facebook has been removed

2017-09-04 Thread Yung-An He
Hi

I found that the Facebook repository linked in the C/C++ Apache HBase Client
<http://hbase.apache.org/book.html#c> section of the HBase Book has been removed.

Is there any plan to fix this issue?