Re: [ANNOUNCE] New HBase committer Jingyun Tian

2018-11-13 Thread Ashish Singhi
Congratulations & Welcome!

Regards,
Ashish

On Tue, Nov 13, 2018 at 1:24 PM 张铎(Duo Zhang)  wrote:

> On behalf of the Apache HBase PMC, I am pleased to announce that Jingyun
> Tian has accepted the PMC's invitation to become a committer on the
> project. We appreciate all of Jingyun's generous contributions thus far and
> look forward to his continued involvement.
>
> Congratulations and welcome, Jingyun!
>


Re: Live Broadcast link for HBaseConAsia2018

2018-08-17 Thread Ashish Singhi
This is so great to see! Thanks for sharing the link.

On Fri, Aug 17, 2018 at 8:59 AM, Yu Li  wrote:

> Hi All,
>
> HBaseConAsia2018 is ongoing and here is the live broadcast link [1].
> Please follow the link to watch the talks if you are unable to attend in person.
> Enjoy the day.
>
> Hello everyone,
>
> HBaseConAsia2018 is underway at the 北京歌华开元大酒店 (Beijing Gehua Kaiyuan Hotel).
> For anyone who cannot make it to the venue, we have provided a live broadcast
> link [1] so you can watch online.
>
> [1]
> https://yq.aliyun.com/promotion/631?id=129203&from=groupmessage&isappinstalled=0
>
> Yu - On behalf of the HBaseConAsia2018 PC
>


RE: [ANNOUNCE] New HBase committer Guangxu Cheng

2018-06-04 Thread ashish singhi
Congrats and Welcome!

Regards,
Ashish 
-Original Message-
From: 张铎(Duo Zhang) [mailto:palomino...@gmail.com] 
Sent: Monday, June 04, 2018 12:30 PM
To: HBase Dev List ; hbase-user 
Subject: [ANNOUNCE] New HBase committer Guangxu Cheng

On behalf of the Apache HBase PMC, I am pleased to announce that Guangxu Cheng 
has accepted the PMC's invitation to become a committer on the project. We 
appreciate all of Guangxu's generous contributions thus far and look forward to 
his continued involvement.

Congratulations and welcome, Guangxu!


RE: Hbase Audit Logs

2018-02-26 Thread ashish singhi
Hi,

You need to enable TRACE level logging for AccessController.

Change 
log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=INFO
 to 
log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE

Regards,
Ashish

-Original Message-
From: Subash Kunjupillai [mailto:subas...@ericsson.com] 
Sent: Monday, February 26, 2018 1:29 PM
To: user@hbase.apache.org
Subject: Hbase Audit Logs

Hi,

I've enabled HBase authorization by adding the properties below to hbase-site.xml,
and the security audit appender in log4j.properties is configured as shown below.


*hbase-site.xml*

<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>

*log4j.properties*

hbase.security.log.file=SecurityAuth.audit
hbase.security.log.maxfilesize=256MB
hbase.security.log.maxbackupindex=20
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}
log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n 
log4j.category.SecurityLogger=${hbase.security.logger}
log4j.additivity.SecurityLogger=false
log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=INFO
log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.visibility.VisibilityController=INFO

I'm able to see the logs being written to SecurityAuth.audit. But my question
is: what configuration is needed to get audit details in the log for operations
like put, get, delete, and table create?



--
Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-f4020416.html


RE: CompleteBulkLoad Error

2018-01-11 Thread ashish singhi
As per my understanding, the online HBase book refers to the master version.

For the version-specific document we can refer to the book which is part of the
release tar: hbase-1.2.6-bin.tar.gz\hbase-1.2.6\docs\book.pdf

Regards,
Ashish

-Original Message-
From: Yung-An He [mailto:mathst...@gmail.com] 
Sent: Thursday, January 11, 2018 2:21 PM
To: user@hbase.apache.org
Subject: Re: CompleteBulkLoad Error

Ankit and Ashish, thanks for the reply,

I saw the command
`org.apache.hadoop.hbase.tool.LoadIncrementalHFiles`
in the HBase book <http://hbase.apache.org/book.html#completebulkload> on the
website and ran the command according to the official documentation. But that
command is for HBase 2.0.

Perhaps someone else is in the same situation as me.
If there were official reference guides for individual versions, the
information would be clearer.


Regards,
Yung-An

2018-01-11 15:06 GMT+08:00 ashish singhi :

> Hi,
>
> The path of the tool you are passing is wrong; it is
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.
> So the command will be: hbase
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
> hdfs://hbase-master:9000/tmp/bktableoutput bktable
>
> Regards,
> Ashish
>
> -Original Message-
> From: Yung-An He [mailto:mathst...@gmail.com]
> Sent: Thursday, January 11, 2018 12:19 PM
> To: user@hbase.apache.org
> Subject: CompleteBulkLoad Error
>
> Hi,
>
> I import data from files to HBase table via the ImportTsv command as below:
>
> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
> -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2
> -Dimporttsv.skip.bad.lines=false
> '-Dimporttsv.separator=,'
> -Dimporttsv.bulk.output=hdfs://hbase-master:9000/tmp/bktableoutput
> bktable hdfs://hbase-master:9000/tmp/importsv
>
> and the MR job runs successfully. When I execute the completebulkload 
> command as below:
>
> hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles
> hdfs://hbase-master:9000/tmp/bktableoutput bktable
>
> and it throws the exception:
> Error: Could not find or load main class org.apache.hadoop.hbase.tool.
> LoadIncrementalHFiles
>
> I try the other command:
> HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` 
> ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-server-1.2.6.jar
> completebulkload hdfs://hbase-master:9000/tmp/bktableoutput bktable
>
> and it succeeds.
>
> Does anyone have any idea?
>
>
> Here is the information of HBase cluster :
>
> * HBase version 1.2.6
> * Hadoop version 2.7.5
> * With 5 worker nodes.
>


RE: CompleteBulkLoad Error

2018-01-10 Thread ashish singhi
Hi, 

The path of the tool you are passing is wrong; it is
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.
So the command will be: hbase
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
hdfs://hbase-master:9000/tmp/bktableoutput bktable

Regards,
Ashish

-Original Message-
From: Yung-An He [mailto:mathst...@gmail.com] 
Sent: Thursday, January 11, 2018 12:19 PM
To: user@hbase.apache.org
Subject: CompleteBulkLoad Error

Hi,

I import data from files to HBase table via the ImportTsv command as below:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
-Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2 -Dimporttsv.skip.bad.lines=false
'-Dimporttsv.separator=,'
-Dimporttsv.bulk.output=hdfs://hbase-master:9000/tmp/bktableoutput bktable 
hdfs://hbase-master:9000/tmp/importsv

and the MR job runs successfully. When I execute the completebulkload command 
as below:

hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles
hdfs://hbase-master:9000/tmp/bktableoutput bktable

and it throws the exception:
Error: Could not find or load main class 
org.apache.hadoop.hbase.tool.LoadIncrementalHFiles

I try the other command:
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop 
jar ${HBASE_HOME}/lib/hbase-server-1.2.6.jar
completebulkload hdfs://hbase-master:9000/tmp/bktableoutput bktable

and it succeeds.

Does anyone have any idea?


Here is the information of HBase cluster :

* HBase version 1.2.6
* Hadoop version 2.7.5
* With 5 worker nodes.


Re: [ANNOUNCE] New HBase committer Zheng Hu

2017-10-23 Thread Ashish Singhi
Congratulations.

On Mon, Oct 23, 2017 at 11:48 AM, Duo Zhang  wrote:

> On behalf of the Apache HBase PMC, I am pleased to announce that Zheng Hu
> has accepted the PMC's invitation to become a committer on the project. We
> appreciate all of Zheng's generous contributions thus far and look forward
> to his continued involvement.
>
> Congratulations and welcome, Zheng!
>


Re: Welcome Chia-Ping Tsai to the HBase PMC

2017-09-29 Thread Ashish Singhi
Congratulations, Chia-Ping.

On Sat, Sep 30, 2017 at 3:49 AM, Misty Stanley-Jones 
wrote:

> The HBase PMC is delighted to announce that Chia-Ping Tsai has agreed to
> join
> the HBase PMC, and help to make the project run smoothly. Chia-Ping became
> an
> HBase committer over 6 months ago, based on long-running participation in the
> HBase project, a consistent record of resolving HBase issues, and
> contributions
> to testing and performance.
>
> Thank you for stepping up to serve, Chia-Ping!
>
> As a reminder, if anyone would like to nominate another person as a
> committer or PMC member, even if you are not currently a committer or PMC
> member, you can always drop a note to priv...@hbase.apache.org to let us
> know!
>
> Thanks,
> Misty (on behalf of the HBase PMC)
>


RE: Please congratulate our new PMC Chair Misty Stanley-Jones

2017-09-22 Thread ashish singhi
Many Congratulations for the new role, Misty.

-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org] 
Sent: 22 September 2017 00:38
To: d...@hbase.apache.org; user@hbase.apache.org
Subject: Please congratulate our new PMC Chair Misty Stanley-Jones

At today's meeting of the Board, Special Resolution B changing the HBase 
project Chair to Misty Stanley-Jones was passed unanimously.

Please join me in congratulating Misty on her new role!

​(If you need any help or advice please don't hesitate to ping me, Misty, but I 
suspect you'll do just fine and won't need it.)​


--
Best regards,
Andrew


Re: [ANNOUNCE] Chunhui Shen joins the Apache HBase PMC

2017-07-05 Thread Ashish Singhi
Congratulations!

Sent from my iPhone

> On 04-Jul-2017, at 10:54 AM, Yu Li  wrote:
> 
> On behalf of the Apache HBase PMC I am pleased to announce that Chunhui Shen
> has accepted our invitation to become a PMC member on the Apache
> HBase project. He has been an active contributor to HBase for many
> years. Looking forward to many more contributions from him.
> 
> Please join me in welcoming Chunhui to the HBase PMC!
> 
> Best Regards,
> Yu


Re: [ANNOUNCE] Devaraj Das joins the Apache HBase PMC

2017-07-05 Thread Ashish Singhi
Congratulations!

Sent from my iPhone

> On 05-Jul-2017, at 9:57 PM, Josh Elser  wrote:
> 
> I'm pleased to announce yet another PMC addition in the form of Devaraj Das. 
> One of the "old guard" in the broader Hadoop umbrella, he's also a 
> long-standing member in our community. We all look forward to the continued 
> contributions and project leadership.
> 
> Please join me in welcoming Devaraj!
> 
> - Josh (on behalf of the PMC)


RE: HBASE and MOB

2017-05-15 Thread ashish singhi
Hi.

Can we revive HBASE-15370: Backport Moderate Object Storage (MOB) to branch-1 ?

We too have customers using this feature, and it requires a lot of effort to
backport MOB patches from the master branch to the released versions, as the code
bases differ significantly.

Regards,
Ashish

-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com] 
Sent: 12 May 2017 22:25
To: user@hbase.apache.org
Subject: Re: HBASE and MOB

MOB is also backported in HDP 2.5.x

FYI

On Fri, May 12, 2017 at 9:51 AM, anil gupta  wrote:

> Backporting MOB won't be a trivial task.
> AFAIK, Cloudera backported MOB to the HBase 1.x branch for CDH (it's not in
> the Apache HBase 1.x branch yet). It might be easier to just use CDH for MOB.
>
> On Fri, May 12, 2017 at 8:51 AM, Jean-Marc Spaggiari < 
> jean-m...@spaggiari.org> wrote:
>
> > Thanks for those details.
> >
> > How big are your PDFs? Are they all small? If they are not above
> > 1MB, MOB will not really be 100% mandatory, even if a few of them are above.
> >
> > If you want to apply the patch on another branch, this is what is
> > called a back port (like Ted said before) and will require a pretty 
> > good amount of work. You can jump on that, but if you are not used 
> > to the HBase code, it might be a pretty big challenge...
> >
> > Another way is to look for an HBase distribution that already includes
> > the MOB code.
> >
> > JMS
> >
> > 2017-05-12 11:21 GMT-04:00 F. T. :
> >
> > > Hi Jean Marc
> > >
> > > I'm using version 1.2.3. I downloaded a "bin" version from the
> > > official Apache web site. Maybe I have to install it from the "src"
> > > option with
> > mvn?
> > >
> > > I would like to index PDFs into HBase and use them in a Solr collection.
> > >
> > > In fact I would like to reproduce this process:
> > > http://blog.cloudera.com/blog/2015/10/how-to-index-scanned-pdfs-at-scale-using-fewer-than-50-lines-of-code/
> > >
> > >
> > > But maybe is there another solution to reproduce it .
> > >
> > > Fred
> > >
> > >
> > > 
> > > From: Jean-Marc Spaggiari  Sent:
> > > Friday, May 12, 2017 17:06 To: user Subject: Re: HBASE and MOB
> > >
> > > Hi Fred,
> > >
> > > Can you please confirm the following information?
> > >
> > > 1) What exact version of HBase are you using? From a distribution,
> build
> > by
> > > yourself, from the JARs, etc.
> > > 2) Why do you think you need the MOB feature
> > > 3) Is an upgrade an option for you or not really.
> > >
> > > Thanks,
> > >
> > > JMS
> > >
> > >
> > > 2017-05-12 11:02 GMT-04:00 Ted Yu :
> > >
> > > > It is defined here in
> > > > hbase-client/src/main/java/org/apache/hadoop/hbase/
> > > HColumnDescriptor.java:
> > > >   public static final String IS_MOB = "IS_MOB";
> > > >
> > > > MOB feature hasn't been backported to branch-1 (or earlier releases).
> > > >
> > > > Looks like you're using a vendor's release.
> > > >
> > > > Consider contacting the corresponding mailing list if you are stuck.
> > > >
> > > > On Fri, May 12, 2017 at 7:59 AM, F. T.  wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to use MOB in HBase to store PDF files. I'm using HBase
> > > > > 1.2.3, but I get this error creating a table with a MOB column:
> > > > > NameError: uninitialized constant IS_MOB.
> > > > >
> > > > > A lot of web sites (including the official Apache web site) talk
> > > > > about the patch 11339 or HBase 2.0.0, but I don't find any
> > > > > explanation about the way to install this patch and
> > > > >
> > > > > I can't find the 2.0.0 version anywhere. So I'm completely lost.
> > > > > Could you help me please?
> > > > >
> > > > >
> > > >
> > >
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>


RE: Baffling situation with tableExists and createTable

2017-04-26 Thread ashish singhi
This is already handled through Procedure-V2 code in HBase 1.1+ versions.
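
For older versions, a defensive client-side pattern is to tolerate the race
explicitly rather than trusting the tableExists() check alone. A minimal sketch,
assuming the HBase 1.x client API (the table and family names are only placeholders):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class EnsureTable {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName name = TableName.valueOf("LoadTest");     // placeholder table name
      if (!admin.tableExists(name)) {
        HTableDescriptor desc = new HTableDescriptor(name);
        desc.addFamily(new HColumnDescriptor("f"));       // placeholder family
        try {
          admin.createTable(desc);
        } catch (TableExistsException e) {
          // A concurrent or earlier half-finished create already registered the
          // table; treat it as "already exists" instead of failing.
        }
      }
    }
  }
}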

Regards,
Ashish

-Original Message-
From: Anoop John [mailto:anoop.hb...@gmail.com] 
Sent: 26 April 2017 15:31
To: user@hbase.apache.org
Subject: Re: Baffling situation with tableExists and createTable

Your earlier attempt to create this table would have failed in between... So the
status of the table in ZK and in the master may differ. The table-exists check
might be looking at one and the next steps of create table at another.
Sorry, I have forgotten that area of code, but I have seen this kind of situation.
Not sure whether these kinds of problems are solved in some of the latest versions
or not.

-Anoop-

On Wed, Apr 26, 2017 at 6:12 AM, Ted Yu  wrote:
> Which hbase release are you using ?
>
> Can you check master log to see if there is some clue w.r.t. LoadTest ?
>
> Using "hbase zkcli", you can inspect the znode status. Below is a sample:
>
> [zk: cn011.x.com:2181,cn013.x.com:2181,cn012.x.com:2181(CONNECTED) 2] 
> ls /hbase-unsecure/table [hbase:meta, hbase:namespace, 
> IntegrationTestBigLinkedList, datatsv, usertable, hbase:backup, 
> TestTable, t2]
> [zk: cn011.x.com:2181,cn013.x.com:2181,cn012.x.com:2181(CONNECTED) 3] 
> ls
> /hbase-unsecure/table/2
> Node does not exist: /hbase-unsecure/table/2
> [zk: cn011.x.com:2181,cn013.x.com:2181,cn012.x.com:2181(CONNECTED) 4] 
> ls
> /hbase-unsecure/table/t2
> []
> [zk: cn011.x.com:2181,cn013.x.com:2181,cn012.x.com:2181(CONNECTED) 5] 
> get
> /hbase-unsecure/table/t2
>  master:16000K  W , PBUF
> cZxid = 0x1000a7f01
> ctime = Mon Mar 27 16:50:52 UTC 2017
> mZxid = 0x1000a7f17
> mtime = Mon Mar 27 16:50:52 UTC 2017
> pZxid = 0x1000a7f01
> cversion = 0
> dataVersion = 2
>
> On Tue, Apr 25, 2017 at 4:09 PM, jeff saremi  wrote:
>
>> BTW on the page
>> http://localhost:16010/master-status#userTables
>> there is no sign of the supposedly existing table either
>>
>> 
>> From: jeff saremi 
>> Sent: Tuesday, April 25, 2017 4:05:56 PM
>> To: user@hbase.apache.org
>> Subject: Baffling situation with tableExists and createTable
>>
>> I have a super simple piece of code which tries to create a test 
>> table if it does not exist
>>
>> calling admin.tableExists(TableName.valueOf(table)) returns false 
>> causing the control to be passed to the line that creates it 
>> admin.createTable(tableDescriptor).
>> Then i get an exception that the table exists!
>>
>> Exception in thread "main" org.apache.hadoop.hbase.TableExistsException:
>> LoadTest
>>
>>
>> String table = config.tableName;
>> ...
>> Connection conn = ConnectionFactory.createConnection(hbaseconf);
>> Admin admin = conn.getAdmin();
>> if(!admin.tableExists(TableName.valueOf(table))) {
>> Log.info("table " + table + " does not exist. Creating it...");
>> HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.
>> valueOf(table));
>> tableDescriptor.addFamily(new HColumnDescriptor(config.FAMILY));
>> admin.createTable(tableDescriptor);
>> }
>>
>> Jeff
>>


RE: API to get HBase replication status

2017-04-24 Thread ashish singhi
Hi, 

There is no API for that in ReplicationAdmin. You can try using,
1.  Admin#getClusterStatus
2. ClusterStatus#getServerLoad
3.a ServerLoad#getReplicationLoadSink
3.b ServerLoad#getReplicationLoadSourceList

From ReplicationLoadSink and ReplicationLoadSource you can access the methods
which will fetch these metric values for you.
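
A minimal sketch of that chain, assuming an HBase 1.1+ client where the per-server
load is fetched via ClusterStatus#getLoad(ServerName) (names as in the 1.x API;
adjust for your exact version):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.ReplicationLoadSource;

public class ReplicationStatus {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterStatus status = admin.getClusterStatus();
      for (ServerName sn : status.getServers()) {
        ServerLoad load = status.getLoad(sn);
        // One ReplicationLoadSource entry per replication peer on this RS.
        for (ReplicationLoadSource src : load.getReplicationLoadSourceList()) {
          System.out.println(sn + " peer=" + src.getPeerID()
              + " lastShippedTs=" + src.getTimeStampOfLastShippedOp()
              + " replicationLag=" + src.getReplicationLag()
              + " logQueueSize=" + src.getSizeOfLogQueue());
        }
      }
    }
  }
}

The sink-side metrics can be read from the same ServerLoad object via
getReplicationLoadSink().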

HTH

Regards,
Ashish

-Original Message-
From: Sreeram [mailto:sreera...@gmail.com] 
Sent: 24 April 2017 14:01
To: user@hbase.apache.org
Subject: API to get HBase replication status

Hi,

 I am trying to understand if the hbase shell commands to get the replication 
status are based on any underlying API.

Specifically I am trying to fetch the values of the last shipped timestamp and
replication lag per regionserver. ReplicationAdmin does not seem to be
providing this information (or maybe it's not obvious to me).

The version of HBase that I use is 1.2.0-cdh5.8.2

Any help on this regard ?

Thanks,
Sreeram


Re: Is this expected HBase replication behavior or am I doing something wrong?

2017-03-30 Thread Ashish Singhi
Hello,

Did you check the RegionServer logs? Was there any exception?

Regards,
Ashish

On Thu, Mar 30, 2017 at 10:14 PM, James Johansville <
james.johansvi...@gmail.com> wrote:

> Hello,
>
> I attempted HBase replication for the first time and am trying to
> understand how it works.
>
> I have three HBase clusters each with their own ZK ensemble, A, B, C. I
> wanted to have complete acyclical replication between all 3 clusters, so I
> added B as a peer of A, C as a peer of B, A as a peer of C. I enabled
> replications on the appropriate table (which were empty). I set
> hbase.replication=true in all 3 hbase-site.xml and restarted all 3
> clusters.
>
> So, I tried a few PUTs in each cluster's table. The PUTs would not get
> replicated; they stayed where they got upserted. There was no other traffic
> going to these tables.
>
> However, and this is where the behavior gets bizarre (for me) ... if I did
> something to trigger a restart of regions in some way, such as calling a
> no-op ALTER TABLE statement, then all the data got replicated just fine and
> as expected, and all three clusters have a consistent view. But add in a
> few more rows and nothing gets replicated until I do something to restart
> the regions. Something to do with memstore flushes, maybe?
>
> Is this expected behavior, or have I set something up incorrectly?
>
> Thanks,
> James
>


RE: [ANNOUNCE] - Welcome our new HBase committer Anastasia Braginsky

2017-03-27 Thread ashish singhi
Congrats and Welcome!

-Original Message-
From: ramkrishna vasudevan [mailto:ramkrishna.s.vasude...@gmail.com] 
Sent: 27 March 2017 18:08
To: d...@hbase.apache.org; user@hbase.apache.org
Subject: [ANNOUNCE] - Welcome our new HBase committer Anastasia Braginsky

Hi All

Welcome Anastasia Braginsky, one more female committer to HBase. She has been
active for a while now with her compacting memstore feature, and she, along with
Eshcar, has given a lot of talks at various meetups and HBaseCon on their feature.

Welcome onboard and looking forward to working with you, Anastasia!!!

Regards
Ram


RE: hbase table creation

2017-03-16 Thread ashish singhi
No.

Regards,
Ashish

-Original Message-
From: Rajeshkumar J [mailto:rajeshkumarit8...@gmail.com] 
Sent: 16 March 2017 18:05
To: user@hbase.apache.org
Subject: Re: hbase table creation

Also, Ashish, while specifying the region location, is there any option to use a
regular expression?

On Thu, Mar 16, 2017 at 5:55 PM, Rajeshkumar J 
wrote:

> thanks ashish. I got that as that region doesn't contain any data and 
> data is available in other regions.
>
> On Thu, Mar 16, 2017 at 5:48 PM, ashish singhi 
> 
> wrote:
>
>> Was any data added into this table region ? If not then you can skip 
>> this region directory from completebulkload.
>>
>> -Original Message-
>> From: Rajeshkumar J [mailto:rajeshkumarit8...@gmail.com]
>> Sent: 16 March 2017 17:44
>> To: user@hbase.apache.org
>> Subject: Re: hbase table creation
>>
>> Ashish,
>>
>> I have tried as u said but I dont have any data in this folder
>>
>> /hbase/tmp/t1/region1/d
>>
>> So in the log
>>
>> 2017-03-16 13:12:40,120 WARN  [main] mapreduce.LoadIncrementalHFiles:
>> Bulk load operation did not find any files to load in directory 
>> /hbase/tmp/t1/region1.  Does it contain files in subdirectories that 
>> correspond to column family names?
>>
>> So is this data corrupted?
>>
>>
>>
>> On Thu, Mar 16, 2017 at 5:14 PM, ashish singhi 
>> 
>> wrote:
>>
>> > Hi,
>> >
>> > You can try completebulkload tool to load the data into the table.
>> > Below is the command usage,
>> >
>> > hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
>> >
>> > usage: completebulkload /path/to/hfileoutputformat-output tablename 
>> > -Dcreate.table=no - can be used to avoid creation of table by this tool
>> >   Note: if you set this to 'no', then the target table must already 
>> > exist in HBase.
>> >
>> >
>> > For example:
>> > Consider tablename as t1 you have copied the data of t1 from 
>> > cluster1 to
>> > /hbase/tmp/t1 directory in cluster2 .
>> > Delete the recovered.edits directory or any other directory except 
>> > column family directory(store dir) from the region directory of 
>> > that table, Suppose you have two regions in the table t1 and list 
>> > output of table dir is like below
>> >
>> > ls /hbase/tmp/t1
>> >
>> > drwxr-xr-x/hbase/tmp/t1/.tabledesc
>> > -rw-r--r--/hbase/tmp/t1/.tabledesc/.tableinfo.01
>> > drwxr-xr-x/hbase/tmp/t1/.tmp
>> > drwxr-xr-x/hbase/tmp/t1/region1
>> > -rw-r--r--/hbase/tmp/t1/region1/.regioninfo
>> > drwxr-xr-x/hbase/tmp/t1/region1/d
>> > -rwxrwxrwx/hbase/tmp/t1/region1/d/0fcaf624cf124d7cab50ace0a6f0f9
>> > df_SeqId_4_
>> > drwxr-xr-x/hbase/tmp/t1/region1/recovered.edits
>> > -rw-r--r--/hbase/tmp/t1/region1/recovered.edits/2.seqid
>> > drwxr-xr-x/hbase/tmp/t1/region2
>> > -rw-r--r--/hbase/tmp/t1/region2/.regioninfo
>> > drwxr-xr-x/hbase/tmp/t1/region2/d
>> > -rwxrwxrwx/hbase/tmp/t1/region2/d/14925680d8a5457e9be1c05087f44d
>> > f5_SeqId_4_
>> > drwxr-xr-x/hbase/tmp/t1/region2/recovered.edits
>> > -rw-r--r--/hbase/tmp/t1/region2/recovered.edits/2.seqid
>> >
>> > Delete the /hbase/tmp/t1/region1/recovered.edits and 
>> > /hbase/tmp/t1/region2/recovered.edits
>> >
>> > And now run the completebulkload for each region like below,
>> >
>> > 1) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
>> > /hbase/tmp/t1/region1 t1
>> > 2) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
>> > /hbase/tmp/t1/region2 t1
>> >
>> > Note: The tool will create the table if doesn't exist with only one 
>> > region. If you want the same table properties as it is in cluster1 
>> > then you will have to create it manually in cluster2.
>> >
>> > I hope this helps.
>> >
>> > Regards,
>> > Ashish
>> >
>> > -Original Message-
>> > From: Rajeshkumar J [mailto:rajeshkumarit8...@gmail.com]
>> > Sent: 16 March 2017 16:46
>> > To: user@hbase.apache.org
>> > Subject: Re: hbase table creation
>> >
>> > ​Karthi,
>> >
>> >I have mentioned that as of now I dont have any data in that old 
>> > cluster. Now only have that copied files in the new cluster. I 
>> > think i can't use 

RE: hbase table creation

2017-03-16 Thread ashish singhi
Was any data added into this table region ? If not then you can skip this 
region directory from completebulkload.

-Original Message-
From: Rajeshkumar J [mailto:rajeshkumarit8...@gmail.com] 
Sent: 16 March 2017 17:44
To: user@hbase.apache.org
Subject: Re: hbase table creation

Ashish,

I have tried as u said but I dont have any data in this folder

/hbase/tmp/t1/region1/d

So in the log

2017-03-16 13:12:40,120 WARN  [main] mapreduce.LoadIncrementalHFiles: Bulk load 
operation did not find any files to load in directory /hbase/tmp/t1/region1.  
Does it contain files in subdirectories that correspond to column family names?

So is this data corrupted?



On Thu, Mar 16, 2017 at 5:14 PM, ashish singhi 
wrote:

> Hi,
>
> You can try completebulkload tool to load the data into the table. 
> Below is the command usage,
>
> hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
>
> usage: completebulkload /path/to/hfileoutputformat-output tablename  
> -Dcreate.table=no - can be used to avoid creation of table by this tool
>   Note: if you set this to 'no', then the target table must already 
> exist in HBase.
>
>
> For example:
> Consider tablename as t1 you have copied the data of t1 from cluster1 
> to
> /hbase/tmp/t1 directory in cluster2 .
> Delete the recovered.edits directory or any other directory except 
> column family directory(store dir) from the region directory of that 
> table, Suppose you have two regions in the table t1 and list output of 
> table dir is like below
>
> ls /hbase/tmp/t1
>
> drwxr-xr-x/hbase/tmp/t1/.tabledesc
> -rw-r--r--/hbase/tmp/t1/.tabledesc/.tableinfo.01
> drwxr-xr-x/hbase/tmp/t1/.tmp
> drwxr-xr-x/hbase/tmp/t1/region1
> -rw-r--r--/hbase/tmp/t1/region1/.regioninfo
> drwxr-xr-x/hbase/tmp/t1/region1/d
> -rwxrwxrwx/hbase/tmp/t1/region1/d/0fcaf624cf124d7cab50ace0a6f0f9
> df_SeqId_4_
> drwxr-xr-x/hbase/tmp/t1/region1/recovered.edits
> -rw-r--r--/hbase/tmp/t1/region1/recovered.edits/2.seqid
> drwxr-xr-x/hbase/tmp/t1/region2
> -rw-r--r--/hbase/tmp/t1/region2/.regioninfo
> drwxr-xr-x/hbase/tmp/t1/region2/d
> -rwxrwxrwx/hbase/tmp/t1/region2/d/14925680d8a5457e9be1c05087f44d
> f5_SeqId_4_
> drwxr-xr-x/hbase/tmp/t1/region2/recovered.edits
> -rw-r--r--/hbase/tmp/t1/region2/recovered.edits/2.seqid
>
> Delete the /hbase/tmp/t1/region1/recovered.edits and 
> /hbase/tmp/t1/region2/recovered.edits
>
> And now run the completebulkload for each region like below,
>
> 1) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
> /hbase/tmp/t1/region1 t1
> 2) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
> /hbase/tmp/t1/region2 t1
>
> Note: The tool will create the table if doesn't exist with only one 
> region. If you want the same table properties as it is in cluster1 
> then you will have to create it manually in cluster2.
>
> I hope this helps.
>
> Regards,
> Ashish
>
> -Original Message-
> From: Rajeshkumar J [mailto:rajeshkumarit8...@gmail.com]
> Sent: 16 March 2017 16:46
> To: user@hbase.apache.org
> Subject: Re: hbase table creation
>
> ​Karthi,
>
>    I have mentioned that as of now I don't have any data in that old
> cluster. Now I only have those copied files in the new cluster. I think I
> can't use this utility?
>
> On Thu, Mar 16, 2017 at 4:10 PM, karthi keyan 
> 
> wrote:
>
> > Ted-
> >
> > Cool !! Will consider hereafter .
> >
> > On Thu, Mar 16, 2017 at 4:06 PM, Ted Yu  wrote:
> >
> > > karthi:
> > > The link you posted was for 0.94
> > >
> > > We'd better use up-to-date link from refguide (see my previous reply).
> > >
> > > Cheers
> > >
> > > On Thu, Mar 16, 2017 at 3:26 AM, karthi keyan 
> > >  > >
> > > wrote:
> > >
> > > > Rajesh,
> > > >
> > > > Use HBase snapshots for backup and move the data from your "
> > > > /hbase/default/data/testing" with its snapshot and clone them to 
> > > > your destination cluster.
> > > >
> > > > Snapshot ref link  - http://hbase.apache.org/0.94/
> > > book/ops.snapshots.html
> > > > <http://hbase.apache.org/0.94/book/ops.snapshots.html>
> > > >
> > > >
> > > >
> > > > On Thu, Mar 16, 2017 at 3:51 PM, sudhakara st 
> > > > 
> > > > wrote:
> > > >
> > > > > You have to use 'copytable', here is more info 
> > > > > https://hbase.apache.org/book.html#copy.table
> > > > >
&g

RE: hbase table creation

2017-03-16 Thread ashish singhi
Hi,

You can try the completebulkload tool to load the data into the table. Below is the
command usage:

hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles

usage: completebulkload /path/to/hfileoutputformat-output tablename
 -Dcreate.table=no - can be used to avoid creation of table by this tool
  Note: if you set this to 'no', then the target table must already exist in 
HBase.


For example:
Consider the table name as t1; you have copied the data of t1 from cluster1 to the
/hbase/tmp/t1 directory in cluster2.
Delete the recovered.edits directory or any other directory except the column
family directory (store dir) from the region directory of that table.
Suppose you have two regions in the table t1 and the listing of the table dir is
like below:

ls /hbase/tmp/t1

drwxr-xr-x/hbase/tmp/t1/.tabledesc
-rw-r--r--/hbase/tmp/t1/.tabledesc/.tableinfo.01
drwxr-xr-x/hbase/tmp/t1/.tmp
drwxr-xr-x/hbase/tmp/t1/region1
-rw-r--r--/hbase/tmp/t1/region1/.regioninfo
drwxr-xr-x/hbase/tmp/t1/region1/d
-rwxrwxrwx/hbase/tmp/t1/region1/d/0fcaf624cf124d7cab50ace0a6f0f9df_SeqId_4_
drwxr-xr-x/hbase/tmp/t1/region1/recovered.edits
-rw-r--r--/hbase/tmp/t1/region1/recovered.edits/2.seqid
drwxr-xr-x/hbase/tmp/t1/region2
-rw-r--r--/hbase/tmp/t1/region2/.regioninfo
drwxr-xr-x/hbase/tmp/t1/region2/d
-rwxrwxrwx/hbase/tmp/t1/region2/d/14925680d8a5457e9be1c05087f44df5_SeqId_4_
drwxr-xr-x/hbase/tmp/t1/region2/recovered.edits
-rw-r--r--/hbase/tmp/t1/region2/recovered.edits/2.seqid

Delete the /hbase/tmp/t1/region1/recovered.edits and 
/hbase/tmp/t1/region2/recovered.edits

And now run the completebulkload for each region like below,

1) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
/hbase/tmp/t1/region1 t1 
2) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
/hbase/tmp/t1/region2 t1

Note: The tool will create the table with only one region if it doesn't already
exist. If you want the same table properties as in cluster1 then you will have to
create the table manually in cluster2.
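
If you prefer to drive the same tool from Java instead of the shell, a minimal
sketch (assuming the HBase 1.x API; the paths and table name are simply the ones
from the example above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.util.ToolRunner;

public class BulkLoadRegions {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Same arguments as on the command line: <hfile-dir> <table>
    ToolRunner.run(conf, new LoadIncrementalHFiles(conf),
        new String[] { "/hbase/tmp/t1/region1", "t1" });
    ToolRunner.run(conf, new LoadIncrementalHFiles(conf),
        new String[] { "/hbase/tmp/t1/region2", "t1" });
  }
}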

I hope this helps.

Regards,
Ashish

-Original Message-
From: Rajeshkumar J [mailto:rajeshkumarit8...@gmail.com] 
Sent: 16 March 2017 16:46
To: user@hbase.apache.org
Subject: Re: hbase table creation

​Karthi,

   I have mentioned that as of now I don't have any data in that old cluster.
Now I only have those copied files in the new cluster. I think I can't use this
utility?

On Thu, Mar 16, 2017 at 4:10 PM, karthi keyan 
wrote:

> Ted-
>
> Cool !! Will consider hereafter .
>
> On Thu, Mar 16, 2017 at 4:06 PM, Ted Yu  wrote:
>
> > karthi:
> > The link you posted was for 0.94
> >
> > We'd better use up-to-date link from refguide (see my previous reply).
> >
> > Cheers
> >
> > On Thu, Mar 16, 2017 at 3:26 AM, karthi keyan 
> >  >
> > wrote:
> >
> > > Rajesh,
> > >
> > > Use HBase snapshots for backup and move the data from your "
> > > /hbase/default/data/testing" with its snapshot and clone them to 
> > > your destination cluster.
> > >
> > > Snapshot ref link  - http://hbase.apache.org/0.94/
> > book/ops.snapshots.html
> > > 
> > >
> > >
> > >
> > > On Thu, Mar 16, 2017 at 3:51 PM, sudhakara st 
> > > 
> > > wrote:
> > >
> > > > You have to use 'copytable', here is more info 
> > > > https://hbase.apache.org/book.html#copy.table
> > > >
> > > > On Thu, Mar 16, 2017 at 3:46 PM, Rajeshkumar J < 
> > > > rajeshkumarit8...@gmail.com>
> > > > wrote:
> > > >
> > > > > I have copied hbase data of a table from one cluster to another.
> For
> > > > > instance I have a table testing and its data will be in the 
> > > > > path /hbase/default/data/testing
> > > > >
> > > > > I have copied these files from existing cluster to new 
> > > > > cluster. Is
> > > there
> > > > > any possibilty to create table and load data from these files 
> > > > > in
> the
> > > new
> > > > > cluster
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Regards,
> > > > ...sudhakara
> > > >
> > >
> >
>


RE: Pattern for Bulk Loading to Remote HBase Cluster

2017-03-09 Thread ashish singhi
If I understand your question, you are asking how to completebulkload files which
are on cluster1 into cluster2 without copying them to cluster2. The answer is that
with the existing code it's not possible.

Bq. How do I choose hdfs://storefile-outputdir in a way that does not perform 
an extra copy operation when completebulkload is invoked, without assuming 
knowledge of HBase server implementation details?

You can configure the output dir to the remote cluster's active NameNode IP, so
that the output of importtsv is written there, and then use completebulkload in
the remote cluster specifying this output dir path as its argument.
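
A minimal sketch of that idea for a custom bulk-load preparation job, assuming the
HBase 1.x MapReduce API; hdfs://remote-nn:8020 and the table name are placeholders
for your remote NameNode address and target table:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PrepareBulkLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "prepare-bulk-load");
    TableName name = TableName.valueOf("bktable");             // placeholder table
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(name);
         RegionLocator locator = conn.getRegionLocator(name)) {
      // Sets up the reducer/partitioner so the HFiles match the table's regions.
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
      // Fully qualified path: the HFiles land directly on the remote (HBase-side)
      // HDFS, regardless of the local fs.defaultFS.
      FileOutputFormat.setOutputPath(job,
          new Path("hdfs://remote-nn:8020/tmp/bktableoutput")); // placeholder NN
      // ... configure the mapper/input for your data, then job.waitForCompletion(true);
    }
  }
}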

Bq. In essence, how does my client application know that it should write to
hdfs://cluster2 even though the application is running in a context where 
fs.defaultFs is hdfs://cluster1?

If you are talking about importtsv then it reads the URI from the path and
connects to that respective NN. If you use the nameservice name in the path
instead of the active NN IP then you may have to write your own code, something
similar to importtsv, where you construct a remote cluster configuration object
and use it to write the output there. You can refer to HBASE-13153 for an idea to
understand it much better.

-Original Message-
From: Ben Roling [mailto:ben.rol...@gmail.com] 
Sent: 09 March 2017 19:53
To: user@hbase.apache.org
Subject: Re: Pattern for Bulk Loading to Remote HBase Cluster

I'm not sure you understand my question.  Or perhaps I just don't quite 
understand yours?

I'm not using importtsv.  If I was, and I was using the form that prepares 
StoreFiles for completebulkload, then my question would be, how do I 
(generically as an application acting as an HBase client, and using importtsv 
to load data) choose the path to which I write the StoreFiles?

The following is an example of importtsv from the documentation:

bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
-Dimporttsv.columns=a,b,c
-Dimporttsv.bulk.output=hdfs://storefile-outputdir <tablename> <hdfs-inputdir>


How do I choose hdfs://storefile-outputdir in a way that does not perform an 
extra copy operation when completebulkload is invoked, without assuming 
knowledge of HBase server implementation details?

In essence, how does my client application know that it should write to
hdfs://cluster2 even though the application is running in a context where 
fs.defaultFs is hdfs://cluster1?

How does the HBase installation share this information with client applications?

I know I can just go dig into the hdfs-site.xml on a RegionServer and figure 
this out (such as by looking at "hbase.rootdir" there), but my question is how 
to do it from the perspective of a generic HBase client application?

On Wed, Mar 8, 2017 at 11:13 PM ashish singhi 
wrote:

> Hi,
>
> Did you try giving the importtsv output path to remote HDFS ?
>
> Regards,
> Ashish
>
> -Original Message-
> From: Ben Roling [mailto:ben.rol...@gmail.com]
> Sent: 09 March 2017 03:22
> To: user@hbase.apache.org
> Subject: Pattern for Bulk Loading to Remote HBase Cluster
>
> My organization is looking at making some changes that would introduce 
> HBase bulk loads that write into a remote cluster.  Today our bulk 
> loads write to a local HBase.  By local, I mean the home directory of 
> the user preparing and executing the bulk load is on the same HDFS 
> filesystem as the HBase cluster.  In the remote cluster case, the 
> HBase being loaded to will be on a different HDFS filesystem.
>
> The thing I am wondering about is what the best pattern is for 
> determining the location to write HFiles to from the job preparing the bulk 
> load.
> Typical examples write the HFiles somewhere in the user's home directory.
> When HBase is local, that works perfectly well.  With remote HBase, it 
> can work, but results in writing the files twice: once from the 
> preparation job and a second time by the RegionServer when it reacts 
> to the bulk load by copying the HFiles into the filesystem it is running on.
>
> Ideally the preparation job would have some mechanism to know where to 
> write the files such that they are initially written on the same 
> filesystem as HBase itself.  This way the bulk load can simply move 
> them into the HBase storage directory like happens when bulk loading to a 
> local cluster.
>
> I've considered a pattern where the bulk load preparation job reads 
> the hbase.rootdir property and pulls the filesystem off of that.  
> Then, it sticks the output in some directory (e.g. /tmp) on that same 
> filesystem.
> I'm inclined to think that hbase.rootdir should only be considered a 
> server-side property and as such I shouldn't expect it to be present 
> in client configuration.  Under that assumption, this isn't really a 
> workable strategy.
>
> It feels like HBase should have a mechanism f

RE: Pattern for Bulk Loading to Remote HBase Cluster

2017-03-08 Thread ashish singhi
Hi,

Did you try giving the importtsv output path to remote HDFS ?

Regards,
Ashish

-Original Message-
From: Ben Roling [mailto:ben.rol...@gmail.com] 
Sent: 09 March 2017 03:22
To: user@hbase.apache.org
Subject: Pattern for Bulk Loading to Remote HBase Cluster

My organization is looking at making some changes that would introduce HBase 
bulk loads that write into a remote cluster.  Today our bulk loads write to a 
local HBase.  By local, I mean the home directory of the user preparing and 
executing the bulk load is on the same HDFS filesystem as the HBase cluster.  
In the remote cluster case, the HBase being loaded to will be on a different 
HDFS filesystem.

The thing I am wondering about is what the best pattern is for determining the 
location to write HFiles to from the job preparing the bulk load.
Typical examples write the HFiles somewhere in the user's home directory.
When HBase is local, that works perfectly well.  With remote HBase, it can 
work, but results in writing the files twice: once from the preparation job and 
a second time by the RegionServer when it reacts to the bulk load by copying 
the HFiles into the filesystem it is running on.

Ideally the preparation job would have some mechanism to know where to write 
the files such that they are initially written on the same filesystem as HBase 
itself.  This way the bulk load can simply move them into the HBase storage 
directory like happens when bulk loading to a local cluster.

I've considered a pattern where the bulk load preparation job reads the 
hbase.rootdir property and pulls the filesystem off of that.  Then, it sticks 
the output in some directory (e.g. /tmp) on that same filesystem.
I'm inclined to think that hbase.rootdir should only be considered a 
server-side property and as such I shouldn't expect it to be present in client 
configuration.  Under that assumption, this isn't really a workable strategy.

It feels like HBase should have a mechanism for sharing a staging directory 
with clients doing bulk loads.  Doing some searching, I ran across 
"hbase.bulkload.staging.dir", but my impression is that its intent does not 
exactly align with mine.  I've read about it here [1].  It seems the idea is 
that users prepare HFiles in their own directory, then SecureBulkLoad moves 
them to "hbase.bulkload.staging.dir".  A move like that isn't really a move 
when dealing with a remote HBase cluster.  Instead it is a copy.  A question 
would be why doesn't the job just write the files to 
"hbase.bulkload.staging.dir" initially and skip the extra step of moving them?

I've been inclined to invent my own application-specific Hadoop property to use 
to communicate an HBase-local staging directory with my bulk load preparation 
jobs.  I don't feel perfectly good about that idea though.  I'm curious to hear 
experiences or opinions from others.  Should I have my bulk load prep jobs look 
at "hbase.rootdir" or "hbase.bulkload.staging.dir" and make sure those get 
propagated to client configuration?  Is there some other mechanism that already 
exists for clients to discover an HBase-local directory to write the files?

[1] http://hbase.apache.org/book.html#hbase.secure.bulkload


RE: Hbase Replication || Impact on cluster || Storage

2017-01-16 Thread ashish singhi
Hi

bq. what if the destination cluster goes down, does it mean the master will start
to retain the slave data (replicated WALs) in the master cluster's memory?

Data which is yet to be replicated is tracked through Zookeeper currently.

bq. if its true can we not disable it?

We should disable the table replication only when we don't need that table data 
to be replicated any more.

Suggest you to read https://hbase.apache.org/book.html#_cluster_replication 
section from the HBase book.

Regards,
Ashish

-Original Message-
From: Manjeet Singh [mailto:manjeet.chand...@gmail.com] 
Sent: 15 January 2017 11:15
To: user@hbase.apache.org; BAD BOY **
Subject: Hbase Replication || Impact on cluster || Storage

Hi All,

I have question regarding Hbase replication The clusters participating in 
replication can be of different sizes. The master cluster relies on 
randomization to attempt to balance the stream of replication on the slave 
clusters. It is expected that the slave cluster has storage capacity to hold 
the replicated data. If a slave cluster does not have this memory or is 
inaccessible for other reasons, it throws an error and the master retains the 
WAL and retries the replication at intervals

So based on the above information, what if the destination cluster goes down;
does it mean the master will start to retain the slave data (replicated WALs) in
the master cluster's memory?
If it's true, can we not disable it?

Thanks
Manjeet

--
luv all


RE: [ANNOUNCE] Duo Zhang (张铎) joins the Apache HBase PMC

2016-09-06 Thread ashish singhi
Congratulations!

-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent: 07 September 2016 09:56
To: HBase Dev List; Hbase-User
Subject: [ANNOUNCE] Duo Zhang (张铎) joins the Apache HBase PMC

On behalf of the Apache HBase PMC I am pleased to announce that 张铎
has accepted our invitation to become a PMC member on the Apache HBase project. 
Duo has healthy notions on where the project should be headed and over the last 
year and more has been working furiously to take us there.

Please join me in welcoming Duo to the HBase PMC!

One of us!
St.Ack


RE: [ANNOUNCE] Dima Spivak joins the Apache HBase PMC

2016-08-31 Thread ashish singhi
Congratulations Dima!

-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org] 
Sent: 01 September 2016 01:08
To: d...@hbase.apache.org; user@hbase.apache.org
Subject: [ANNOUNCE] Dima Spivak joins the Apache HBase PMC

On behalf of the Apache HBase PMC I am pleased to announce that Dima Spivak has 
accepted our invitation to become a committer and PMC member on the Apache 
HBase project. Dima has been an active contributor for some time, particularly 
in development and contribution of release tooling that all of our RMs now use, 
such as the API compatibility checker. Dima has also been active in testing and 
voting on release candidates. Release voting is important to project health and 
momentum and demonstrates interest and capability above and beyond just 
committing. We wish to recognize this and make those release votes binding. 
Please join me in thanking Dima for his contributions to date and anticipation 
of many more contributions.

Welcome to the HBase project, Dima!

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via 
Tom White)


RE: How to enable log4j properties in hbase

2016-05-27 Thread ashish singhi
Ideally with those changes it should work.

If you are running any master operation then please check the active master
audit log file for the logs. Also check whether you have enough permissions on
that log file.
If it's still not logging then the only thing I can think of is to debug and find
out why it's not logging.

Regards,
Ashish

-Original Message-
From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com] 
Sent: 27 May 2016 12:53
To: user@hbase.apache.org
Subject: Re: How to enable log4j properties in hbase

Hi,

Can you share the steps? I think I am missing something.

Regards,
Mahesh

On Thu, May 26, 2016 at 2:36 PM, Mahesh Sankaran 
wrote:

> Yes, I have added the above properties.
>
> On Thu, May 26, 2016 at 1:10 PM, ashish singhi 
> 
> wrote:
>
>> Forgot to say before, I hope you have added 
>> "org.apache.hadoop.hbase.security.access.AccessController" class in 
>> the master, rs and region coprocessor configuration in hbase-site.xml.
>>
>> Regards,
>> Ashish
>>
>> -Original Message-
>> From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com]
>> Sent: 26 May 2016 12:33
>> To: user@hbase.apache.org
>> Subject: Re: How to enable log4j properties in hbase
>>
>> Hi Ashish,
>>
>> Yes i have restarted and tried hbase authorization operation. Still 
>> facing same issue.
>>
>> Thanks,
>>
>> Mahesh
>>
>> On Wed, May 25, 2016 at 2:04 PM, ashish singhi 
>> 
>> wrote:
>>
>> > Hi,
>> > Did you restart the HBase service ?
>> > Did you try any HBase operation ?
>> >
>> > Regards,
>> > Ashish
>> >
>> > -Original Message-
>> > From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com]
>> > Sent: 25 May 2016 13:32
>> > To: user@hbase.apache.org
>> > Subject: Re: How to enable log4j properties in hbase
>> >
>> > Hi Ashish,
>> >
>> > Thanks for your quick reply.
>> > I uncommented mentioned property. But it is not working.
>> >
>> > Thanks,
>> > Mahesh
>> >
>> > On Wed, May 25, 2016 at 1:10 PM, ashish singhi 
>> > 
>> > wrote:
>> >
>> > > Uncomment
>> > >
>> >
>> "log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE"
>> > > this line in log4j.properties file.
>> > >
>> > > Regards,
>> > > Ashish
>> > >
>> > > -Original Message-
>> > > From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com]
>> > > Sent: 25 May 2016 13:01
>> > > To: user@hbase.apache.org
>> > > Subject: Re: How to enable log4j properties in hbase
>> > >
>> > > Hi,
>> > >
>> > > And also am using hbase-1.2.0 from cdh 5.7
>> > >
>> > > Thanks,
>> > > Mahesh
>> > >
>> > > On Wed, May 25, 2016 at 12:56 PM, Mahesh Sankaran < 
>> > > sankarmahes...@gmail.com>
>> > > wrote:
>> > >
>> > > > Hi All,
>> > > >
>> > > > I have configured hbase authorization in my hbase cluster. Now 
>> > > > i want to enable audit logs for hbase authorization to monitor users.
>> > > > For that i did following changes.
>> > > >
>> > > > 1.vim /etc/hbase/conf/log4j.properties
>> > > >
>> > > > log4j.rootLogger=${hbase.root.logger}
>> > > > hbase.root.logger=INFO,console
>> > > > log4j.appender.console=org.apache.log4j.ConsoleAppender
>> > > > log4j.appender.console.target=System.err
>> > > > log4j.appender.console.layout=org.apache.log4j.PatternLayout
>> > > > log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd
>> > > > HH:mm:ss} %p
>> > > > %c{2}: %m%n
>> > > >
>> > > > log4j.logger.SecurityLogger=TRACE, RFAS 
>> > > > log4j.additivity.SecurityLogger=false
>> > > > log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
>> > > > log4j.appender.RFAS.File=${log.dir}/audit/SecurityAuth-hbase.au
>> > > > dit log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
>> > > > log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c:
>> > > > %m%n log4j.appender.RFAS.MaxFileSize=${max.log.file.size}
>> > > > log4j.appender.RFAS.MaxBackupIndex=${max.log.file.backup.index}
>> > > >
>> > > >
>> > > > 2.Restarted my hbase cluster. SecurityAuth-hbase.audit file is 
>> > > > created but there is no content inside that file.
>> > > >
>> > > > Kindly help me to enable audit logs for hbase authorization.
>> > > >
>> > > > Note: I am using cloudera
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Mahesh
>> > > >
>> > >
>> >
>>
>
>


RE: [ANNOUNCE] Mikhail Antonov joins the Apache HBase PMC

2016-05-27 Thread ashish singhi
Congratulations!

Regards,
Ashish

-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org] 
Sent: 27 May 2016 00:00
To: d...@hbase.apache.org
Cc: user@hbase.apache.org
Subject: [ANNOUNCE] Mikhail Antonov joins the Apache HBase PMC

On behalf of the Apache HBase PMC I am pleased to announce that Mikhail Antonov 
has accepted our invitation to become a PMC member on the Apache HBase project. 
Mikhail has been an active contributor in many areas, including recently taking 
on the Release Manager role for the upcoming 1.3.x code line. Please join me in 
thanking Mikhail for his contributions to date and anticipation of many more 
contributions.

Welcome to the PMC, Mikhail!

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via 
Tom White)


RE: How to enable log4j properties in hbase

2016-05-26 Thread ashish singhi
Forgot to say before: I hope you have added the
"org.apache.hadoop.hbase.security.access.AccessController" class in the master,
RS and region coprocessor configuration in hbase-site.xml.

Regards,
Ashish

-Original Message-
From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com] 
Sent: 26 May 2016 12:33
To: user@hbase.apache.org
Subject: Re: How to enable log4j properties in hbase

Hi Ashish,

Yes i have restarted and tried hbase authorization operation. Still facing same 
issue.

Thanks,

Mahesh

On Wed, May 25, 2016 at 2:04 PM, ashish singhi 
wrote:

> Hi,
> Did you restart the HBase service ?
> Did you try any HBase operation ?
>
> Regards,
> Ashish
>
> -Original Message-
> From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com]
> Sent: 25 May 2016 13:32
> To: user@hbase.apache.org
> Subject: Re: How to enable log4j properties in hbase
>
> Hi Ashish,
>
> Thanks for your quick reply.
> I uncommented mentioned property. But it is not working.
>
> Thanks,
> Mahesh
>
> On Wed, May 25, 2016 at 1:10 PM, ashish singhi 
> 
> wrote:
>
> > Uncomment
> >
> "log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE"
> > this line in log4j.properties file.
> >
> > Regards,
> > Ashish
> >
> > -Original Message-
> > From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com]
> > Sent: 25 May 2016 13:01
> > To: user@hbase.apache.org
> > Subject: Re: How to enable log4j properties in hbase
> >
> > Hi,
> >
> > And also am using hbase-1.2.0 from cdh 5.7
> >
> > Thanks,
> > Mahesh
> >
> > On Wed, May 25, 2016 at 12:56 PM, Mahesh Sankaran < 
> > sankarmahes...@gmail.com>
> > wrote:
> >
> > > Hi All,
> > >
> > > I have configured hbase authorization in my hbase cluster. Now i 
> > > want to enable audit logs for hbase authorization to monitor users.
> > > For that i did following changes.
> > >
> > > 1.vim /etc/hbase/conf/log4j.properties
> > >
> > > log4j.rootLogger=${hbase.root.logger}
> > > hbase.root.logger=INFO,console
> > > log4j.appender.console=org.apache.log4j.ConsoleAppender
> > > log4j.appender.console.target=System.err
> > > log4j.appender.console.layout=org.apache.log4j.PatternLayout
> > > log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd
> > > HH:mm:ss} %p
> > > %c{2}: %m%n
> > >
> > > log4j.logger.SecurityLogger=TRACE, RFAS 
> > > log4j.additivity.SecurityLogger=false
> > > log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
> > > log4j.appender.RFAS.File=${log.dir}/audit/SecurityAuth-hbase.audit
> > > log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
> > > log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: 
> > > %m%n log4j.appender.RFAS.MaxFileSize=${max.log.file.size}
> > > log4j.appender.RFAS.MaxBackupIndex=${max.log.file.backup.index}
> > >
> > >
> > > 2.Restarted my hbase cluster. SecurityAuth-hbase.audit file is 
> > > created but there is no content inside that file.
> > >
> > > Kindly help me to enable audit logs for hbase authorization.
> > >
> > > Note: I am using cloudera
> > >
> > > Thanks,
> > >
> > > Mahesh
> > >
> >
>


RE: How to enable log4j properties in hbase

2016-05-25 Thread ashish singhi
Hi,
Did you restart the HBase service ?
Did you try any HBase operation ?

Regards,
Ashish

-Original Message-
From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com] 
Sent: 25 May 2016 13:32
To: user@hbase.apache.org
Subject: Re: How to enable log4j properties in hbase

Hi Ashish,

Thanks for your quick reply.
I uncommented mentioned property. But it is not working.

Thanks,
Mahesh

On Wed, May 25, 2016 at 1:10 PM, ashish singhi 
wrote:

> Uncomment
> "log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE"
> this line in log4j.properties file.
>
> Regards,
> Ashish
>
> -Original Message-
> From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com]
> Sent: 25 May 2016 13:01
> To: user@hbase.apache.org
> Subject: Re: How to enable log4j properties in hbase
>
> Hi,
>
> And also am using hbase-1.2.0 from cdh 5.7
>
> Thanks,
> Mahesh
>
> On Wed, May 25, 2016 at 12:56 PM, Mahesh Sankaran < 
> sankarmahes...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > I have configured hbase authorization in my hbase cluster. Now i 
> > want to enable audit logs for hbase authorization to monitor users. 
> > For that i did following changes.
> >
> > 1.vim /etc/hbase/conf/log4j.properties
> >
> > log4j.rootLogger=${hbase.root.logger}
> > hbase.root.logger=INFO,console
> > log4j.appender.console=org.apache.log4j.ConsoleAppender
> > log4j.appender.console.target=System.err
> > log4j.appender.console.layout=org.apache.log4j.PatternLayout
> > log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd 
> > HH:mm:ss} %p
> > %c{2}: %m%n
> >
> > log4j.logger.SecurityLogger=TRACE, RFAS 
> > log4j.additivity.SecurityLogger=false
> > log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
> > log4j.appender.RFAS.File=${log.dir}/audit/SecurityAuth-hbase.audit
> > log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
> > log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n 
> > log4j.appender.RFAS.MaxFileSize=${max.log.file.size}
> > log4j.appender.RFAS.MaxBackupIndex=${max.log.file.backup.index}
> >
> >
> > 2.Restarted my hbase cluster. SecurityAuth-hbase.audit file is 
> > created but there is no content inside that file.
> >
> > Kindly help me to enable audit logs for hbase authorization.
> >
> > Note: I am using cloudera
> >
> > Thanks,
> >
> > Mahesh
> >
>


RE: How to enable log4j properties in hbase

2016-05-25 Thread ashish singhi
Uncomment the line
"log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE"
in the log4j.properties file.

Regards,
Ashish

-Original Message-
From: Mahesh Sankaran [mailto:sankarmahes...@gmail.com] 
Sent: 25 May 2016 13:01
To: user@hbase.apache.org
Subject: Re: How to enable log4j properties in hbase

Hi,

And also am using hbase-1.2.0 from cdh 5.7

Thanks,
Mahesh

On Wed, May 25, 2016 at 12:56 PM, Mahesh Sankaran 
wrote:

> Hi All,
>
> I have configured hbase authorization in my hbase cluster. Now i want 
> to enable audit logs for hbase authorization to monitor users. For 
> that i did following changes.
>
> 1.vim /etc/hbase/conf/log4j.properties
>
> log4j.rootLogger=${hbase.root.logger}
> hbase.root.logger=INFO,console
> log4j.appender.console=org.apache.log4j.ConsoleAppender
> log4j.appender.console.target=System.err
> log4j.appender.console.layout=org.apache.log4j.PatternLayout
> log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} 
> %p
> %c{2}: %m%n
>
> log4j.logger.SecurityLogger=TRACE, RFAS 
> log4j.additivity.SecurityLogger=false
> log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
> log4j.appender.RFAS.File=${log.dir}/audit/SecurityAuth-hbase.audit
> log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
> log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n 
> log4j.appender.RFAS.MaxFileSize=${max.log.file.size}
> log4j.appender.RFAS.MaxBackupIndex=${max.log.file.backup.index}
>
>
> 2.Restarted my hbase cluster. SecurityAuth-hbase.audit file is created 
> but there is no content inside that file.
>
> Kindly help me to enable audit logs for hbase authorization.
>
> Note: I am using cloudera
>
> Thanks,
>
> Mahesh
>


RE: Hbase Replication no longer replicating, help diagnose

2016-04-15 Thread ashish singhi
Let me explain in theory how it works (considering default configuration values).

Assume a peer RS is already handling 3 (hbase.regionserver.replication.handler.count) replication requests and they do not complete within the 1 minute (hbase.rpc.timeout) window (for some unknown reason, maybe a slow RS or slow network...). The source RS then gets a CallTimeoutException and resends the same request to that peer RS, so the request is added to the peer RS queue (max queue size = 30, hbase.regionserver.replication.handler.count * hbase.ipc.server.max.callqueue.length). Both running and waiting requests are counted towards callQueueSize, so (running + waiting requests) * 64MB (replication.source.size.capacity) will cross the 1GB call queue size (hbase.ipc.server.max.callqueue.size) and result in a CallQueueTooBigException.

Now, why are those running requests not completing? I assume this can be one reason: a peer RS receives a replication request and internally distributes this batch call to other RSs in the peer cluster, and this may get stuck because those peer RSs have also received replication requests from other source cluster RSs... so it might result in a kind of deadlock, where one peer RS is waiting for another peer RS to finish the request, and that RS in turn might be processing some other request and waiting for its completion.

So to avoid this problem, we need to find out why the peer RS is slow. Based on that and the network speed, adjust the hbase.rpc.timeout value and restart the source and peer clusters.
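For reference, the knobs mentioned above live in hbase-site.xml; a rough sketch with purely illustrative values (not recommendations for your cluster):

<property>
  <name>hbase.rpc.timeout</name>
  <value>300000</value> <!-- raise from the default 60000 ms if the peer is genuinely slow -->
</property>
<property>
  <name>replication.source.size.capacity</name>
  <value>16777216</value> <!-- ship smaller batches than the default 64MB -->
</property>

Both clusters need a restart after changing these.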

Regards,
Ashish

-Original Message-
From: Abraham Tom [mailto:work2m...@gmail.com] 
Sent: 14 April 2016 18:52
To: Hbase-User
Subject: Hbase Replication no longer replicating, help diagnose

my hbase replication has stopped

I am on hbase version 1.0.0-cdh5.4.8 (Cloudera build)

I have 2 clusters in 2 different datacenters

1 is master the other is slave



I see the following errors in log



2016-04-13 22:32:50,217 WARN
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint:
Can't replicate because of a local or network error:
java.io.IOException: Call to
hadoop2-private.sjc03.infra.com/10.160.22.99:60020 failed on local
exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1014, 
waitTime=121, operationTimeout=120 expired.
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1255)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1223)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:21783)
at 
org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:65)
at 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:161)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:696)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:410)
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=1014, 
waitTime=121, operationTimeout=120 expired.
at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1197)
... 7 more





which in turn fills the queue and I get

2016-04-13 22:35:19,555 WARN
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint:
Can't replicate because of an error on the remote cluster:
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.RpcServer$CallQueueTooBigException):
Call queue is full on /0.0.0.0:60020, is hbase.ipc.server.max.callqueue.size 
too small?
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1219)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:21783)
at 
org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:65)
at 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:161)
at 
org.apache.hadoop.hbase.replication.regionserver.Replicat

RE: Example of spinning up a Hbase mock style test for integration testing in scala

2016-03-14 Thread ashish singhi
Usually I pass -Dtest.build.data.basedirectory=D:/testDir as a VM argument when running the tests on Windows to avoid this problem.
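For example (the directory is just a placeholder; any short path works), when running the tests through Maven:

mvn test -Dtest.build.data.basedirectory=D:/testDir

or, when launching the test from an IDE, add -Dtest.build.data.basedirectory=D:/testDir to the run configuration's VM options.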

Regards,
Ashish

-Original Message-
From: Nkechi Achara [mailto:nkach...@googlemail.com] 
Sent: 15 March 2016 03:52
To: Ted Yu
Cc: user@hbase.apache.org
Subject: Re: Example of spinning up a Hbase mock style test for integration 
testing in scala

Hi Ted,

I believe it is an issue with long file name lengths on Windows: when I attempt to get to the directory it is trying to replicate the block to, I receive the ever annoying error of:

The filename or extension is too long.

Does anyone know how to fix this?


On 14 March 2016 at 18:42, Ted Yu  wrote:

> You can inspect the output from 'mvn dependency:tree' to see if any 
> incompatible hadoop dependency exists.
>
> FYI
>
> On Mon, Mar 14, 2016 at 10:26 AM, Parsian, Mahmoud 
> 
> wrote:
>
>> Hi Keech,
>>
>> Please post your sample test, its run log, version of Hbase , hadoop, 
>> … And make sure that hadoop-core-1.2.1.jar is not your classpath 
>> (causes many errors!).
>>
>> Best,
>> Mahmoud
>> From: Nkechi Achara > nkach...@googlemail.com>>
>> Date: Monday, March 14, 2016 at 10:14 AM
>> To: "user@hbase.apache.org" < 
>> user@hbase.apache.org>, Mahmoud Parsian 
>> < mpars...@illumina.com>
>>
>> Subject: Re: Example of spinning up a Hbase mock style test for 
>> integration testing in scala
>>
>>
>> Thanks Mahmoud,
>>
>> This is what I am using, but as the previous reply stated, I am
>> receiving an exception when starting the cluster.
>> Thinking about it, it looks to be more of a build problem of my hbase 
>> mini cluster,  as I am receiving the following error:
>>
>> 16/03/14 12:29:00 WARN datanode.DataNode: IOException in
>> BlockReceiver.run():
>>
>> java.io.IOException: Failed to move meta file for 
>> ReplicaBeingWritten, blk_1073741825_1001, RBW
>>
>>   getNumBytes() = 7
>>
>>   getBytesOnDisk()  = 7
>>
>>   getVisibleLength()= 7
>>
>>   getVolume()   =
>> C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-be
>> d8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\d
>> ata\data1\current
>>
>>   getBlockFile()=
>> C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-be
>> d8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\d
>> ata\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\rbw
>> \blk_1073741825
>>
>>   bytesAcked=7
>>
>>   bytesOnDisk=7 from
>> C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-be
>> d8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\d
>> ata\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\rbw
>> \blk_1073741825_1001.meta
>> to
>> C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-be
>> d8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\d
>> ata\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\fin
>> alized\subdir0\subdir0\blk_1073741825_1001.meta
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.m
>> oveBlockFiles(FsDatasetImpl.java:615)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.
>> addBlock(BlockPoolSlice.java:250)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.ad
>> dBlock(FsVolumeImpl.java:229)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.f
>> inalizeReplica(FsDatasetImpl.java:1119)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.f
>> inalizeBlock(FsDatasetImpl.java:1100)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.
>> finalizeBlock(BlockReceiver.java:1293)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.
>> run(BlockReceiver.java:1233)
>>
>> at java.lang.Thread.run(Thread.java:745)
>>
>> Caused by: 3: The system cannot find the path specified.
>>
>> at org.apache.hadoop.io.nativeio.NativeIO.renameTo0(Native Method)
>>
>> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:830)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.m
>> oveBlockFiles(FsDatasetImpl.java:613)
>>
>> ... 7 more
>>
>> 16/03/14 12:29:00 INFO datanode.DataNode: Starting CheckDiskError 
>> Thread
>>
>> Thanks,
>>
>> Keech
>>
>> On 14 Mar 2016 6:10 pm, "Parsian, Mahmoud" > mpars...@illumina.com>> wrote:
>> Hi Keech,
>>
>> You may use the org.apache.hadoop.hbase.HBaseCommonTestingUtility 
>> class to start a ZK, and an HBase cluster and then do your unit tests 
>> and integration.
>> I am using this with junit and it works very well. But I am using 
>> Java only.
>>
>> Best regards,
>> Mahmoud Parsian
>>
>>
>> On 3/13/16, 11:52 PM, "Nkechi Achara" > nkach...@googlemail.com>> wrote:
>>
>> >Hi,
>> >
>> >I am trying to find an example of how to spin up a Hbase server in a 
>> >mo

RE: Hbase testing utility

2016-02-22 Thread ashish singhi
Looks like you are missing the Hadoop DLL files from your PATH, which are required to run Hadoop processes on Windows.
If you don't have the Hadoop DLL files then you can generate your own. Google "Steps to build Hadoop bin distribution for Windows".

Regards,
Ashish

-Original Message-
From: Gaurav Agarwal [mailto:gaurav130...@gmail.com] 
Sent: 22 February 2016 16:34
To: user@hbase.apache.org
Subject: Hbase testing utility

> I am trying to use the HBase testing utility to start a minicluster on my local machine but
am getting an exception
> java.lang.UnsatisfiedLinkError:
org.apache.hadoop.io.nativeio.NativeIO$Windows.access
>
> Please let me know what needs to be done


RE: HBase replication seems to be not working with Kerberos cross realm trust

2015-12-16 Thread ashish singhi
Hi all.

After looking more into the code, we found that currently cross-realm trust can work in HBase only when the FQDN in the Kerberos principal for the HBase processes is the hostname.
So we changed the Kerberos principal accordingly and HBase replication is working fine.
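For illustration only (the realm below is a placeholder, not our real value), a hostname-based principal of this shape is what the SASL handshake expects, configured in hbase-site.xml:

<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>

where _HOST expands to each node's fully qualified hostname at startup.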

Maybe we can enhance our SASL framework to also support a non-hostname FQDN in the Kerberos principal.

Regards,
Ashish Singhi

From: ashish singhi
Sent: 14 December 2015 19:03
To: user
Subject: HBase replication seems to be not working with Kerberos cross realm 
trust

Hi all.

We are using HBase 1.0.2 and Java 1.8.0_51
HBase replication is not working for us in Kerberos cross realm trust.
We have followed all the instructions provided at 
http://www.cloudera.com/content/www/en-us/documentation/archive/cdh/4-x/4-5-0/CDH4-Security-Guide/cdh4sg_topic_8_4.html

We are getting the following exception in the active cluster RS log,

2015-12-14 17:16:43,768 | WARN  | 
regionserver/host-10-19-92-192/10.19.92.192:21302.replicationSource,peer1 | 
Can't replicate because of a local or network error:  | 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:295)
java.io.IOException: Couldn't setup connection for 
hbase/hadoop.hadoop@hadoop.com 
to hbase/hadoop.hadoop@hadoop.com
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:664)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:636)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:744)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:895)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:864)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1209)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
at 
org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:79)
at 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:351)
at 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:335)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS 
initiate failed
at 
org.apache.hadoop.hbase.security.HBaseSaslRpcClient.readStatus(HBaseSaslRpcClient.java:153)
at 
org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:189)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:610)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:736)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:733)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:733)
... 13 more


Any pointers will be very helpful here.

P.S: We have tested Hadoop distcp tool and it seems to be working for us in the 
same env.

Regards,
Ashish Singhi


HBase replication seems to be not working with Kerberos cross realm trust

2015-12-14 Thread ashish singhi
Hi all.

We are using HBase 1.0.2 and Java 1.8.0_51
HBase replication is not working for us in Kerberos cross realm trust.
We have followed all the instructions provided at 
http://www.cloudera.com/content/www/en-us/documentation/archive/cdh/4-x/4-5-0/CDH4-Security-Guide/cdh4sg_topic_8_4.html

We are getting the following exception in the active cluster RS log,

2015-12-14 17:16:43,768 | WARN  | 
regionserver/host-10-19-92-192/10.19.92.192:21302.replicationSource,peer1 | 
Can't replicate because of a local or network error:  | 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:295)
java.io.IOException: Couldn't setup connection for 
hbase/hadoop.hadoop@hadoop.com to hbase/hadoop.hadoop@hadoop.com
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:664)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:636)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:744)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:895)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:864)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1209)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
at 
org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:79)
at 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:351)
at 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:335)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS 
initiate failed
at 
org.apache.hadoop.hbase.security.HBaseSaslRpcClient.readStatus(HBaseSaslRpcClient.java:153)
at 
org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:189)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:610)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:736)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:733)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:733)
... 13 more


Any pointers will be very helpful here.

P.S: We have tested Hadoop distcp tool and it seems to be working for us in the 
same env.

Regards,
Ashish Singhi


Re: org.apache.hadoop.hbase.exceptions.DeserializationException: Missing pb magic PBUF prefix

2015-10-24 Thread Ashish Singhi
As you mentioned in one of the previous mails, this issue is due to the
hbase-indexer code.
Can you post your problem on the NGDATA forum?

Regards,
Ashish Singhi

On Sat, Oct 24, 2015 at 2:31 AM, Pankil Doshi  wrote:

> I tried setting up using hdfs, still I have the same issue.
>
> On Fri, Oct 23, 2015 at 11:28 AM, Pankil Doshi 
> wrote:
>
> > One other thing which is different in my setup is I am using filesystem
> > for hbase rather hbase-indexer needs hdfs setup to be fully functional.
> So
> > I need to change that.
> >
> > Also, do you have local hbase setup or hbase cluster mode ?
> >
> > Pankil
> >
> > On Fri, Oct 23, 2015 at 11:23 AM, Pankil Doshi 
> > wrote:
> >
> >> Hi Beeshma,
> >>
> >> Thanks for your response.
> >>
> >> I am running zookeeper locally. but I am not managing it with hbase i.e
> I
> >> have this set:
> >> export HBASE_MANAGES_ZK=false
> >>
> >> and also:
> >> 
> >> hbase.cluster.distributed
> >> true
> >> 
> >>
> >> even though I have everything running locally in standalone mode.
> >>
> >> If I dont set "hbase.cluster.distributed"  I am seeing zookeeper being
> >> started with start of my hbase. I am not sure if there is any other good
> >> way not to start or stop zookeeper with hbase as by only setting
> >>  (HBASE_MANAGES_ZK=false) it doesnt work.
> >>
> >> Were you able to setup hbase-indexer at all ?
> >>
> >> Pankil
> >>
> >>
> >> On Fri, Oct 23, 2015 at 10:51 AM, beeshma r 
> wrote:
> >>
> >>> Hi Pankil,
> >>>
> >>> Are you sure your hbase is running with external zookeeper ensemble ?
> >>>
> >>> As per documentation on Hbase Replication
> >>>
> >>>
> >>>
> http://www.cloudera.com/content/www/en-us/documentation/archive/cdh/4-x/4-2-0/CDH4-Installation-Guide/cdh4ig_topic_20_11.html
> >>>
> >>> zookeeper must not be managed by HBase,.But i havent tried this
> >>>
> >>> On Fri, Oct 23, 2015 at 9:55 AM, Ashish Singhi <
> >>> ashish.singhi.apa...@gmail.com> wrote:
> >>>
> >>> > Hi Pankil.
> >>> >
> >>> > A similar issue was reported few days back (
> >>> >
> >>> >
> >>>
> http://search-hadoop.com/m/YGbbknQt52rKBDS1&subj=HRegionServer+failed+due+to+replication
> >>> > ).
> >>> >
> >>> > May be this is due to hbase-indexer code ?
> >>> > One more Q, did you upgrade hbase from 0.94 and you see this issue ?
> >>> >
> >>> > Regards,
> >>> > Ashish Singhi
> >>> >
> >>> > On Fri, Oct 23, 2015 at 2:47 AM, Pankil Doshi 
> >>> wrote:
> >>> >
> >>> > > Hi,
> >>> > >
> >>> > > I am using hbase-0.98.15-hadoop2 and hbase-indexer from lily (
> >>> > > http://ngdata.github.io/hbase-indexer/).
> >>> > >
> >>> > > I am seeing below error when I add my indexer:
> >>> > >
> >>> > >
> >>> > > 2015-10-22 14:08:27,468 INFO  [regionserver60020-EventThread]
> >>> > > replication.ReplicationTrackerZKImpl: /hbase/replication/peers
> znode
> >>> > > expired, triggering peerListChanged event
> >>> > >
> >>> > > 2015-10-22 14:08:27,473 ERROR [regionserver60020-EventThread]
> >>> > > regionserver.ReplicationSourceManager: Error while adding a new
> peer
> >>> > >
> >>> > > org.apache.hadoop.hbase.replication.ReplicationException: Error
> >>> adding
> >>> > peer
> >>> > > with id=Indexer_newtest2
> >>> > >
> >>> > > at
> >>> > >
> >>> > >
> >>> >
> >>>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createAndAddPeer(ReplicationPeersZKImpl.java:386)
> >>> > >
> >>> > > at
> >>> > >
> >>> > >
> >>> >
> >>>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.peerAdded(ReplicationPeersZKImpl.java:358)
> >>> > >
> >>> > > at
> >>> > >
> >>> > >
> >>>

Re: Start hbase with replication mode

2015-10-23 Thread Ashish Singhi
Hi,

Looks like you are using start-hbase.sh script to start the hbase processes.

Can you try,
1. hbase-daemon.sh start master
2. hbase-daemon.sh start regionserver

Regards,
Ashish Singhi

On Fri, Oct 23, 2015 at 11:50 PM, beeshma r  wrote:

> HI Ted ,
>
> Can you please advise what changes I need in HBase? Because hbase
> starts with its own zookeeper.
> I need hbase to run with an external zookeeper
>
> Thanks
> Beeshma
>
> On Wed, Oct 21, 2015 at 9:51 AM, beeshma r  wrote:
>
> > Hi
> >
> > i just want to hbase as a replication mode.As per documentation zookeeper
> > must not be managed by HBase
> >
> > so created below settings
> >
> > *zookeeper zoo.cfg(/home/beeshma/zookeeper-3.4.6/cfg)*
> >
> > tickTime=2000
> > dataDir=/home/beeshma/zookeeper
> > clientPort=2181
> > initLimit=5
> > syncLimit=2
> >
> > *hbase-site.xml*
> >
> > 
> > 
> > hbase.master
> > master:9000
> > 
> > 
> > hbase.rootdir
> > hdfs://localhost:9000/hbase
> >   
> >   
> > hbase.zookeeper.property.dataDir
> > /home/beeshma/zookeeper-3.4.6/conf
> >   
> > 
> > 
> >   hbase.cluster.distributed
> >   true
> > 
> > 
> > 
> > hbase.zookeeper.property.clientPort
> > 2181
> > 
> > 
> > hbase.zookeeper.quorum
> > localhost
> > 
> > 
> >   
> >   
> > hbase.replication
> > true
> >   
> >   
> >   
> > replication.source.ratio
> > 1.0
> >   
> >   
> >   
> > replication.source.nb.capacity
> > 1000
> >   
> >   
> >   
> > replication.replicationsource.implementation
> > com.ngdata.sep.impl.SepReplicationSource
> >   
> > 
> >
> >
> > *in hbase-env.shexport HBASE_MANAGES_ZK=false*
> >
> > when i start zookeeper and hbase, i am able to see the following confusion
> > zookeeper started with the following specifications
> > 2015-10-21 04:22:13,810 [myid:] - INFO  [main:Environment@100] - Server
> > environment:java.io.tmpdir=/tmp
> > 2015-10-21 04:22:13,810 [myid:] - INFO  [main:Environment@100] - Server
> > environment:java.compiler=
> > 2015-10-21 04:22:13,813 [myid:] - INFO  [main:Environment@100] - Server
> > environment:os.name=Linux
> > 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> > environment:os.arch=amd64
> > 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> > environment:os.version=3.11.0-12-generic
> > 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> > environment:user.name=beeshma
> > 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> > environment:user.home=/home/beeshma
> > 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> > environment:user.dir=/home/beeshma/zookeeper-3.4.6/bin
> > 2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@755] -
> > tickTime set to 2000
> > 2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@764] -
> > minSessionTimeout set to -1
> > 2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@773] -
> > maxSessionTimeout set to -1
> > 2015-10-21 04:22:13,893 [myid:] - INFO  [main:NIOServerCnxnFactory@94] -
> > binding to port 0.0.0.0/0.0.0.0:2181
> >
> > But Hbase starts with own zookeeper
> > in hbase zookeeper log
> > 2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
> > environment:java.io.tmpdir=/tmp
> > 2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
> > environment:java.compiler=
> > 2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
> > environment:os.name=Linux
> > 2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
> > environment:os.arch=amd64
> > 2015-10-21 04:25:12,357 INFO  [main] server.ZooKeeperServer: Server
> > environment:os.version=3.11.0-12-generic
> > 2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
> > environment:user.name=beeshma
> > 2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
> > environment:user.home=/home/beeshma
> > 2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
> > environment:user.dir=/home/beeshma/hbase-0.98.6.1-hadoop2
> > 2015-10-21 04:25:12,423 INFO  [main] server.ZooKeeperServer: tickTime set
> > to 3000
> > 2015-10-21 04:25:12,423 INFO  [main] server.ZooKeeperServer:
> > minSessionTimeout

Re: org.apache.hadoop.hbase.exceptions.DeserializationException: Missing pb magic PBUF prefix

2015-10-23 Thread Ashish Singhi
Hi Pankil.

A similar issue was reported a few days back (
http://search-hadoop.com/m/YGbbknQt52rKBDS1&subj=HRegionServer+failed+due+to+replication
).

Maybe this is due to the hbase-indexer code?
One more Q: did you upgrade HBase from 0.94 and then start seeing this issue?

Regards,
Ashish Singhi

On Fri, Oct 23, 2015 at 2:47 AM, Pankil Doshi  wrote:

> Hi,
>
> I am using hbase-0.98.15-hadoop2 and hbase-indexer from lily (
> http://ngdata.github.io/hbase-indexer/).
>
> I am seeing below error when I add my indexer:
>
>
> 2015-10-22 14:08:27,468 INFO  [regionserver60020-EventThread]
> replication.ReplicationTrackerZKImpl: /hbase/replication/peers znode
> expired, triggering peerListChanged event
>
> 2015-10-22 14:08:27,473 ERROR [regionserver60020-EventThread]
> regionserver.ReplicationSourceManager: Error while adding a new peer
>
> org.apache.hadoop.hbase.replication.ReplicationException: Error adding peer
> with id=Indexer_newtest2
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createAndAddPeer(ReplicationPeersZKImpl.java:386)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.peerAdded(ReplicationPeersZKImpl.java:358)
>
> at
>
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.peerListChanged(ReplicationSourceManager.java:514)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$PeersWatcher.nodeChildrenChanged(ReplicationTrackerZKImpl.java:189)
>
> at
>
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:468)
>
> at
>
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
>
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
>
> Caused by: org.apache.hadoop.hbase.replication.ReplicationException: Error
> starting the peer state tracker for peerId=Indexer_newtest2
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createPeer(ReplicationPeersZKImpl.java:454)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createAndAddPeer(ReplicationPeersZKImpl.java:384)
>
> ... 6 more
>
> Caused by: org.apache.zookeeper.KeeperException$DataInconsistencyException:
> KeeperErrorCode = DataInconsistency
>
> at org.apache.hadoop.hbase.zookeeper.ZKUtil.convert(ZKUtil.java:2063)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.startStateTracker(ReplicationPeerZKImpl.java:85)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createPeer(ReplicationPeersZKImpl.java:452)
>
> ... 7 more
>
> Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException:
> Missing pb magic PBUF prefix
>
> at
>
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.expectPBMagicPrefix(ProtobufUtil.java:270)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.parseStateFrom(ReplicationPeerZKImpl.java:243)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.isStateEnabled(ReplicationPeerZKImpl.java:232)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.readPeerStateZnode(ReplicationPeerZKImpl.java:90)
>
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.startStateTracker(ReplicationPeerZKImpl.java:83)
>
> ... 8 more
>
>
>
> My Hbase-site.xml:
>
>
> 
>
> 
>
> 
>
> 
>
> 
>
>
> 
>
> hbase.cluster.distributed
>
> true
>
> 
>
> //Here you have to set the path where you want HBase to store its files.
>
>
>
>   hbase.rootdir
>
>   file:/tmp/HBase/HFiles
>
>
>
> 
>
>   hbase.zookeeper.property.clientPort
>
>   2181
>
>   Property from ZooKeeper's config zoo.cfg.
>
>   The port at which the clients will connect.
>
>   
>
> 
>
> 
>
>   hbase.zookeeper.quorum
>
>   localhost
>
>   Comma separated list of servers in the ZooKeeper Quorum.
>
>   For example, "host1.mydomain.com,host2.mydomain.com,
> host3.mydomain.com
> ".
>
>   By default this is set to localhost for local and pseudo-distributed
> modes
>
>   of operation. For a fully-distributed setup, this should be set to a
> full
>
>   list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in
> hbase-env.sh
>
>   this is the list of servers which we will start/stop ZooKeeper on.
>
>   
>
> 
>
> 
>
>hbase.zookeeper.property.dataDir
>
>/tmp/zookeeper
>
>Property from ZooKeeper config zoo.cfg.
>
>The direct

Re: start_replication command not available in hbase shell in HBase0.98

2015-10-13 Thread Ashish Singhi
Hi Anil.

I did not check this in 0.98.
By default, whenever we add a peer, its state will be ENABLED.

There is no child node under peer-state, so its 'ls' output will be empty. You can use the ZK 'get' command to find its value, but the output will not be in a human-readable format.

To check the peer-state value you can use the zk_dump command in the hbase shell or the web UI.
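For example (the peer id below is taken from your earlier mail; the output shown is only indicative):

hbase> list_peers
hbase> zk_dump

or, from the zookeeper client: get /hbase-unsecure/replication/peers/prod-hbase/peer-state. The 'get' should show the PBUF-encoded ENABLED/DISABLED state even though 'ls' shows no children.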

Did you find any errors in the RS logs for replication ?

Regards,
Ashish Singhi

On Wed, Oct 14, 2015 at 5:04 AM, anil gupta  wrote:

> I found that those command are deprecated as per this Jira:
> https://issues.apache.org/jira/browse/HBASE-8861
>
> Still, after enabling peers the replication is not starting. We looked into
> zk. Its peer state value is null/blank:
> zknode:  ls /hbase-unsecure/replication/peers/prod-hbase/peer-state
> []
>
> Can anyone tell me what is probably going on?
>
> On Tue, Oct 13, 2015 at 3:56 PM, anil gupta  wrote:
>
> > Hi All,
> >
> > I am using HBase 0.98(HDP2.2).
> > As per the documentation here:
> >
> >
> http://www.cloudera.com/content/cloudera/en/documentation/cdh4/v4-3-1/CDH4-Installation-Guide/cdh4ig_topic_20_11.html
> >
> > I am trying to run start_replication command. But, i m getting following
> > error:
> > hbase(main):013:0> start_replication
> > NameError: undefined local variable or method `start_replication' for
> > #
> >
> > Is start_replication not a valid command in HBase0.98? If its deprecated
> > then what is the alternate command?
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>


RE: Hbase import/export change number of rows

2015-09-22 Thread ashish singhi
How did you count the rows in the tables, using rowcounter ?
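If not, the bundled MapReduce row counter is a good cross-check (table names below are the ones from your mail):

hbase org.apache.hadoop.hbase.mapreduce.RowCounter "Table1"
hbase org.apache.hadoop.hbase.mapreduce.RowCounter "Table2"

For small tables, count 'Table1' in the hbase shell also works.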

Regards,
Ashish Singhi

-Original Message-
From: Jean-Marc Spaggiari [mailto:jean-m...@spaggiari.org] 
Sent: 22 September 2015 18:15
To: user
Subject: Re: Hbase import/export change number of rows

Very interesting. Are you able to figure out which rows are missing? What version of HBase are you using? How big is your table? What do the two Export and Import tools report? Is ingestion stopped while doing the export/import sequence? Can you reproduce this every time?

Thanks,

JM

2015-09-22 8:03 GMT-04:00 OM PARKASH Nain :

> I am using Hbase export using command.
>
>   hbase org.apache.hadoop.hbase.mapreduce.Export "Table1" "hdfs path"
>
> Then I use import command from HDFS to Hbase Table;
>
> hbase org.apache.hadoop.hbase.mapreduce.Import "hdfs path" "Table2"
>
> Then I count number of row in both tables, I found mismatch number of 
> rows
>
> Table1:8301 Table2:8032
>
> Please define what goes wrong with my system.
>


RE: How to get the creation time of a HTable?

2015-08-14 Thread ashish singhi
I think we do not have any such attribute for tables, but we could implement one like we have for snapshots.
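As a rough sketch of what I mean (CREATED_TIME below is a made-up attribute name, not something HBase defines), the client could stamp it at create time and read it back when deciding what to clean up:

HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
htd.addFamily(new HColumnDescriptor("cf"));
htd.setValue("CREATED_TIME", String.valueOf(System.currentTimeMillis())); // custom attribute
admin.createTable(htd); // admin is an org.apache.hadoop.hbase.client.Admin

// later, when cleaning up old tables
String created = admin.getTableDescriptor(TableName.valueOf("t1")).getValue("CREATED_TIME");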

Regards,
Ashish Singhi

-Original Message-
From: Serega Sheypak [mailto:serega.shey...@gmail.com] 
Sent: 14 August 2015 15:28
To: user
Subject: Re: How to get the creation time of a HTable?

Hm... you can check underlying table catalog on HDFS and see time properties 
there?

2015-08-14 11:54 GMT+02:00 ShaoFeng Shi :

> Hello the community,
>
> In my case, I want to cleanup the HTables that older than certain 
> days; so we need to get the table's creation time, is there any API to 
> get this? If not, I may have to add such an attribute when creating 
> the table;
>
> Thanks for any suggestion;
>
> Shaofeng Shi,
> Apache Kylin
>


Re: Removing .oldlogs may lead to replication problem?

2015-08-07 Thread Ashish Singhi
Hi.
> Could it be because of manually deletion of the WALs in .oldlogs?
Yes very much.
Ideally it is not suggested to remove the logs manually from .oldlogs. It
is auto cleaned by LogCleaner thread. If it is not cleaned up then there is
some reason for it.
There are cases when a WAL file that has not yet been replicated is moved to the archive directory (.oldlogs). The ReplicationLogCleaner thread will then ensure that these files are not cleaned up before they are replicated to the other cluster.
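If you want to verify this on your clusters, the queue of WALs still pending replication is visible in zookeeper (a sketch, assuming the default zookeeper.znode.parent of /hbase; the server and peer names are placeholders):

$ hbase zkcli
ls /hbase/replication/rs
ls /hbase/replication/rs/<regionserver>,<port>,<startcode>/<peer-id>

Any WAL file name still listed under a peer there would have kept the matching file in .oldlogs until it was shipped.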
I hope I have answered your question.

Regards,
Ashish Singhi

On Fri, Aug 7, 2015 at 12:44 PM, Shuai Lin  wrote:

> Hi all,
>
> We have two hbase cluster (one prod, one backup) running hbase 0.94.6 (from
> cdh4.) and have setup master-master replication.
>
> Last week we find the .oldlogs in the prod cluster was growing very large
> (13TB) and decided to remove it.
>
> But not until yesterday do we find that the replication had been in problem
> for almost a mongth due to some mis-configured firewall rules. After fixing
> the firewall the replication seems to be ok now, but some data which can be
> found in the prod cluster can't be found in the backup cluster. Could it be
> because of manually deletion of the WALs in .oldlogs?
>
> I have read a lot about replication and WALs, but could not be sure whether
> the logs in .oldlogs is related to replication.
>
> Can anyone share some thoughts? Thanks!
>


Re: Unable to create hbase table in namespace

2015-07-31 Thread Ashish Singhi
This issue was reported and fixed as part of HBASE-12098.
Fix is available in 0.98.7+ releases.

Regards,
Ashish Singhi

On Fri, Jul 31, 2015 at 6:11 PM, Shashi Vishwakarma <
shashi.vish...@gmail.com> wrote:

> I am using 0.98.
> On 31 Jul 2015 6:04 pm, "Ted Yu"  wrote:
>
> > Which HBase release are you using ?
> >
> > Thanks
> >
> >
> >
> > > On Jul 31, 2015, at 3:51 AM, Shashi Vishwakarma <
> > shashi.vish...@gmail.com> wrote:
> > >
> > > Hi
> > >
> > > I am trying to create table in hbase namspace but it is giving me
> > > permission exception but i confirmed with admin that he has given
> > > permission to my user.
> > > Below is an exception that i am getting on executing below command.
> > >
> > > create 'svish_ns:emp','empfam'
> > >
> > > *Exceptio :*
> > >
> > >
> > > *ERROR: org.apache.hadoop.hbase.security.AccessDeniedException:
> > > Insufficient permissions for user 'svish' (global, action=CREATE)*
> > >
> > > It is looking at global level instead of namespace level.It should
> check
> > > for permission at namespace level.
> > >
> > > Below is command that is used for granting permission.
> > >
> > > grant 'svish','RWC','@svish_ns'
> > >
> > > Any pointers would be great help.
> > >
> > > Thanks
> > > Shashi
> >
>


RE: How to limit the HBase server bandwidth for scan requests from MapReduce?

2015-05-28 Thread ashish singhi
This feature is available in HBase 1.1 as part of HBASE-13205
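For reference, once you are on 1.1 the throttle is configured from the shell; a sketch (user/table names and limits below are placeholders):

hbase> set_quota TYPE => THROTTLE, USER => 'mr_user', LIMIT => '10M/sec'
hbase> set_quota TYPE => THROTTLE, TABLE => 'merged_table', LIMIT => '1000req/sec'
hbase> set_quota TYPE => THROTTLE, USER => 'mr_user', LIMIT => NONE

Quotas also need hbase.quota.enabled=true in hbase-site.xml.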

Regards,
Ashish

-Original Message-
From: ShaoFeng Shi [mailto:shaofeng...@gmail.com] 
Sent: 28 May 2015 12:40
To: user@hbase.apache.org
Subject: Re: How to limit the HBase server bandwidth for scan requests from 
MapReduce?

Hi Ted, thanks for giving the link, our scenario is just such a case; We're 
looking forward to see this feature in HBase 1.1; Thanks!

2015-05-27 22:11 GMT+08:00 Ted Yu :

> Please see
> https://blogs.apache.org/hbase/entry/the_hbase_request_throttling_feat
> ure
>
> Cheers
>
> On Tue, May 26, 2015 at 12:35 AM, ShaoFeng Shi 
> wrote:
>
> > Hello,
> >
> > Currently we're running a MapReduce over live htables to do table 
> > merge (introduced at 
> > https://hbase.apache.org/0.94/book/mapreduce.example.html
> );
> > At the samtime these tables are still serving user scan requests; As 
> > this is a full table scan which may take much server resources, we 
> > want to control the impact to users during the MapReduce, avoding 
> > remarkable performance downgrade during the MR; I see there are two 
> > parameters might be related: caching and cacheBlocks, like :
> >
> >
> > scan.setCaching(500);
> >
> > scan.setCacheBlocks(false);  // don't set to true for MR jobs
> >
> >
> > But still want to double check with the experts here, is there other 
> > ways to control this? Thanks!
> >
> > Shaofeng Shi
> > Apache Kylin (incubation)
> >
>


Re: [VOTE] First release candidate for HBase 1.1.0 (RC0) is available.

2015-04-29 Thread Ashish Singhi
Hi Nick.
bq. (HBase-1.1.0RC0) is available for download at
https://dist.apache.org/repos/dist/dev/hbase/hbase-1.0.1RC2/
Is the above URL correct? From the name it does not seem to be.

-- Ashish

On Thu, Apr 30, 2015 at 11:05 AM, Nick Dimiduk  wrote:

> I'm happy to announce the first release candidate of HBase 1.1.0
> (HBase-1.1.0RC0) is available for download at
> https://dist.apache.org/repos/dist/dev/hbase/hbase-1.0.1RC2/
>
> Maven artifacts are also available in the staging repository
> https://repository.apache.org/content/repositories/orgapachehbase-1076
>
> Artifacts are signed with my code signing subkey 0xAD9039071C3489BD,
> available in the Apache keys directory
> https://people.apache.org/keys/committer/ndimiduk.asc and in
> http://people.apache.org/~ndimiduk/KEY
>
> There's also a signed tag for this release at
>
> https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=tag;h=2c102dbe56116ca342abd08e906d70d900048a55
>
> HBase 1.1.0 is the first minor release in the HBase 1.x line, continuing on
> the theme of bringing a stable, reliable database to the Hadoop and NoSQL
> communities. This release includes nearly 200 resolved issues above the
> 1.0.x series to date. Notable features include:
>
>  - Async RPC client (HBASE-12684)
>  - Simple RPC throttling (HBASE-11598)
>  - Improved compaction controls (HBASE-8329, HBASE-12859)
>  - New extension interfaces for coprocessor users, better supporting
> projects like Phoenix (HBASE-12972, HBASE-12975)
>  - Per-column family flush (HBASE-10201)
>  - WAL on SSD (HBASE-12848)
>  - BlockCache in Memcached (HBASE-13170)
>  - Tons of region replica enhancements around META, WAL, and bulk loading
> (HBASE-11574, HBASE-11568, HBASE-11571, HBASE-11567)
>
> The full list of issues can be found at
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12329043
>
> Please try out this candidate and vote +/-1 by midnight Pacific time on
> 2015-05-06 as to whether we should release these bits as HBase 1.1.0.
>
> Thanks,
> Nick
>


RE: Please welcome new HBase committer Jing Chen (Jerry) He

2015-04-01 Thread ashish singhi
Congratulations, Jerry!

-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org] 
Sent: 01 April 2015 23:23
To: d...@hbase.apache.org; user@hbase.apache.org
Subject: Please welcome new HBase committer Jing Chen (Jerry) He

On behalf of the Apache HBase PMC, I am pleased to announce that Jerry He has 
accepted the PMC's invitation to become a committer on the project. We 
appreciate all of Jerry's hard work and generous contributions thus far, and 
look forward to his continued involvement.

Congratulations and welcome, Jerry!

--
​​

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via 
Tom White)


RE: Please welcome new HBase committer Srikanth Srungarapu

2015-04-01 Thread ashish singhi
Congratulations, Srikanth!

-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org] 
Sent: 01 April 2015 23:23
To: d...@hbase.apache.org; user@hbase.apache.org
Subject: Please welcome new HBase committer Srikanth Srungarapu

On behalf of the Apache HBase PMC, I am pleased to announce that Srikanth 
Srungarapu has accepted the PMC's invitation to become a committer on the 
project. We appreciate all of Srikanth's hard work and generous contributions 
thus far, and look forward to his continued involvement.

Congratulations and welcome, Srikanth!

--
​​

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via 
Tom White)


RE: [ANNOUNCE] Sean Busbey joins the Apache HBase PMC

2015-03-26 Thread ashish singhi
Congratulations, Sean!

-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org] 
Sent: 26 March 2015 22:56
To: user@hbase.apache.org; d...@hbase.apache.org
Subject: [ANNOUNCE] Sean Busbey joins the Apache HBase PMC

On behalf of the Apache HBase PMC I"m pleased to announce that Sean Busbey has 
accepted our invitation to become a PMC member on the Apache HBase project. 
Sean has been an active and positive contributor in many areas, including on 
project meta-concerns such as versioning, build infrastructure, code reviews, 
etc. He's a natural and we're looking forward to many more future contributions.

Welcome to the PMC, Sean!

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via 
Tom White)


RE: Re: RE: bulkload problem

2015-01-08 Thread ashish singhi
Since you are using hadoop script to run completebulkload it is required.

If you try like below then it is not required.
bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
/xiaoming/loghouseip loghouse

-Ashish

-Original Message-
From: aafri [mailto:wozhuibang...@qq.com] 
Sent: 08 January 2015 13:26
To: user
Subject: Re: RE: bulkload problem

Yes, it is running well. Thank you, Huaweier.


It looks like I missed this env configuration: HADOOP_CLASSPATH.
Is this configuration necessary when using bulkload?








-- Original Message --
From: "ashish singhi";;
Sent: Thursday, January 8, 2015, 3:26 PM
To: "user@hbase.apache.org"; 

Subject: RE: bulkload problem



Can you try running it like below.

HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop 
jar ${HBASE_HOME}/lib/ hbase-server-0.98.8-hadoop2.jar completebulkload 
/xiaoming/loghouseip loghouse

-Ashish
-Original Message-
From: aafri [mailto:wozhuibang...@qq.com] 
Sent: 08 January 2015 12:31
To: user
Subject: bulkload problem

I just used bulkload to load data into the hbase table loghouse.
I have made the hfiles in the path hdfs:/xiaoming/loghouseip. I executed this command:
hadoop jar $HBASE_HOME/lib/hbase-server-0.98.8-hadoop2.jar completebulkload /xiaoming/loghouseip loghouse
and then it returned:


Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/hbase/filter/Filter
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
at java.lang.Class.getMethod0(Class.java:2813)
at java.lang.Class.getMethod(Class.java:1663)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.(ProgramDriver.java:60)
at org.apache.hadoop.util.ProgramDriver.addClass(ProgramDriver.java:103)
at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.hbase.filter.Filter
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 12 more



I have set the CLASSPATH on Linux.


What's wrong?


RE: bulkload problem

2015-01-07 Thread ashish singhi
Can you try running it like below.

HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop 
jar ${HBASE_HOME}/lib/ hbase-server-0.98.8-hadoop2.jar completebulkload 
/xiaoming/loghouseip loghouse

-Ashish
-Original Message-
From: aafri [mailto:wozhuibang...@qq.com] 
Sent: 08 January 2015 12:31
To: user
Subject: bulkload problem

I just used bulkload to load data into the hbase table loghouse.
I have made the hfiles in the path hdfs:/xiaoming/loghouseip. I executed this command:
hadoop jar $HBASE_HOME/lib/hbase-server-0.98.8-hadoop2.jar completebulkload /xiaoming/loghouseip loghouse
and then it returned:


Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/hbase/filter/Filter
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
at java.lang.Class.getMethod0(Class.java:2813)
at java.lang.Class.getMethod(Class.java:1663)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.(ProgramDriver.java:60)
at org.apache.hadoop.util.ProgramDriver.addClass(ProgramDriver.java:103)
at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.hbase.filter.Filter
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 12 more



I have set the CLASSPATH on Linux.


What's wrong?


RE: Reg. HBase client API calls in secure cluster (Kerberos)

2014-12-10 Thread ashish singhi
Hi.

When I get this exception I usually set System.setProperty("java.security.krb5.conf", krbfilepath); in my client code, where krbfilepath is the path to the krb5.conf file.
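A minimal client-side sketch (the principal, keytab and krb5 paths below are placeholders; it assumes the cluster's hbase-site.xml is on the client classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

System.setProperty("java.security.krb5.conf", "C:\\kerberos\\krb5.conf");
Configuration conf = HBaseConfiguration.create();
conf.set("hadoop.security.authentication", "kerberos");
conf.set("hbase.security.authentication", "kerberos");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("anand@EXAMPLE.COM", "C:\\kerberos\\anand.keytab");
// the HTable/Scan calls made after this point run as the logged-in Kerberos user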

Regards

-Original Message-
From: AnandaVelMurugan Chandra Mohan [mailto:ananthu2...@gmail.com] 
Sent: 10 December 2014 15:41
To: user@hbase.apache.org
Subject: Re: Reg. HBase client API calls in secure cluster (Kerberos)

Hi,

Thanks for responding back. But I get this error now

Failure to initialize security context [Caused by GSSException: Invalid name 
provided (Mechanism level: Could not load configuration file 
C:\Windows\krb5.ini (The system cannot find the file specified))]

My problem is very similar to this stackflow question

http://stackoverflow.com/questions/21193453/how-to-access-secure-kerberized-hadoop-using-just-java-api

Basically I want to run the examples in this link
http://java.dzone.com/articles/handling-big-data-hbase-part-4 against my secure 
cluster.

Regards,
Anand

On Wed, Dec 10, 2014 at 11:58 AM, Srikanth Srungarapu  wrote:

> Hi,
> Please take a look at the patch added as part of HBASE-12366 
> . There will be a 
> new AuthUtil. launchAuthChore() which should help in your case. And 
> also, the documentation patch is here HBASE-12528 
>  just in case. Hope 
> this helps.
> Thanks,
> Srikanth.
>
> On Tue, Dec 9, 2014 at 10:11 PM, AnandaVelMurugan Chandra Mohan < 
> ananthu2...@gmail.com> wrote:
>
> > Hi All,
> >
> > My Hbase admin has set up kerberos authentication in our cluster. 
> > Now all the HBase Java client API calls hang indefinitely.
> > I could scan/get in HBase shell, but when I do the same through the 
> > java api,it hangs in the scan statement.
> >
> > This is code which was working earlier, but not now. Earlier I was
> running
> > this code outside of the cluster without any impersonation.
> >
> > Configuration config = HBaseConfiguration.create(); HTable table = 
> > new HTable(config, "Assets"); Scan Scan = new Scan(); ResultScanner 
> > results = table.getScanner(Scan);
> >
> > Do I need to impersonate as any super user to make this work now? 
> > How do
> I
> > pass the kerberos credentials? Any pointers would be greatly appreciated.
> > --
> > Regards,
> > Anand
> >
>



--
Regards,
Anand


RE: can not enable snapshot function on hbase 0.94.6

2014-10-16 Thread ashish singhi
See http://stackoverflow.com/questions/21777018/big-data-hbase if it can help.

Regards
Ashish

-Original Message-
From: ch huang [mailto:justlo...@gmail.com] 
Sent: 17 October 2014 12:02
To: user@hbase.apache.org
Subject: can not enable snapshot function on hbase 0.94.6

hi, maillist:
 I installed CDH4.4 with hbase version 0.94.6 (no Cloudera Manager involved), but when I test the snapshot function I get the error below. Actually, I added the following info into my /etc/hbase/conf/hbase-site.xml (on each node) and restarted the hbase cluster, but I still get the same error. Does anyone know why?


hbase.snapshot.enabled
true



hbase(main):002:0> snapshot 'demo','demo_2014'

ERROR: java.io.IOException: java.io.IOException:
java.lang.UnsupportedOperationException: To use snapshots, You must add to the 
hbase-site.xml of the HBase Master: 'hbase.snapshot.enabled' property with 
value 'true'.
at
org.apache.hadoop.hbase.master.HMaster.snapshot(HMaster.java:2008)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
at
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1428)
Caused by: java.lang.UnsupportedOperationException: To use snapshots, You must 
add to the hbase-site.xml of the HBase Master:
'hbase.snapshot.enabled' property with value 'true'.
at
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:890)
at
org.apache.hadoop.hbase.master.HMaster.snapshot(HMaster.java:2006)
... 6 more

Here is some help for this command:
Take a snapshot of specified table. Examples:

  hbase> snapshot 'sourceTable', 'snapshotName'


Master shuts down during log splitting on restart

2014-09-24 Thread ashish singhi
Hi All.

I am using 0.98.6 HBase.

I observed that when I have the following values set in my hbase-site.xml file


hbase.regionserver.wal.encryption
false


hbase.regionserver.hlog.reader.impl
org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader


hbase.regionserver.hlog.writer.impl
org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter


then, while log splitting during the hbase service restart, the master shut down with the following exception.

Exception in master log

2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
Master server abort: loaded coprocessors are: 
[org.apache.hadoop.hbase.security.access.AccessController]
2014-09-24 17:14:28,590 FATAL [master:host-10-18-40-18:6] master.HMaster: 
Unhandled exception. Starting shutdown.
java.io.IOException: error or interrupted while splitting logs in 
[hdfs://10.18.40.18:8020/tmp/hbase-ashish/hbase/WALs/host-10-18-40-18,60020,1411558717849-splitting]
 Task = installed = 6 done = 0 error = 6
at 
org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:378)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:415)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:307)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:298)
at 
org.apache.hadoop.hbase.master.HMaster.splitMetaLogBeforeAssignment(HMaster.java:1071)
at 
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:863)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:612)
at java.lang.Thread.run(Thread.java:745)

Exception in region server log

2014-09-24 20:10:16,535 WARN  [RS_LOG_REPLAY_OPS-host-10-18-40-18:60020-1] 
regionserver.SplitLogWorker: log splitting of 
WALs/host-10-18-40-18,60020,1411558717849-splitting/host-10-18-40-18%2C60020%2C1411558717849.1411558724316.meta
 failed, returning error
java.io.IOException: Cannot get log reader
at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:161)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:660)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:569)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
at 
org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
at 
org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException: Unable to find suitable 
constructor for class 
org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec
at 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:39)
at 
org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:101)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:242)
at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:247)
at 
org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader.initAfterCompression(SecureProtobufLogReader.java:138)
at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:85)
at 
org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:113)
... 11 more
Caused by: java.lang.NoSuchMethodException: 
org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec.<init>(org.apache.hadoop.conf.Configuration,
 org.apache.hadoop.hbase.regionserver.wal.CompressionContext)
at java.lang.Class.getConstructor0(Class.java:2849)
at java.lang.Class.getDeclaredConstructor(Class.java:2053)
at 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:33)
... 17 more

Question:

When WAL encryption is disabled, should SecureWALCellCodec still be set as the 
cellCodecClsName in the WALHeader.Builder object?
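
To make the failure concrete, here is a minimal, self-contained sketch (with 
stand-in classes only, not HBase's own code) of the reflection step that 
appears to fail: the reader takes the codec class recorded in the WAL header 
(the cellCodecClsName mentioned above) and looks up a 
(Configuration, CompressionContext) constructor on it; when that constructor 
is missing, the NoSuchMethodException seen in the region server log is raised.

import java.lang.reflect.Constructor;

public class CodecCtorLookup {

    // Stand-ins for org.apache.hadoop.conf.Configuration and
    // org.apache.hadoop.hbase.regionserver.wal.CompressionContext.
    static class Configuration {}
    static class CompressionContext {}

    // A codec that only declares a (Configuration) constructor,
    // mimicking the failing case in the log above.
    static class SampleSecureCodec {
        SampleSecureCodec(Configuration conf) {}
    }

    public static void main(String[] args) {
        try {
            // Mirrors what ReflectionUtils.instantiateWithCustomCtor appears to do:
            // resolve a constructor taking (Configuration, CompressionContext).
            Constructor<SampleSecureCodec> ctor = SampleSecureCodec.class
                    .getDeclaredConstructor(Configuration.class, CompressionContext.class);
            System.out.println("Found constructor: " + ctor);
        } catch (NoSuchMethodException e) {
            // This is the exception that gets wrapped into the
            // "Unable to find suitable constructor" error in the log above.
            System.out.println("No suitable constructor: " + e.getMessage());
        }
    }
}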



Regards,
Ashish Singhi


RE: One question regarding bulk load

2014-04-07 Thread ashish singhi
Yes. Thanks Kashif for pointing it out. There was an empty line at the end of 
the file.
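
For anyone hitting the same thing, below is a small, purely illustrative Java 
helper (not part of HBase; the class name and file paths are placeholders) 
that drops blank lines from the input file before it is copied to HDFS and fed 
to importtsv. With -Dimporttsv.skip.bad.lines=false a single empty line is 
enough to fail the whole job.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class StripBlankLines {
    public static void main(String[] args) throws IOException {
        Path in = Paths.get(args[0]);   // e.g. comma_separated_3columns.txt
        Path out = Paths.get(args[1]);  // cleaned copy to upload to HDFS
        List<String> kept = Files.readAllLines(in).stream()
                .filter(line -> !line.trim().isEmpty()) // drop empty/whitespace-only lines
                .collect(Collectors.toList());
        Files.write(out, kept);
        System.out.println("Kept " + kept.size() + " non-empty lines");
    }
}

Any equivalent cleanup (even a quick manual check of the last line of the 
file) works just as well.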

Regards
Ashish
-Original Message-
From: Kashif Jawed Siddiqui [mailto:kashi...@huawei.com] 
Sent: 07 April 2014 15:28
To: user@hbase.apache.org
Cc: d...@hbase.apache.org
Subject: RE: One question regarding bulk load

Hi,

Please check if your file contains empty lines (maybe at the beginning 
or the end).

Since -Dimporttsv.skip.bad.lines=false is set, any empty lines will 
cause this error.

Regards
KASHIF

-Original Message-
From: ashish singhi [mailto:ashish.sin...@huawei.com] 
Sent: 07 April 2014 13:56
To: user@hbase.apache.org
Cc: d...@hbase.apache.org
Subject: One question regarding bulk load

Hi all.

I have one question regarding bulk load.
How can I load data where a few rows have empty column values using the bulk 
load tool?

I tried the following simple example in HBase 0.94.11 and Hadoop 2, using the 
bulk load tool with a table having three columns, where the second column 
value is empty in a few rows.


- Data in the file is in the below format:

row0,value1,value0
row1,,value1
row2,value3,value2
row3,,value3
row4,value5,value4
row5,,value5
row6,value7,value6
row7,,value7
row8,value9,value8

- When I execute the command

hadoop jar /hbase-0.94.11-security.jar importtsv 
-Dimporttsv.skip.bad.lines=false -Dimporttsv.separator=, 
-Dimporttsv.columns=HBASE_ROW_KEY,cf1:c1,cf1:c2 
-Dimporttsv.bulk.output=/bulkdata/comma_separated_3columns 
comma_separated_3columns /comma_separated_3columns.txt



I get the below Exception.



2014-04-07 11:15:01,870 INFO  [main] mapreduce.Job 
(Job.java:printTaskEvents(1424)) - Task Id : 
attempt_1396526639698_0028_m_00_2, Status : FAILED

Error: java.io.IOException: 
org.apache.hadoop.hbase.mapreduce.ImportTsv$TsvParser$BadTsvLineException: No 
delimiter

at 
org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper.map(TsvImporterTextMapper.java:135)

at 
org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper.map(TsvImporterTextMapper.java:33)

at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)

at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)

at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)

at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)

Regards,
Ashish Singhi


One question regarding bulk load

2014-04-07 Thread ashish singhi
Hi all.

I have one question regarding bulk load.
How can I load data where a few rows have empty column values using the bulk 
load tool?

I tried the following simple example in HBase 0.94.11 and Hadoop 2, using the 
bulk load tool with a table having three columns, where the second column 
value is empty in a few rows.


- Data in the file is in the below format:

row0,value1,value0
row1,,value1
row2,value3,value2
row3,,value3
row4,value5,value4
row5,,value5
row6,value7,value6
row7,,value7
row8,value9,value8

- When I execute the command

hadoop jar /hbase-0.94.11-security.jar importtsv 
-Dimporttsv.skip.bad.lines=false -Dimporttsv.separator=, 
-Dimporttsv.columns=HBASE_ROW_KEY,cf1:c1,cf1:c2 
-Dimporttsv.bulk.output=/bulkdata/comma_separated_3columns 
comma_separated_3columns /comma_separated_3columns.txt



I get the below Exception.



2014-04-07 11:15:01,870 INFO  [main] mapreduce.Job 
(Job.java:printTaskEvents(1424)) - Task Id : 
attempt_1396526639698_0028_m_00_2, Status : FAILED

Error: java.io.IOException: 
org.apache.hadoop.hbase.mapreduce.ImportTsv$TsvParser$BadTsvLineException: No 
delimiter

at 
org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper.map(TsvImporterTextMapper.java:135)

at 
org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper.map(TsvImporterTextMapper.java:33)

at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)

at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)

at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)

at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
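
For reference, a minimal sketch of what I understand the parsing difference to 
be (purely illustrative, not the ImportTsv source): a row with an empty middle 
column still contains the separator characters, while a completely empty line 
contains none, which seems to match the "No delimiter" message.

public class DelimiterCheck {

    // Rough stand-in for the separator check that ImportTsv's parser seems to apply.
    static void check(String line, char separator) {
        if (line.indexOf(separator) < 0) {
            // ImportTsv reports BadTsvLineException("No delimiter") here; with
            // importtsv.skip.bad.lines=false that fails the whole job.
            System.out.println("BAD : \"" + line + "\" has no delimiter");
        } else {
            System.out.println("OK  : \"" + line + "\"");
        }
    }

    public static void main(String[] args) {
        check("row1,,value1", ','); // empty middle column, separators present -> parses
        check("", ',');             // empty line, no separator -> "No delimiter"
    }
}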

Regards,
Ashish Singhi