RE: how to get the last row inserted into a hbase table

2018-07-11 Thread Ming


thanks Josh,

My application leaves the timestamp at the default, so each put/delete gets
a timestamp generated by HBase itself. I was thinking there might be a way
to get the last such timestamp.
But thinking about it again, you are right: the data is sorted by rowkey, so
there is no quick way to get the biggest timestamp without a full scan, or
at least a scan of the memstore, since the last row written must still be in
the memstore. But there is no such API a client can invoke.
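
For reference, a minimal sketch of that brute-force check (assuming the
HBase 1.x client API; the table name is a placeholder), scanning everything
and tracking the maximum cell timestamp:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class MaxTimestampScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) { // placeholder name
      long maxTs = 0L;
      try (ResultScanner scanner = table.getScanner(new Scan())) {
        for (Result r : scanner) {
          for (Cell c : r.rawCells()) { // every cell carries its write timestamp
            maxTs = Math.max(maxTs, c.getTimestamp());
          }
        }
      }
      System.out.println("newest cell timestamp: " + maxTs);
    }
  }
}

It is exactly the exhaustive search you describe, so it is only practical
for small tables.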

Let me think about it some more.

thanks,
Ming

-Original Message-
From: Josh Elser  
Sent: Thursday, July 12, 2018 12:55 AM
To: user@hbase.apache.org
Subject: Re: how to get the last row inserted into a hbase table

Unless you are including the date+time in the rowKey yourself, no.

HBase has exactly one index for fast lookups, and that is the rowKey. 
Any other query operation is (essentially) an exhaustive search.

On 7/11/18 12:07 PM, Ming wrote:
> Hi, all,
>
> Is there a way to get the last row put/deleted into an HBase table?
>
> In other words, how can I tell the last time an HBase table was changed?
> I was trying to check the HDFS file stats, but HBase has a memstore, so
> that is not a good way, and the HFile location is internal to HBase.
>
> My purpose is to quickly check the last modified timestamp for a given
> HBase table.
>
> Thanks,
>
> Ming




Re: Query for OldWals and use of WAl for Hbase indexer

2018-07-11 Thread Reid Chan
oldWALs are supposed to be cleaned by a background chore in the master, so I
also doubt they are still needed.

HBASE-20352 (for the 1.x versions) speeds up cleaning of oldWALs; it may
address your concern that "OldWals is quite huge".


R.C




From: Manjeet Singh 
Sent: 12 July 2018 08:19:21
To: user@hbase.apache.org
Subject: Re: Query for OldWals and use of WAl for Hbase indexer

I have one more question

If Solr has its own data, i.e. it keeps the indexed data in its shards, and
HBase keeps its data in the data folder, why are the oldWALs still needed?

Thanks
Manjeet singh

On Wed, 11 Jul 2018, 23:19 Manjeet Singh, 
wrote:

> Thanks Sean for your reply
>
> I still have some questions unanswered:
> Q1: How does HBase synchronize with the HBase indexer?
> Q2: What optimizations can I apply?
> Q3: As my stats show, the data in oldWALs is quite huge, so it is not
> being cleared by my HMaster. How can I fix the HDFS space issue this
> causes?
>
> Thanks
> Manjeet Singh
>
> On Wed, Jul 11, 2018 at 9:33 PM, Sean Busbey  wrote:
>
>> Presuming you're using the Lily indexer[1], yes it relies on hbase's
>> built in cross-cluster replication.
>>
>> The replication system stores WALs until it can successfully send them
>> for replication. If you look in ZK you should be able to see which
>> regionserver(s) are waiting to send those WALs over. The easiest way
>> to do this is probably to look at the "zk dump" web page on the
>> Master's web ui[2].
>>
>> Once you have the particular region server(s), take a look at their
>> logs for messages about difficulty sending edits to the replication
>> peer you have set up for the destination solr collection.
>>
>> If you remove the WALs then the solr collection will have a hole in
>> it. Depending on how far behind you are, it might be quicker to 1)
>> remove the replication peer, 2) wait for old wals to clear, 3)
>> reenable replication, 4) use a batch indexing tool to index data
>> already in the table.
>>
>> [1]:
>>
>> http://ngdata.github.io/hbase-indexer/
>>
>> [2]:
>>
>> The specifics will vary depending on your installation, but the page
>> is essentially at a URL like
>> https://active-master-host.example.com:22002/zk.jsp
>>
>> the link is on the master UI landing page, near the bottom, in the
>> description of the "ZooKeeper Quorum" row. it's the end of "Addresses
>> of all registered ZK servers. For more, see zk dump."
>>
>> On Wed, Jul 11, 2018 at 10:16 AM, Manjeet Singh
>>  wrote:
>> > Hi All,
>> >
>> > I have a query regarding HBase replication and oldWALs.
>> >
>> > HBase version: 1.2.1
>> >
>> > To enable HBase indexing we run the command below on the table:
>> >
>> > alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}
>> >
>> > Doing this actually enables replication, which hbase-indexer requires;
>> > as per my understanding the indexer uses the HBase WAL (please correct
>> > me if I am wrong).
>> >
>> > So the question is: how does HBase synchronize with the Solr indexer?
>> > What is the role of replication? What optimizations can we apply in
>> > order to reduce the data size?
>> >
>> >
>> > I can see that our oldWALs keep filling up; if the HMaster itself takes
>> > care of them, why have they reached 7.2 TB? And what if I delete them,
>> > does it impact Solr indexing?
>> >
>> > 7.2 K   21.5 K  /hbase/.hbase-snapshot
>> > 0   0   /hbase/.tmp
>> > 0   0   /hbase/MasterProcWALs
>> > 18.3 G  60.2 G  /hbase/WALs
>> > 28.7 G  86.1 G  /hbase/archive
>> > 0   0   /hbase/corrupt
>> > 1.7 T   5.2 T   /hbase/data
>> > 42  126 /hbase/hbase.id
>> > 7   21  /hbase/hbase.version
>> > 7.2 T   21.6 T  /hbase/oldWALs
>> >
>> >
>> >
>> >
>> > Thanks
>> > Manjeet Singh
>>
>
>
>
> --
> luv all
>


Re: Query for OldWals and use of WAl for Hbase indexer

2018-07-11 Thread Manjeet Singh
I have one more question

If Solr has its own data, i.e. it keeps the indexed data in its shards, and
HBase keeps its data in the data folder, why are the oldWALs still needed?

Thanks
Manjeet singh

On Wed, 11 Jul 2018, 23:19 Manjeet Singh, 
wrote:

> Thanks Sean for your reply
>
> I still have some questions unanswered:
> Q1: How does HBase synchronize with the HBase indexer?
> Q2: What optimizations can I apply?
> Q3: As my stats show, the data in oldWALs is quite huge, so it is not
> being cleared by my HMaster. How can I fix the HDFS space issue this
> causes?
>
> Thanks
> Manjeet Singh
>
> On Wed, Jul 11, 2018 at 9:33 PM, Sean Busbey  wrote:
>
>> Presuming you're using the Lily indexer[1], yes it relies on hbase's
>> built in cross-cluster replication.
>>
>> The replication system stores WALs until it can successfully send them
>> for replication. If you look in ZK you should be able to see which
>> regionserver(s) are waiting to send those WALs over. The easiest way
>> to do this is probably to look at the "zk dump" web page on the
>> Master's web ui[2].
>>
>> Once you have the particular region server(s), take a look at their
>> logs for messages about difficulty sending edits to the replication
>> peer you have set up for the destination solr collection.
>>
>> If you remove the WALs then the solr collection will have a hole in
>> it. Depending on how far behind you are, it might be quicker to 1)
>> remove the replication peer, 2) wait for old wals to clear, 3)
>> reenable replication, 4) use a batch indexing tool to index data
>> already in the table.
>>
>> [1]:
>>
>> http://ngdata.github.io/hbase-indexer/
>>
>> [2]:
>>
>> The specifics will vary depending on your installation, but the page
>> is essentially at a URL like
>> https://active-master-host.example.com:22002/zk.jsp
>>
>> the link is on the master UI landing page, near the bottom, in the
>> description of the "ZooKeeper Quorum" row. it's the end of "Addresses
>> of all registered ZK servers. For more, see zk dump."
>>
>> On Wed, Jul 11, 2018 at 10:16 AM, Manjeet Singh
>>  wrote:
>> > Hi All,
>> >
>> > I have a query regarding HBase replication and oldWALs.
>> >
>> > HBase version: 1.2.1
>> >
>> > To enable HBase indexing we run the command below on the table:
>> >
>> > alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}
>> >
>> > Doing this actually enables replication, which hbase-indexer requires;
>> > as per my understanding the indexer uses the HBase WAL (please correct
>> > me if I am wrong).
>> >
>> > So the question is: how does HBase synchronize with the Solr indexer?
>> > What is the role of replication? What optimizations can we apply in
>> > order to reduce the data size?
>> >
>> >
>> > I can see that our oldWALs keep filling up; if the HMaster itself takes
>> > care of them, why have they reached 7.2 TB? And what if I delete them,
>> > does it impact Solr indexing?
>> >
>> > 7.2 K   21.5 K  /hbase/.hbase-snapshot
>> > 0   0   /hbase/.tmp
>> > 0   0   /hbase/MasterProcWALs
>> > 18.3 G  60.2 G  /hbase/WALs
>> > 28.7 G  86.1 G  /hbase/archive
>> > 0   0   /hbase/corrupt
>> > 1.7 T   5.2 T   /hbase/data
>> > 42  126 /hbase/hbase.id
>> > 7   21  /hbase/hbase.version
>> > 7.2 T   21.6 T  /hbase/oldWALs
>> >
>> >
>> >
>> >
>> > Thanks
>> > Manjeet Singh
>>
>
>
>
> --
> luv all
>


Re: I am a subscribe please add me thanks

2018-07-11 Thread Ted Yu
Please see the following for subscription information:

http://hbase.apache.org/mail-lists.html

On Wed, Jul 11, 2018 at 4:19 AM bill.zhou  wrote:

> I am a subscriber please add me thanks
>
>


Re: Query for OldWals and use of WAl for Hbase indexer

2018-07-11 Thread Manjeet Singh
Thanks Sean for your reply

I still have some questions unanswered:
Q1: How does HBase synchronize with the HBase indexer?
Q2: What optimizations can I apply?
Q3: As my stats show, the data in oldWALs is quite huge, so it is not being
cleared by my HMaster. How can I fix the HDFS space issue this causes?

Thanks
Manjeet Singh

On Wed, Jul 11, 2018 at 9:33 PM, Sean Busbey  wrote:

> Presuming you're using the Lily indexer[1], yes it relies on hbase's
> built in cross-cluster replication.
>
> The replication system stores WALs until it can successfully send them
> for replication. If you look in ZK you should be able to see which
> regionserver(s) are waiting to send those WALs over. The easiest way
> to do this is probably to look at the "zk dump" web page on the
> Master's web ui[2].
>
> Once you have the particular region server(s), take a look at their
> logs for messages about difficulty sending edits to the replication
> peer you have set up for the destination solr collection.
>
> If you remove the WALs then the solr collection will have a hole in
> it. Depending on how far behind you are, it might be quicker to 1)
> remove the replication peer, 2) wait for old wals to clear, 3)
> reenable replication, 4) use a batch indexing tool to index data
> already in the table.
>
> [1]:
>
> http://ngdata.github.io/hbase-indexer/
>
> [2]:
>
> The specifics will vary depending on your installation, but the page
> is essentially at a URL like
> https://active-master-host.example.com:22002/zk.jsp
>
> the link is on the master UI landing page, near the bottom, in the
> description of the "ZooKeeper Quorum" row. it's the end of "Addresses
> of all registered ZK servers. For more, see zk dump."
>
> On Wed, Jul 11, 2018 at 10:16 AM, Manjeet Singh
>  wrote:
> > Hi All,
> >
> > I have a query regarding HBase replication and oldWALs.
> >
> > HBase version: 1.2.1
> >
> > To enable HBase indexing we run the command below on the table:
> >
> > alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}
> >
> > Doing this actually enables replication, which hbase-indexer requires;
> > as per my understanding the indexer uses the HBase WAL (please correct
> > me if I am wrong).
> >
> > So the question is: how does HBase synchronize with the Solr indexer?
> > What is the role of replication? What optimizations can we apply in
> > order to reduce the data size?
> >
> >
> > I can see that our oldWALs keep filling up; if the HMaster itself takes
> > care of them, why have they reached 7.2 TB? And what if I delete them,
> > does it impact Solr indexing?
> >
> > 7.2 K   21.5 K  /hbase/.hbase-snapshot
> > 0   0   /hbase/.tmp
> > 0   0   /hbase/MasterProcWALs
> > 18.3 G  60.2 G  /hbase/WALs
> > 28.7 G  86.1 G  /hbase/archive
> > 0   0   /hbase/corrupt
> > 1.7 T   5.2 T   /hbase/data
> > 42  126 /hbase/hbase.id
> > 7   21  /hbase/hbase.version
> > 7.2 T   21.6 T  /hbase/oldWALs
> >
> >
> >
> >
> > Thanks
> > Manjeet Singh
>



-- 
luv all


Re: how to get the last row inserted into a hbase table

2018-07-11 Thread Josh Elser

Unless you are including the date+time in the rowKey yourself, no.

HBase has exactly one index for fast lookups, and that is the rowKey. 
Any other query operation is (essentially) an exhaustive search.
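
A hedged sketch of that rowkey pattern, in case it helps (all names below
are hypothetical, and this assumes you control the schema): store a reverse
timestamp in the key so the newest write sorts first and can be read back
with a one-row scan.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReverseTsKeys {
  // write a row whose key is Long.MAX_VALUE minus the wall-clock time,
  // so newer writes get smaller (earlier-sorting) keys
  static void write(Table table) throws IOException {
    long reverseTs = Long.MAX_VALUE - System.currentTimeMillis();
    Put put = new Put(Bytes.toBytes(reverseTs));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    table.put(put);
  }

  // the newest write is now simply the first row any scan returns
  static long lastModified(Table table) throws IOException {
    try (ResultScanner scanner = table.getScanner(new Scan())) {
      Result first = scanner.next(); // null if the table is empty
      return Long.MAX_VALUE - Bytes.toLong(first.getRow());
    }
  }
}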


On 7/11/18 12:07 PM, Ming wrote:

Hi, all,

Is there a way to get the last row put/deleted into an HBase table?

In other words, how can I tell the last time an HBase table was changed? I
was trying to check the HDFS file stats, but HBase has a memstore, so that
is not a good way, and the HFile location is internal to HBase.

My purpose is to quickly check the last modified timestamp for a given HBase
table.

Thanks,

Ming




[question] what's the Hbase-spark module different with other two spark on Hbase

2018-07-11 Thread nurseryboy
Dear All,

I saw there is an hbase-spark module in the HBase code, and there is a JIRA
for it: https://issues.apache.org/jira/browse/HBASE-13992
In that JIRA it says the hbase-spark module code initially came from
https://github.com/cloudera-labs/SparkOnHBase
And in another discussion it is mentioned that SHC shared all of its code
with HBase (discussion link:
https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E
SHC link is: https://github.com/hortonworks-spark/shc)

I downloaded the code of hbase-spark, SHC, and the Cloudera Labs
SparkOnHBase, and found the code is not the same.

So I am a little confused: what is hbase-spark's relationship with SHC and
the Cloudera Labs SparkOnHBase?
Does it inherit from them, or just take the ideas from them?

Thanks to the community for helping me clear up this confusion. Thanks.

Regards





--
Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-f4020416.html


how to get the last row inserted into a hbase table

2018-07-11 Thread Ming
Hi, all,

Is there a way to get the last row put/deleted into an HBase table?

In other words, how can I tell the last time an HBase table was changed? I
was trying to check the HDFS file stats, but HBase has a memstore, so that
is not a good way, and the HFile location is internal to HBase.

My purpose is to quickly check the last modified timestamp for a given HBase
table.

Thanks,

Ming



Re: Query for OldWals and use of WAl for Hbase indexer

2018-07-11 Thread Sean Busbey
Presuming you're using the Lily indexer[1], yes it relies on hbase's
built in cross-cluster replication.

The replication system stores WALs until it can successfully send them
for replication. If you look in ZK you should be able to see which
regionserver(s) are waiting to send those WALs over. The easiest way
to do this is probably to look at the "zk dump" web page on the
Master's web ui[2].

Once you have the particular region server(s), take a look at their
logs for messages about difficulty sending edits to the replication
peer you have set up for the destination solr collection.

If you remove the WALs then the solr collection will have a hole in
it. Depending on how far behind you are, it might be quicker to 1)
remove the replication peer, 2) wait for old wals to clear, 3)
reenable replication, 4) use a batch indexing tool to index data
already in the table.
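
For the shell side of steps 1 and 3, a hedged sketch (the peer id '1' and
the cluster key are illustrative; check list_peers for your real values, and
note that the Lily indexer may manage its own peer registration, so consult
its docs before re-adding a peer by hand):

hbase> list_peers
hbase> remove_peer '1'
# ... wait for /hbase/oldWALs to drain, then re-add the peer:
hbase> add_peer '1', CLUSTER_KEY => "indexer-zk.example.com:2181:/hbase-indexer"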

[1]:

http://ngdata.github.io/hbase-indexer/

[2]:

The specifics will vary depending on your installation, but the page
is essentially at a URL like
https://active-master-host.example.com:22002/zk.jsp

the link is on the master UI landing page, near the bottom, in the
description of the "ZooKeeper Quorum" row. it's the end of "Addresses
of all registered ZK servers. For more, see zk dump."

On Wed, Jul 11, 2018 at 10:16 AM, Manjeet Singh
 wrote:
> Hi All,
>
> I have a query regarding HBase replication and oldWALs.
>
> HBase version: 1.2.1
>
> To enable HBase indexing we run the command below on the table:
>
> alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}
>
> Doing this actually enables replication, which hbase-indexer requires; as
> per my understanding the indexer uses the HBase WAL (please correct me if
> I am wrong).
>
> So the question is: how does HBase synchronize with the Solr indexer? What
> is the role of replication? What optimizations can we apply in order to
> reduce the data size?
>
> I can see that our oldWALs keep filling up; if the HMaster itself takes
> care of them, why have they reached 7.2 TB? And what if I delete them,
> does it impact Solr indexing?
>
> 7.2 K   21.5 K  /hbase/.hbase-snapshot
> 0   0   /hbase/.tmp
> 0   0   /hbase/MasterProcWALs
> 18.3 G  60.2 G  /hbase/WALs
> 28.7 G  86.1 G  /hbase/archive
> 0   0   /hbase/corrupt
> 1.7 T   5.2 T   /hbase/data
> 42  126 /hbase/hbase.id
> 7   21  /hbase/hbase.version
> 7.2 T   21.6 T  /hbase/oldWALs
>
>
>
>
> Thanks
> Manjeet Singh


Query for OldWals and use of WAl for Hbase indexer

2018-07-11 Thread Manjeet Singh
Hi All,

I have a query regarding HBase replication and oldWALs.

HBase version: 1.2.1

To enable HBase indexing we run the command below on the table:

alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}

Doing this actually enables replication, which hbase-indexer requires; as
per my understanding the indexer uses the HBase WAL (please correct me if I
am wrong).

So the question is: how does HBase synchronize with the Solr indexer? What
is the role of replication? What optimizations can we apply in order to
reduce the data size?

I can see that our oldWALs keep filling up; if the HMaster itself takes care
of them, why have they reached 7.2 TB? And what if I delete them, does it
impact Solr indexing?

7.2 K   21.5 K  /hbase/.hbase-snapshot
0   0   /hbase/.tmp
0   0   /hbase/MasterProcWALs
18.3 G  60.2 G  /hbase/WALs
28.7 G  86.1 G  /hbase/archive
0   0   /hbase/corrupt
1.7 T   5.2 T   /hbase/data
42  126 /hbase/hbase.id
7   21  /hbase/hbase.version
7.2 T   21.6 T  /hbase/oldWALs
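
A hedged way to check, from the HBase shell, whether a replication peer is
what is pinning these oldWALs (both commands exist in the 1.x shell; output
omitted):

hbase> list_peers
hbase> status 'replication'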




Thanks
Manjeet Singh


Re: EMR Read Replica Metadata Table Name

2018-07-11 Thread Austin Heyne
To expand on this, I'm also having the inverse issue. I had to take down
our main HBase today, and now when I try to run hbck it looks for the
hbase:meta,,1 table on a region server that is serving a read replica
metadata table, and it fails.

It seems like something is messed up in how HBase knows which metadata
table to use when, and where that metadata table is located. I imagine this
is all state that should be maintained in ZooKeeper, but I don't know
where things are going wrong.


-Austin


On 07/10/2018 07:47 PM, Austin Heyne wrote:
I currently have an EMR cluster that's running a continuous ingest. 
I'd like to spin up read-only clusters with Spark and Zeppelin to 
query with. I've gotten the replica up and running with Spark but when 
an executor tries to query HBase it's throwing 
NotServingRegionExceptions.


"""
18/07/10 23:23:36 INFO RpcRetryingCaller: Call exception, tries=10, 
retries=35, started=38532 ms ago, cancelled=false, 
msg=org.apache.hadoop.hbase.NotServingRegionException: Region 
hbase:meta,,1 is not online on 
ip-10-0-24-63.ec2.internal,16020,1531253339025
    at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3008)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1144)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2476)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2757)
    at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)

    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
    at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
    at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

"""

Which makes sense because on an EMR read replica the metadata table 
should be something like hbase:meta_j-2ZMF9CFOOBAR,,1 not 
hbase:meta,,1. Further hbase:meta_j-2ZMF9CFOOBAR,,1 is available on 
ip-10-0-24-63.ec2.internal. Does anyone know why this is happening or 
how to fix it?


Thanks,



--
Austin L. Heyne



Re: [question] what's the Hbase-spark module different with other two spark on Hbase

2018-07-11 Thread Sean Busbey
The hbase-spark module in the HBase project (which hasn't yet made it
into a release) is FWICT the eventual replacement for both the
Cloudera Labs SparkOnHBase and the Hortonworks SHC.

The code in the hbase-spark module started as an update of the
SparkOnHBase code and then quickly expanded via contributions from the
SHC folks to incorporate the features they wanted to provide. So while
it is safe to say the current code "comes from" both of those efforts,
it no longer looks like either of them.

On Wed, Jul 11, 2018 at 6:23 AM, nurseryboy  wrote:
> Dear All,
>
> I saw there is an hbase-spark module in the HBase code, and there is a
> JIRA for it: https://issues.apache.org/jira/browse/HBASE-13992
> In that JIRA it says the hbase-spark module code initially came from
> https://github.com/cloudera-labs/SparkOnHBase
> And in another discussion it is mentioned that SHC shared all of its code
> with HBase (discussion link:
> https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E
> SHC link is: https://github.com/hortonworks-spark/shc)
>
> I downloaded the code of hbase-spark, SHC, and the Cloudera Labs
> SparkOnHBase, and found the code is not the same.
>
> So I am a little confused: what is hbase-spark's relationship with SHC and
> the Cloudera Labs SparkOnHBase?
> Does it inherit from them, or just take the ideas from them?
>
> Thanks to the community for helping me clear up this confusion. Thanks.
>
> Regards
>
>
>
> --
> Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-f4020416.html



I am a subscribe please add me thanks

2018-07-11 Thread bill.zhou
I am a subscriber; please add me. Thanks.



Re: Unable to read from Kerberised HBase

2018-07-11 Thread Reid Chan
Does every machine where the hbase client runs have your specific keytab and
the corresponding principal?

From the snippet, I can tell that you're using a service principal to log in
(with the name/hostname@REALM format), and each principal should be
different due to its different hostname.
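
A minimal sketch of a client login that behaves the same on every machine
(the principal, keytab path, and realm below are hypothetical; note it uses
a plain user principal rather than a per-host service principal):

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hadoop.security.authentication", "kerberos");
    conf.set("hbase.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // the same keytab must be readable on every machine the client runs on
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "hbaseclient@EXAMPLE.COM", "/etc/security/keytabs/hbaseclient.keytab");
    // create the connection inside doAs so it picks up the logged-in user
    try (Connection connection = ugi.doAs(
        (PrivilegedExceptionAction<Connection>) () ->
            ConnectionFactory.createConnection(conf))) {
      // ... use the connection ...
    }
  }
}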



R.C




From: Lalit Jadhav 
Sent: 11 July 2018 17:45:22
To: user@hbase.apache.org
Subject: Re: Unable to read from Kerberised HBase

Yes.

On Wed, Jul 11, 2018 at 2:58 PM, Reid Chan  wrote:

> Does your hbase client run on multiple machines?
>
> R.C
>
>
> 
> From: Lalit Jadhav 
> Sent: 11 July 2018 14:31:40
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Tried with the given snippet.
>
> It works when the table is placed on a single RegionServer, but when the
> table is distributed across the cluster I am not able to scan it. Let me
> know if I am going wrong somewhere.
>
> On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan  wrote:
>
> > Try this way:
> >
> >
> > Connection connection = ugi.doAs(new PrivilegedExceptionAction<Connection>() {
> >
> > @Override
> > public Connection run() throws Exception {
> >   return ConnectionFactory.createConnection(configuration);
> > }
> >   });
> >
> >
> >
> > R.C
> >
> >
> >
> > 
> > From: Lalit Jadhav 
> > Sent: 10 July 2018 16:35:15
> > To: user@hbase.apache.org
> > Subject: Re: Unable to read from Kerberised HBase
> >
> > Code Snippet:
> >
> > Configuration configuration = HBaseConfiguration.create();
> > configuration.set("hbase.zookeeper.quorum",  "QUARAM");
> > configuration.set("hbase.master", "MASTER");
> > configuration.set("hbase.zookeeper.property.clientPort", "2181");
> > configuration.set("hadoop.security.authentication", "kerberos");
> > configuration.set("hbase.security.authentication", "kerberos");
> > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > configuration.set("hbase.cluster.distributed", "true");
> > configuration.set("hbase.rpc.protection", "authentication");
> > configuration.set("hbase.regionserver.kerberos.principal",
> > "hbase/Principal@realm");
> > configuration.set("hbase.regionserver.keytab.file",
> > "/home/developers/Desktop/hbase.service.keytab3");
> > configuration.set("hbase.master.kerberos.principal",
> > "hbase/HbasePrincipal@realm");
> > configuration.set("hbase.master.keytab.file",
> > "/etc/security/keytabs/hbase.service.keytab");
> >
> > System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
> >
> > String principal = System.getProperty("kerberosPrincipal",
> > "hbase/HbasePrincipal@realm");
> > String keytabLocation = System.getProperty("kerberosKeytab",
> > "/etc/security/keytabs/hbase.service.keytab");
> > UserGroupInformation.setConfiguration(configuration);
> > UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
> > UserGroupInformation userGroupInformation = UserGroupInformation.
> > loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
> > "/etc/security/keytabs/hbase.service.keytab");
> > UserGroupInformation.setLoginUser(userGroupInformation);
> >
> >Connection connection =
> > ConnectionFactory.createConnection(configuration);
> >
> >
> > Any more logs about login failure or success or related? - No, I only got
> > above logs.
> >
> >
> > On Tue, Jul 10, 2018 at 1:58 PM, Reid Chan 
> wrote:
> >
> > > Any more logs about login failure or success or related?
> > >
> > > And can you show the code snippet of connection creation?
> > > 
> > > From: Lalit Jadhav 
> > > Sent: 10 July 2018 16:06:32
> > > To: user@hbase.apache.org
> > > Subject: Re: Unable to read from Kerberised HBase
> > >
> > > Table only contains 100 rows. Still not able to scan.
> > >
> > > On Tue, Jul 10, 2018, 12:21 PM anil gupta 
> wrote:
> > >
> > > > As per error message, your scan ran for more than 1 minute but the
> > > timeout
> > > > is set for 1 minute. Hence the error. Try doing smaller scans or
> > > increasing
> > > > timeout.(PS: HBase is mostly good for short scan not for full table
> > > scans.)
> > > >
> > > > On Mon, Jul 9, 2018 at 8:37 PM, Lalit Jadhav <
> > lalit.jad...@nciportal.com
> > > >
> > > > wrote:
> > > >
> > > > > While connecting to remote HBase cluster, I can create Table and
> get
> > > > Table
> > > > > Listing.  But unable to scan Table using Java API. Below is code
> > > > >
> > > > > configuration.set("hbase.zookeeper.quorum", "QUARAM");
> > > > > configuration.set("hbase.master", "MASTER");
> > > > > configuration.set("hbase.zookeeper.property.clientPort",
> > "2181");
> > > > > configuration.set("hadoop.security.authentication",
> "kerberos");
> > > > > configuration.set("hbase.security.authentication",
> "kerberos");
> > > > > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > > > > 

Re: Region state is PENDING_CLOSE persists.

2018-07-11 Thread Allan Yang
There must be a handler thread running (or stuck) somewhere, so the
close-region thread can't obtain the write lock. Look closely at your
thread dump.
The handler thread you pasted above is just a thread that can't obtain the
read lock, because the close thread is waiting for the write lock.
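
If it helps, a full dump can be captured and filtered with the stock JDK
tooling (the pid is a placeholder):

jstack -l <regionserver-pid> > rs.tdump
grep -n "RS_CLOSE_REGION\|RpcServer.handler" rs.tdump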

Best Regards
Allan Yang


Kang Minwoo  于2018年7月11日周三 下午2:25写道:

> Hello.
>
> Occasionally, when closing a region, the RS_CLOSE_REGION thread is unable
> to acquire a lock and remains in the WAITING state.
> (These days, the cluster load increase.)
> So the Region state is PENDING_CLOSE persists.
> The thread holding the lock is the RPC handler.
>
> If you have any good tips on moving regions, please share them.
> It would be nice if the timeout could be set.
>
> The HBase version is 1.2.6.
>
> Best regards,
> Minwoo Kang
>
> 
>
> [thread dump]
> "RS_CLOSE_REGION" waiting on condition [abc]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for   (a
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> at
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1426)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1372)
> - locked  (a java.lang.Object)
> at
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:138)
> at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> -  (a java.util.concurrent.ThreadPoolExecutor$Worker)
>
> "RpcServer.handler" waiting on condition [bcd]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for   (a
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:8177)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:8164)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:8073)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2547)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2541)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6830)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6809)
> at
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2049)
> at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33644)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
>
>
>


Re: Unable to read from Kerberised HBase

2018-07-11 Thread Lalit Jadhav
Yes.

On Wed, Jul 11, 2018 at 2:58 PM, Reid Chan  wrote:

> Does your hbase client run on multiple machines?
>
> R.C
>
>
> 
> From: Lalit Jadhav 
> Sent: 11 July 2018 14:31:40
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Tried with the given snippet.
>
> It works when the table is placed on a single RegionServer, but when the
> table is distributed across the cluster I am not able to scan it. Let me
> know if I am going wrong somewhere.
>
> On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan  wrote:
>
> > Try this way:
> >
> >
> > Connection connection = ugi.doAs(new PrivilegedExceptionAction<Connection>() {
> >
> > @Override
> > public Connection run() throws Exception {
> >   return ConnectionFactory.createConnection(configuration);
> > }
> >   });
> >
> >
> >
> > R.C
> >
> >
> >
> > 
> > From: Lalit Jadhav 
> > Sent: 10 July 2018 16:35:15
> > To: user@hbase.apache.org
> > Subject: Re: Unable to read from Kerberised HBase
> >
> > Code Snippet:
> >
> > Configuration configuration = HBaseConfiguration.create();
> > configuration.set("hbase.zookeeper.quorum",  "QUARAM");
> > configuration.set("hbase.master", "MASTER");
> > configuration.set("hbase.zookeeper.property.clientPort", "2181");
> > configuration.set("hadoop.security.authentication", "kerberos");
> > configuration.set("hbase.security.authentication", "kerberos");
> > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > configuration.set("hbase.cluster.distributed", "true");
> > configuration.set("hbase.rpc.protection", "authentication");
> > configuration.set("hbase.regionserver.kerberos.principal",
> > "hbase/Principal@realm");
> > configuration.set("hbase.regionserver.keytab.file",
> > "/home/developers/Desktop/hbase.service.keytab3");
> > configuration.set("hbase.master.kerberos.principal",
> > "hbase/HbasePrincipal@realm");
> > configuration.set("hbase.master.keytab.file",
> > "/etc/security/keytabs/hbase.service.keytab");
> >
> > System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
> >
> > String principal = System.getProperty("kerberosPrincipal",
> > "hbase/HbasePrincipal@realm");
> > String keytabLocation = System.getProperty("kerberosKeytab",
> > "/etc/security/keytabs/hbase.service.keytab");
> > UserGroupInformation.setConfiguration(configuration);
> > UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
> > UserGroupInformation userGroupInformation = UserGroupInformation.
> > loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
> > "/etc/security/keytabs/hbase.service.keytab");
> > UserGroupInformation.setLoginUser(userGroupInformation);
> >
> >Connection connection =
> > ConnectionFactory.createConnection(configuration);
> >
> >
> > Any more logs about login failure or success or related? - No, I only got
> > above logs.
> >
> >
> > On Tue, Jul 10, 2018 at 1:58 PM, Reid Chan 
> wrote:
> >
> > > Any more logs about login failure or success or related?
> > >
> > > And can you show the code snippet of connection creation?
> > > 
> > > From: Lalit Jadhav 
> > > Sent: 10 July 2018 16:06:32
> > > To: user@hbase.apache.org
> > > Subject: Re: Unable to read from Kerberised HBase
> > >
> > > Table only contains 100 rows. Still not able to scan.
> > >
> > > On Tue, Jul 10, 2018, 12:21 PM anil gupta 
> wrote:
> > >
> > > > As per error message, your scan ran for more than 1 minute but the
> > > timeout
> > > > is set for 1 minute. Hence the error. Try doing smaller scans or
> > > increasing
> > > > timeout.(PS: HBase is mostly good for short scan not for full table
> > > scans.)
> > > >
> > > > On Mon, Jul 9, 2018 at 8:37 PM, Lalit Jadhav <
> > lalit.jad...@nciportal.com
> > > >
> > > > wrote:
> > > >
> > > > > While connecting to remote HBase cluster, I can create Table and
> get
> > > > Table
> > > > > Listing.  But unable to scan Table using Java API. Below is code
> > > > >
> > > > > configuration.set("hbase.zookeeper.quorum", "QUARAM");
> > > > > configuration.set("hbase.master", "MASTER");
> > > > > configuration.set("hbase.zookeeper.property.clientPort",
> > "2181");
> > > > > configuration.set("hadoop.security.authentication",
> "kerberos");
> > > > > configuration.set("hbase.security.authentication",
> "kerberos");
> > > > > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > > > > configuration.set("hbase.cluster.distributed", "true");
> > > > > configuration.set("hbase.rpc.protection", "authentication");
> > > > > configuration.set("hbase.regionserver.kerberos.principal",
> > > > > "hbase/Principal@realm");
> > > > > configuration.set("hbase.regionserver.keytab.file",
> > > > > "/home/developers/Desktop/hbase.service.keytab3");
> > > > > configuration.set("hbase.master.kerberos.principal",
> > > > > "hbase/HbasePrincipal@realm");

Re: Unable to read from Kerberised HBase

2018-07-11 Thread Reid Chan
Does your hbase client run on multiple machines?

R.C



From: Lalit Jadhav 
Sent: 11 July 2018 14:31:40
To: user@hbase.apache.org
Subject: Re: Unable to read from Kerberised HBase

Tried with the given snippet.

It works when the table is placed on a single RegionServer, but when the
table is distributed across the cluster I am not able to scan it. Let me
know if I am going wrong somewhere.

On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan  wrote:

> Try this way:
>
>
> Connection connection = ugi.doAs(new PrivilegedExceptionAction<Connection>() {
>
> @Override
> public Connection run() throws Exception {
>   return ConnectionFactory.createConnection(configuration);
> }
>   });
>
>
>
> R.C
>
>
>
> 
> From: Lalit Jadhav 
> Sent: 10 July 2018 16:35:15
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Code Snippet:
>
> Configuration configuration = HBaseConfiguration.create();
> configuration.set("hbase.zookeeper.quorum",  "QUARAM");
> configuration.set("hbase.master", "MASTER");
> configuration.set("hbase.zookeeper.property.clientPort", "2181");
> configuration.set("hadoop.security.authentication", "kerberos");
> configuration.set("hbase.security.authentication", "kerberos");
> configuration.set("zookeeper.znode.parent", "/hbase-secure");
> configuration.set("hbase.cluster.distributed", "true");
> configuration.set("hbase.rpc.protection", "authentication");
> configuration.set("hbase.regionserver.kerberos.principal",
> "hbase/Principal@realm");
> configuration.set("hbase.regionserver.keytab.file",
> "/home/developers/Desktop/hbase.service.keytab3");
> configuration.set("hbase.master.kerberos.principal",
> "hbase/HbasePrincipal@realm");
> configuration.set("hbase.master.keytab.file",
> "/etc/security/keytabs/hbase.service.keytab");
>
> System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
>
> String principal = System.getProperty("kerberosPrincipal",
> "hbase/HbasePrincipal@realm");
> String keytabLocation = System.getProperty("kerberosKeytab",
> "/etc/security/keytabs/hbase.service.keytab");
> UserGroupInformation.setConfiguration(configuration);
> UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
> UserGroupInformation userGroupInformation = UserGroupInformation.
> loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
> "/etc/security/keytabs/hbase.service.keytab");
> UserGroupInformation.setLoginUser(userGroupInformation);
>
>Connection connection =
> ConnectionFactory.createConnection(configuration);
>
>
> Any more logs about login failure or success or related? - No, I only got
> above logs.
>
>
> On Tue, Jul 10, 2018 at 1:58 PM, Reid Chan  wrote:
>
> > Any more logs about login failure or success or related?
> >
> > And can you show the code snippet of connection creation?
> > 
> > From: Lalit Jadhav 
> > Sent: 10 July 2018 16:06:32
> > To: user@hbase.apache.org
> > Subject: Re: Unable to read from Kerberised HBase
> >
> > Table only contains 100 rows. Still not able to scan.
> >
> > On Tue, Jul 10, 2018, 12:21 PM anil gupta  wrote:
> >
> > > As per error message, your scan ran for more than 1 minute but the
> > timeout
> > > is set for 1 minute. Hence the error. Try doing smaller scans or
> > increasing
> > > timeout.(PS: HBase is mostly good for short scan not for full table
> > scans.)
> > >
> > > On Mon, Jul 9, 2018 at 8:37 PM, Lalit Jadhav <
> lalit.jad...@nciportal.com
> > >
> > > wrote:
> > >
> > > > While connecting to remote HBase cluster, I can create Table and get
> > > Table
> > > > Listing.  But unable to scan Table using Java API. Below is code
> > > >
> > > > configuration.set("hbase.zookeeper.quorum", "QUARAM");
> > > > configuration.set("hbase.master", "MASTER");
> > > > configuration.set("hbase.zookeeper.property.clientPort",
> "2181");
> > > > configuration.set("hadoop.security.authentication", "kerberos");
> > > > configuration.set("hbase.security.authentication", "kerberos");
> > > > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > > > configuration.set("hbase.cluster.distributed", "true");
> > > > configuration.set("hbase.rpc.protection", "authentication");
> > > > configuration.set("hbase.regionserver.kerberos.principal",
> > > > "hbase/Principal@realm");
> > > > configuration.set("hbase.regionserver.keytab.file",
> > > > "/home/developers/Desktop/hbase.service.keytab3");
> > > > configuration.set("hbase.master.kerberos.principal",
> > > > "hbase/HbasePrincipal@realm");
> > > > configuration.set("hbase.master.keytab.file",
> > > > "/etc/security/keytabs/hbase.service.keytab");
> > > >
> > > > System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
> > > >
> > > > String principal = System.getProperty("kerberosPrincipal",
> > > > "hbase/HbasePrincipal@realm");
> > 

Re: Unable to read from Kerberised HBase

2018-07-11 Thread Lalit Jadhav
Tried with the given snippet.

It works when the table is placed on a single RegionServer, but when the
table is distributed across the cluster I am not able to scan it. Let me
know if I am going wrong somewhere.
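
On the timeout point anil makes below, a hedged sketch of raising the
client-side scan limits on the Configuration you already build (property
names from HBase 1.x; the values are illustrative):

configuration.set("hbase.client.scanner.timeout.period", "120000"); // default 60000 ms
configuration.set("hbase.rpc.timeout", "120000"); // commonly raised together with it
configuration.setInt("hbase.client.scanner.caching", 100); // fewer rows per scan RPC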

On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan  wrote:

> Try this way:
>
>
> Connection connection = ugi.doAs(new PrivilegedExceptionAction<Connection>() {
>
> @Override
> public Connection run() throws Exception {
>   return ConnectionFactory.createConnection(configuration);
> }
>   });
>
>
>
> R.C
>
>
>
> 
> From: Lalit Jadhav 
> Sent: 10 July 2018 16:35:15
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Code Snippet:
>
> Configuration configuration = HBaseConfiguration.create();
> configuration.set("hbase.zookeeper.quorum",  "QUARAM");
> configuration.set("hbase.master", "MASTER");
> configuration.set("hbase.zookeeper.property.clientPort", "2181");
> configuration.set("hadoop.security.authentication", "kerberos");
> configuration.set("hbase.security.authentication", "kerberos");
> configuration.set("zookeeper.znode.parent", "/hbase-secure");
> configuration.set("hbase.cluster.distributed", "true");
> configuration.set("hbase.rpc.protection", "authentication");
> configuration.set("hbase.regionserver.kerberos.principal",
> "hbase/Principal@realm");
> configuration.set("hbase.regionserver.keytab.file",
> "/home/developers/Desktop/hbase.service.keytab3");
> configuration.set("hbase.master.kerberos.principal",
> "hbase/HbasePrincipal@realm");
> configuration.set("hbase.master.keytab.file",
> "/etc/security/keytabs/hbase.service.keytab");
>
> System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
>
> String principal = System.getProperty("kerberosPrincipal",
> "hbase/HbasePrincipal@realm");
> String keytabLocation = System.getProperty("kerberosKeytab",
> "/etc/security/keytabs/hbase.service.keytab");
> UserGroupInformation.setConfiguration(configuration);
> UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
> UserGroupInformation userGroupInformation = UserGroupInformation.
> loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
> "/etc/security/keytabs/hbase.service.keytab");
> UserGroupInformation.setLoginUser(userGroupInformation);
>
>Connection connection =
> ConnectionFactory.createConnection(configuration);
>
>
> Any more logs about login failure or success or related? - No, I only got
> above logs.
>
>
> On Tue, Jul 10, 2018 at 1:58 PM, Reid Chan  wrote:
>
> > Any more logs about login failure or success or related?
> >
> > And can you show the code snippet of connection creation?
> > 
> > From: Lalit Jadhav 
> > Sent: 10 July 2018 16:06:32
> > To: user@hbase.apache.org
> > Subject: Re: Unable to read from Kerberised HBase
> >
> > Table only contains 100 rows. Still not able to scan.
> >
> > On Tue, Jul 10, 2018, 12:21 PM anil gupta  wrote:
> >
> > > As per error message, your scan ran for more than 1 minute but the
> > timeout
> > > is set for 1 minute. Hence the error. Try doing smaller scans or
> > increasing
> > > timeout.(PS: HBase is mostly good for short scan not for full table
> > scans.)
> > >
> > > On Mon, Jul 9, 2018 at 8:37 PM, Lalit Jadhav <
> lalit.jad...@nciportal.com
> > >
> > > wrote:
> > >
> > > > While connecting to remote HBase cluster, I can create Table and get
> > > Table
> > > > Listing.  But unable to scan Table using Java API. Below is code
> > > >
> > > > configuration.set("hbase.zookeeper.quorum", "QUARAM");
> > > > configuration.set("hbase.master", "MASTER");
> > > > configuration.set("hbase.zookeeper.property.clientPort",
> "2181");
> > > > configuration.set("hadoop.security.authentication", "kerberos");
> > > > configuration.set("hbase.security.authentication", "kerberos");
> > > > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > > > configuration.set("hbase.cluster.distributed", "true");
> > > > configuration.set("hbase.rpc.protection", "authentication");
> > > > configuration.set("hbase.regionserver.kerberos.principal",
> > > > "hbase/Principal@realm");
> > > > configuration.set("hbase.regionserver.keytab.file",
> > > > "/home/developers/Desktop/hbase.service.keytab3");
> > > > configuration.set("hbase.master.kerberos.principal",
> > > > "hbase/HbasePrincipal@realm");
> > > > configuration.set("hbase.master.keytab.file",
> > > > "/etc/security/keytabs/hbase.service.keytab");
> > > >
> > > > System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
> > > >
> > > > String principal = System.getProperty("kerberosPrincipal",
> > > > "hbase/HbasePrincipal@realm");
> > > > String keytabLocation = System.getProperty("kerberosKeytab",
> > > > "/etc/security/keytabs/hbase.service.keytab");
> > > > UserGroupInformation.setConfiguration(configuration);
> > > > 

Region state is PENDING_CLOSE persists.

2018-07-11 Thread Kang Minwoo
Hello.

Occasionally, when closing a region, the RS_CLOSE_REGION thread is unable to
acquire a lock and remains in the WAITING state.
(These days, the cluster load increase.)
So the Region state is PENDING_CLOSE persists.
The thread holding the lock is the RPC handler.

If you have any good tips on moving regions, please share them.
It would be nice if the timeout could be set.

The HBase version is 1.2.6.

Best regards,
Minwoo Kang



[thread dump]
"RS_CLOSE_REGION" waiting on condition [abc]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for   (a 
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1426)
at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1372)
- locked  (a java.lang.Object)
at 
org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:138)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
   Locked ownable synchronizers:
-  (a java.util.concurrent.ThreadPoolExecutor$Worker)

"RpcServer.handler" waiting on condition [bcd]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for   (a 
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:8177)
at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:8164)
at 
org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:8073)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2547)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2541)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6830)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6809)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2049)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33644)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:748)
   Locked ownable synchronizers:
- None