hbase: secure login and connection management

2014-11-19 Thread Bogala, Chandra Reddy
Hi,
  I am trying to log in to a secure cluster with keytabs using the methods below. It 
works fine as long as the ticket has not expired. My process runs for a long time (a 
web app on Tomcat). After the ticket expiry time I keep getting the exceptions below, 
and the connection fails when a user tries to view data from the web page.
What is a better way of handling connections? How can the credentials be refreshed 
automatically? Is there a Spring implementation for managing connections? If yes, can 
you share sample code?


UserGroupInformation.setConfiguration(conf);
// principal name and keytab path are read from configuration keys
UserGroupInformation.loginUserFromKeytab(conf.get("hbase.myclient.principal"),
        conf.get("hbase.myclient.keytab"));

2014-11-13 08:25:49,899 ERROR [org.apache.hadoop.security.UserGroupInformation] 
PriviledgedActionException as u...@mycompany.com (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2014-11-13 08:25:49,900 WARN [org.apache.hadoop.ipc.RpcClient] Exception 
encountered while connecting to the server : javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Failed to find any Kerberos tgt)]
javax.security.sasl.SaslException: GSS initiate failed
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism 
level: Failed to find any Kerberos tgt)
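
One common way to avoid this in a long-running client is to refresh the ticket from the 
keytab on a schedule. The sketch below assumes the loginUserFromKeytab call above has 
already succeeded; the one-hour interval is an arbitrary choice and should stay well 
below the KDC ticket lifetime:

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabRelogin {
    // Periodically re-acquires the Kerberos TGT from the keytab so a
    // long-running web app does not start failing once the ticket lapses.
    public static void scheduleRelogin() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    // Relogs in only if the TGT is missing or close to expiry.
                    UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
                } catch (IOException e) {
                    // Log and try again on the next tick.
                    e.printStackTrace();
                }
            }
        }, 1, 1, TimeUnit.HOURS);
    }
}

Alternatively, calling UserGroupInformation.getLoginUser().reloginFromKeytab() just 
before long-idle HBase calls achieves a similar effect without a background thread.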

Thanks,
Chandra





phoenix setup issue

2014-05-15 Thread Bogala, Chandra Reddy
Hi,
   I am trying to set up Phoenix and test queries on HBase, but I am getting the error 
below. Any clue what the issue might be? I have added the Phoenix core jar to the 
classpath of the HBase region servers by using the dynamic jar loading setting in 
hbase-site.xml.  I also added the Phoenix client jar on the client side.
I get the same error with sqlline as well.

./performance.py testhost.gs.com 100
Phoenix Performance Evaluation Script 1.0
-

Creating performance table...
java.lang.IllegalArgumentException: Not a host:port pair: PBUF

testhost.gs.com??(
at 
org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
at org.apache.hadoop.hbase.ServerName.<init>(ServerName.java:101)
at 
org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
at 
org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)

Thanks,
Chandra


RE: How to get complete row?

2014-05-15 Thread Bogala, Chandra Reddy
My queries are always on a single column family, but multiple columns from the same 
family are returned. In that case I don't think the issue below impacts our results. 
Let me know if I am wrong. 

scan 'test4', {STARTROW => '11645|1395288900', ENDROW => '11645|1398699000', 
COLUMNS => ['cf1:foo','cf1:bar'], FILTER => 
"SingleColumnValueFilter('cf1', 'foo', =, 'binary:\xxx\')"} 

scan.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("foo"));
scan.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("bar"));
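
For reference, a minimal Java sketch of a scan that filters on cf1:foo but still 
returns complete rows; the row-key range is taken from the shell command above, and 
the compared value is a placeholder:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class CompleteRowScan {
    public static Scan buildScan() {
        // Filter on the value of cf1:foo; "some-value" is a placeholder.
        SingleColumnValueFilter filter = new SingleColumnValueFilter(
                Bytes.toBytes("cf1"), Bytes.toBytes("foo"),
                CompareFilter.CompareOp.EQUAL, Bytes.toBytes("some-value"));
        // Drop rows that do not contain cf1:foo at all.
        filter.setFilterIfMissing(true);

        // Key range as in the shell scan; no addColumn calls, so every
        // column of each matching row is returned, not just cf1:foo.
        Scan scan = new Scan(Bytes.toBytes("11645|1395288900"),
                Bytes.toBytes("11645|1398699000"));
        scan.setFilter(filter);
        return scan;
    }
}

As Ted notes below, on 0.96+ the HBASE-10850 fix (included in 0.98.2) is needed for 
SingleColumnValueFilter to behave correctly.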

Thanks,
Chandra

-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com] 
Sent: Tuesday, May 06, 2014 5:50 PM
To: user@hbase.apache.org
Cc: user@hbase.apache.org
Subject: Re: How to get complete row?

For 0.96+, HBASE-10850 is needed for SingleColumnValueFilter to function 
correctly. 
This fix is in 0.98.2 whose RC is under vote. 

Cheers

On May 6, 2014, at 2:34 AM, Bogala, Chandra Reddy chandra.bog...@gs.com 
wrote:

 I was able to solve it by using SingleColumnValueFilter.
 Tx
 
 From: Bogala, Chandra Reddy [Tech]
 Sent: Tuesday, May 06, 2014 12:35 PM
 To: 'user@hbase.apache.org'
 Subject: How to get complete row?
 
 Hi,
   I have a requirement similar to the one in the thread posted below. I need to get the 
 complete row after applying a value filter on a single column's value. Let me know if 
 anyone knows the solution.
 http://stackoverflow.com/questions/21636787/hbase-how-to-get-complete-rows-when-scanning-with-filters-by-qualifier-value
 
 Thanks,
 Chandra
 


RE: phoenix setup issue

2014-05-12 Thread Bogala, Chandra Reddy
Thanks, Kamil. I was trying Phoenix 3.0 against HBase 0.96; I don't think that 
combination works, because the endpoint coprocessor implementation/APIs changed a lot 
from 0.94 to 0.96. 
I tried Phoenix 4.0 on HBase 0.98.0, but it threw the exception below. I will try to run 
on HBase 0.98.1+ and update the status. Thanks again for the help.

-Original Message-
From: alex kamil [mailto:alex.ka...@gmail.com] 
Sent: Monday, May 12, 2014 6:36 AM
To: user@hbase.apache.org
Subject: Re: phoenix setup issue

Chandra,

try copying the phoenix-core jars into the hbase/lib folder instead of loading them 
from hbase-site.xml, and note the Phoenix versions supported against HBase versions 
here:
http://phoenix.incubator.apache.org/building.html
i.e. Phoenix 3.0 runs against HBase 0.94+, Phoenix 4.0 runs against HBase 0.98.1+, and 
the Phoenix master branch runs against HBase trunk.

Alex


On Sun, May 11, 2014 at 8:54 PM, alex kamil alex.ka...@gmail.com wrote:

 looks similar to this:
 http://stackoverflow.com/questions/11649824/hbase-error-not-a-hostport-pair

 possibly jar version mismatch between hbase client and server



 On Fri, May 9, 2014 at 2:49 AM, Bogala, Chandra Reddy  
 chandra.bog...@gs.com wrote:

 Hi,
    I am trying to set up Phoenix and test queries on HBase, but I am 
 getting the error below. Any clue what the issue might be? I have added the 
 Phoenix core jar to the classpath of the HBase region servers by using the 
 dynamic jar loading setting in hbase-site.xml.  I also added the Phoenix 
 client jar on the client side.
 I get the same error with sqlline as well.

 ./performance.py testhost.gs.com 100
 Phoenix Performance Evaluation Script 1.0
 -

 Creating performance table...
 java.lang.IllegalArgumentException: Not a host:port pair: PBUF 
 testhost.gs.com??(
 at
 org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
 at org.apache.hadoop.hbase.ServerName.<init>(ServerName.java:101)
 at
 org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
 at
 org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)

 Thanks,
 Chandra





How to get complete row?

2014-05-06 Thread Bogala, Chandra Reddy
Hi,
   I have a requirement similar to the one in the thread posted below. I need to get the 
complete row after applying a value filter on a single column's value. Let me know if 
anyone knows the solution.
http://stackoverflow.com/questions/21636787/hbase-how-to-get-complete-rows-when-scanning-with-filters-by-qualifier-value

Thanks,
Chandra



RE: How to get complete row?

2014-05-06 Thread Bogala, Chandra Reddy
I was able to solve it by using SingleColumnValueFilter.
Tx

From: Bogala, Chandra Reddy [Tech]
Sent: Tuesday, May 06, 2014 12:35 PM
To: 'user@hbase.apache.org'
Subject: How to get complete row?

Hi,
   I have a requirement similar to the one in the thread posted below. I need to get the 
complete row after applying a value filter on a single column's value. Let me know if 
anyone knows the solution.
http://stackoverflow.com/questions/21636787/hbase-how-to-get-complete-rows-when-scanning-with-filters-by-qualifier-value

Thanks,
Chandra



RE: endpoint coprocessor

2014-04-15 Thread Bogala, Chandra Reddy


Thanks, Yu. I have added the coprocessor below to my table and tried to invoke it 
using a Java client, but it fails with the error below, even though I can see the 
coprocessor in the describe output for the table.



Caused by: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.UnknownProtocolException):
 org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered 
coprocessor service found for name AggregateService in region 
test3,,1397469869214.c73698dce0d5b91d29d42a9f9e194965.

at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5070)



From describe table:

--

'test3', {TABLE_ATTRIBUTES => {coprocessor$1 => 
'hdfs://xxx.com:8020/user///hbase-server-0.98.1-hadoop2.jar|org.apache.hadoop.hbase.coprocessor.AggregateImplementation||'},
 {NAME => 'cf'
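
For comparison, here is a minimal sketch of registering the built-in 
AggregateImplementation by class name only; since it ships in hbase-server, which is 
already on the region server classpath, no jar path is needed. This is just one way to 
attach the endpoint and not necessarily the fix for the error above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class AttachAggregateEndpoint {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        TableName table = TableName.valueOf("test3");

        // Disable the table before modifying its descriptor.
        admin.disableTable(table);
        HTableDescriptor desc = admin.getTableDescriptor(table);
        // Register the endpoint by class name; no jar path, priority, or args.
        desc.addCoprocessor("org.apache.hadoop.hbase.coprocessor.AggregateImplementation");
        admin.modifyTable(table, desc);
        admin.enableTable(table);
        admin.close();
    }
}

Registering by class name avoids shipping a jar to HDFS, since the class is already on 
the server classpath.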



Thanks,

Chandra





-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Thursday, April 10, 2014 5:36 PM
To: user@hbase.apache.org
Cc: user@hbase.apache.org
Subject: Re: endpoint coprocessor



Here is a reference implementation for aggregation:

http://search-hadoop.com/c/HBase:hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/AggregateImplementation.java||Hbase+aggregation+endpoint



You can find it in hbase source code.

Cheers



On Apr 10, 2014, at 4:29 AM, Bogala, Chandra Reddy 
chandra.bog...@gs.com wrote:



 Hi,

 I am planning to write an endpoint coprocessor to calculate top N results for my 
 use case.  I got confused between the old APIs and the new APIs.

 I followed the links below and tried to implement it, but it looks like the APIs changed a 
 lot. I don't see many of these classes in the hbase jars. We are using HBase 0.96.

 Can anyone point to the latest documentation/APIs? And, if possible, sample code to 
 calculate top N.



 https://blogs.apache.org/hbase/entry/coprocessor_introduction

 https://www.youtube.com/watch?v=xHvJhuGGOKc



 Thanks,

 Chandra






RE: endpoint coprocessor

2014-04-11 Thread Bogala, Chandra Reddy
Thank you. I am aware of this challenge. How do I call the coprocessor below from a 
client? Can I call this coprocessor from the hbase shell?  I am new to HBase, so I may 
be asking very dumb questions.

Thanks,
Chandra

-Original Message-
From: Asaf Mesika [mailto:asaf.mes...@gmail.com] 
Sent: Friday, April 11, 2014 12:10 PM
To: user@hbase.apache.org
Subject: Re: endpoint coprocessor

Bear in mind that each region will return its own top N; you will then have to run 
another top N over those results in your client code. This introduces a numerical 
error: top on top.
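
As an illustration of that client-side step, here is a minimal sketch of merging the 
per-region top-N lists with a bounded min-heap; the Entry type is hypothetical, since 
the real response type depends on how the endpoint is written:

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class TopNMerge {
    // Hypothetical (key, count) pair; the real endpoint response type will differ.
    public static class Entry {
        final String key;
        final long count;
        public Entry(String key, long count) { this.key = key; this.count = count; }
    }

    private static final Comparator<Entry> BY_COUNT = new Comparator<Entry>() {
        @Override
        public int compare(Entry a, Entry b) {
            return Long.compare(a.count, b.count);
        }
    };

    // Merges the per-region top-N lists into a single client-side top-N,
    // keeping at most n entries in a min-heap ordered by count.
    public static List<Entry> mergeTopN(List<List<Entry>> perRegionResults, int n) {
        PriorityQueue<Entry> heap = new PriorityQueue<Entry>(n, BY_COUNT);
        for (List<Entry> regionTopN : perRegionResults) {
            for (Entry e : regionTopN) {
                heap.offer(e);
                if (heap.size() > n) {
                    heap.poll(); // evict the current smallest to keep only n entries
                }
            }
        }
        List<Entry> result = new ArrayList<Entry>(heap);
        Collections.sort(result, Collections.reverseOrder(BY_COUNT));
        return result;
    }
}

Note that this only re-ranks whole per-region entries; a key whose count is split 
across regions can still be under-counted, which is exactly the top-on-top error 
mentioned above.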

On Thursday, April 10, 2014, Bogala, Chandra Reddy chandra.bog...@gs.com
wrote:

 Hi,
 I am planning to write an endpoint coprocessor to calculate top N results 
 for my use case.  I got confused between the old APIs and the new APIs.
 I followed the links below and tried to implement it, but it looks like the 
 APIs changed a lot. I don't see many of these classes in the hbase jars. We 
 are using HBase 0.96.
 Can anyone point to the latest documentation/APIs? And, if possible, sample 
 code to calculate top N.

 https://blogs.apache.org/hbase/entry/coprocessor_introduction
 HBase Coprocessors - Deploy shared functionality directly on the 
 cluster: https://www.youtube.com/watch?v=xHvJhuGGOKc

 Thanks,
 Chandra





RE: endpoint coprocessor

2014-04-11 Thread Bogala, Chandra Reddy
Thanks, Yu. My understanding is that this coprocessor ships as part of the HBase 
server components, so I should be able to attach it to any of my tables by using the 
alter table command.



alter 'demo-table', 'COPROCESSOR' => '.jar|class|priority|args'



Then, from the hbase shell, I should be able to call this coprocessor with a command, 
just like we do a scan with a filter. Is there a command, like the filter command 
below, for calling a coprocessor, so that it runs in the region servers and returns 
the results?



scan 'demo-table', {FILTER => 
org.apache.hadoop.hbase.filter.RowFilter.new(CompareFilter::CompareOp.valueOf('EQUAL'), SubstringComparator.new('10001|1395309600'))}



I am trying to figure out a simple client-side mechanism for calling the coprocessor. 
If those classes and the calling mechanism are not available from the hbase shell by 
default, then I plan to use Java client code to invoke the coprocessor.

Any pointers to a Java client that invokes the aggregation coprocessor 
(http://search-hadoop.com/c/HBase:hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/AggregateImplementation.java||Hbase+aggregation+endpoint)
 would be helpful.
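
For what it's worth, a minimal sketch of such a Java client using the stock 
AggregationClient is below; the table name, the column, and the assumption that each 
cell stores an 8-byte long are placeholders, not details from the original setup:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class AggregationEndpointClient {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        AggregationClient aggClient = new AggregationClient(conf);

        // The scan names the single column family (and qualifier) the
        // interpreter should read; placeholder names are used here.
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("foo"));

        // Both calls run inside the AggregateImplementation endpoint on
        // each region server and are combined on the client.
        long rowCount = aggClient.rowCount(
                TableName.valueOf("test3"), new LongColumnInterpreter(), scan);
        Long sum = aggClient.sum(
                TableName.valueOf("test3"), new LongColumnInterpreter(), scan);

        System.out.println("row count = " + rowCount + ", sum = " + sum);
    }
}

This only works once the AggregateImplementation endpoint is registered on the table, 
and LongColumnInterpreter expects each cell value to be a long written with 
Bytes.toBytes(long).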



Thanks,

Chandra



-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Friday, April 11, 2014 10:42 PM
To: user@hbase.apache.org
Subject: Re: endpoint coprocessor



Please take a look at :

hbase-shell/src/main/ruby/hbase/security.rb



for an example of how a coprocessor is activated from the shell.



Cheers





On Fri, Apr 11, 2014 at 11:06 AM, Bogala, Chandra Reddy  
chandra.bog...@gs.com wrote:



 Thank you. I am aware of this challenge. How to call below coprocessor

 from client. Can I call this coprocessor from hbase shell?.  I am new

 to Hbase. So may be asking very dumb questions.



 Thanks,

 Chandra



 -Original Message-

 From: Asaf Mesika [mailto:asaf.mes...@gmail.com]

 Sent: Friday, April 11, 2014 12:10 PM

 To: user@hbase.apache.org

 Subject: Re: endpoint coprocessor



 Bear in mind each region will return its top n, then you will have to

 run another top n in your client code. This introduce a numerical

 error : top on top.



 On Thursday, April 10, 2014, Bogala, Chandra Reddy

 chandra.bog...@gs.com

 wrote:



  Hi,

  I am planning to write endpoint coprocessor to calculate TOP N

  results for my usecase.  I got confused with old apis and new apis.

  I followed below links and try to implement. But looks like api's

  changed a lot. I don't see many of these classes in hbase jars. We

  are using Hbase 0.96.

  Can anyone point to the latest document/apis?. And if possible

  sample code to calculate top N.

 

  https://blogs.apache.org/hbase/entry/coprocessor_introduction

  HBase Coprocessors - Deploy shared functionality directly on the

  cluster: https://www.youtube.com/watch?v=xHvJhuGGOKc

 

  Thanks,

  Chandra

 

 

 




endpoint coprocessor

2014-04-10 Thread Bogala, Chandra Reddy
Hi,
I am planning to write an endpoint coprocessor to calculate top N results for my 
use case.  I got confused between the old APIs and the new APIs.
I followed the links below and tried to implement it, but it looks like the APIs changed a 
lot. I don't see many of these classes in the hbase jars. We are using HBase 0.96.
Can anyone point to the latest documentation/APIs? And, if possible, sample code to 
calculate top N.

https://blogs.apache.org/hbase/entry/coprocessor_introduction
https://www.youtube.com/watch?v=xHvJhuGGOKc

Thanks,
Chandra