Hello all,

I have a problem accessing HBase through the Spark Phoenix plugin in a
secured cluster.

Versions:

Spark 1.6.1,
HBase 1.1.2.2.4,
Phoenix 4.4.0


Using sqlline.py works just fine, and I have a valid Kerberos ticket.

I'm trying to get this to work in local mode first. What I'm doing is the
basic test described at https://phoenix.apache.org/phoenix_spark.html,
the "Load as a DataFrame using the Data Source API" example.

This is an Ambari-managed Hortonworks cluster. The HBase client is installed
on the node I run it on. Still, I'm adding hbase-site.xml to the classpath.
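For reference, this is roughly how I'm launching it and putting
hbase-site.xml on the classpath — a sketch only, the exact jar and conf
paths are assumptions based on the usual HDP layout:

```shell
# Put the HBase client conf dir (containing hbase-site.xml) on both the
# driver and executor classpaths; paths below assume a standard HDP install.
spark-shell \
  --jars /usr/hdp/current/phoenix-client/phoenix-client.jar \
  --conf spark.driver.extraClassPath=/etc/hbase/conf \
  --conf spark.executor.extraClassPath=/etc/hbase/conf
```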

So the code that has trouble is this:

val df = sqlContext.load(
  "org.apache.phoenix.spark",
  Map("table" -> "TABLE1", "zkUrl" -> "zookeeper:2181:/hbase-secure")
)


I also tried this:

val df = sqlContext.load(
  "org.apache.phoenix.spark",
  Map("table" -> "TABLE1", "zkUrl" ->
    "zookeeper:2181:/hbase-secure:hbase@DOMAIN:/etc/security/keytabs/hbase.headless.keytab")
)


Once executed, it tries to connect to Phoenix; these are the lines worth
mentioning:


16/12/08 22:03:52 INFO ZooKeeper: Initiating client connection,
connectString=zookeeper:2181 sessionTimeout=90000
watcher=hconnection-0x55a7c430x0, quorum=zookeeper:2181,
baseZNode=/hbase-secure
16/12/08 22:03:52 INFO ClientCnxn: Opening socket connection to server
zookeeper/ip.ip.ip.ip:2181. Will not attempt to authenticate using SASL
(unknown error)
(...)
16/12/08 22:03:52 WARN RpcControllerFactory: Cannot load configured
"hbase.rpc.controllerfactory.class"
(org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory)
from hbase-site.xml, falling back to use default RpcControllerFactory
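My understanding is that the "Will not attempt to authenticate using SASL"
line means the ZooKeeper client can't find a JAAS "Client" login section,
so it never even tries Kerberos against ZooKeeper. A setup along these
lines would normally be needed — this is only a sketch, and the keytab
path and principal below are assumptions:

```shell
# Sketch: give the ZooKeeper client a JAAS config so it attempts SASL.
# Keytab path and principal are assumptions for a headless-keytab setup.
cat > /tmp/zk-jaas.conf <<'EOF'
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/hbase.headless.keytab"
  storeKey=true
  useTicketCache=false
  principal="hbase@DOMAIN";
};
EOF

# Point both the driver and the executors at it.
spark-shell \
  --conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=/tmp/zk-jaas.conf" \
  --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=/tmp/zk-jaas.conf"
```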


Then it repeats that once more, and after that it throws these lines every
minute or so:

16/12/08 22:04:41 INFO RpcRetryingCaller: Call exception, tries=10,
retries=35, started=48381 ms ago, cancelled=false, msg=
16/12/08 22:05:01 INFO RpcRetryingCaller: Call exception, tries=11,
retries=35, started=68424 ms ago, cancelled=false, msg=
16/12/08 22:05:21 INFO RpcRetryingCaller: Call exception, tries=12,
retries=35, started=88520 ms ago, cancelled=false, msg=
16/12/08 22:05:41 INFO RpcRetryingCaller: Call exception, tries=13,
retries=35, started=108677 ms ago, cancelled=false, msg=


And ... that's pretty much it.

I tried replacing phoenix-client.jar with the latest one,
phoenix-4.9.0-HBase-1.1-client.jar, but the only thing that changed was
that instead of repeating those last lines it throws this error:


java.sql.SQLException:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
attempts=36, exceptions:
Thu Dec 08 21:28:16 CET 2016, null, java.net.SocketTimeoutException:
callTimeout=60000, callDuration=68388: row 'SYSTEM:CATALOG,,' on table
'hbase:meta' at region=hbase:meta,,1.1588230740,
hostname=hostname,16020,1479200343216, seqNum=0

at
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2432)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2352)
at
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
(...)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed
after attempts=36, exceptions:
Thu Dec 08 21:28:16 CET 2016, null, java.net.SocketTimeoutException:
callTimeout=60000, callDuration=68388: row 'SYSTEM:CATALOG,,' on table
'hbase:meta' at region=hbase:meta,,1.1588230740,
hostname=hostname,16020,1479200343216, seqNum=0


Any help would be much appreciated.

The goal is to make it work in YARN cluster mode...
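For that eventual YARN run, I expect the submission to look something like
the sketch below — the application class and jar names are placeholders,
and the keytab/principal are the same assumptions as above:

```shell
# Sketch of the eventual YARN cluster-mode submission. --principal/--keytab
# let YARN manage Kerberos logins; class name and app.jar are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal hbase@DOMAIN \
  --keytab /etc/security/keytabs/hbase.headless.keytab \
  --jars /usr/hdp/current/phoenix-client/phoenix-client.jar \
  --files /etc/hbase/conf/hbase-site.xml \
  --class com.example.PhoenixLoad \
  app.jar
```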

Thanks,
Marcin
