Hi Siva,

Generally this error is thrown because your Spark driver classpath does not 
pick up the relevant hbase-protocol.jar.

Under this condition, you may try passing the jar to the driver classpath:
   spark-submit ... --driver-class-path /path-to-your-hbase-protocol-jar
or, alternatively, exporting it before submitting:
   export SPARK_CLASSPATH=$SPARK_CLASSPATH:/path-to-your-hbase-protocol-jar

I believe that should solve the problem. 
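For example, a full invocation might look like the following sketch. The jar paths and application name are taken from your HDP 2.2.4 mail below; adjust them to your installation:

```shell
# Put hbase-protocol.jar on the driver classpath explicitly, so that
# com.google.protobuf.LiteralByteString and HBaseZeroCopyByteString
# are loaded by the same classloader (the cause of the IllegalAccessError).
spark-submit \
  --class PhoenixConn \
  --deploy-mode client \
  --master local \
  --driver-class-path /usr/hdp/2.2.4.2-2/hbase/lib/hbase-protocol-0.98.4.2.2.4.2-2-hadoop2.jar \
  --jars /usr/hdp/2.2.4.2-2/phoenix/phoenix-4.2.0.2.2.4.2-2-client.jar \
  phoenixconn_2.10-0.0.1.jar

# Or, as an alternative, set the classpath via the environment variable
# before submitting:
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/hdp/2.2.4.2-2/hbase/lib/hbase-protocol-0.98.4.2.2.4.2-2-hadoop2.jar
```

Note that passing a jar via --jars alone ships it to executors but does not guarantee it is first on the driver's classpath, which is why --driver-class-path is needed here.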

Best,
Sun.

CertusNet 

From: Siva
Date: 2015-05-03 11:19
To: user; dev
Subject: Error while connecting to Phoenix from Spark
Hi Everyone,

I am trying to connect to Phoenix from Spark through JdbcRDD and encountered the 
error below. I have tried adding hbase-protocol.jar and phoenix-client.jar while 
submitting the Spark jar:

spark-submit --class PhoenixConn --deploy-mode client --master local --jars /usr/hdp/2.2.4.2-2/phoenix/phoenix-4.2.0.2.2.4.2-2-client.jar,/usr/hdp/2.2.4.2-2/hbase/lib/hbase-protocol-0.98.4.2.2.4.2-2-hadoop2.jar,/usr/hdp/2.2.4.2-2/phoenix/phoenix-4.2.0.2.2.4.2-2-server.jar phoenixconn_2.10-0.0.1.jar
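For reference, a minimal JdbcRDD-against-Phoenix driver might look like the sketch below (Spark 1.x API). The table name TEST comes from the row key in the log; the column names, ZooKeeper quorum, and partition bounds are illustrative assumptions:

```scala
import java.sql.DriverManager
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.JdbcRDD

object PhoenixConn {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("PhoenixConn"))
    // "localhost" is a placeholder for your ZooKeeper quorum.
    val rdd = new JdbcRDD(
      sc,
      () => DriverManager.getConnection("jdbc:phoenix:localhost"),
      // JdbcRDD requires exactly two bind parameters for partition bounds.
      "SELECT ID, NAME FROM TEST WHERE ID >= ? AND ID <= ?",
      lowerBound = 1, upperBound = 100, numPartitions = 2,
      mapRow = r => (r.getInt(1), r.getString(2)))
    rdd.collect().foreach(println)
    sc.stop()
  }
}
```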


15/05/02 23:04:52 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for row \x00\x00TEST
java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:188)
        at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1659)
        at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1614)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:954)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1200)
        at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:353)
        at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:312)
        at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:308)
        at org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:311)
        at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:221)
        at org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:159)
        at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:315)
        at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:305)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:221)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:217)
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:216)
        at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
        at org.apache.spark.rdd.JdbcRDD$$anon$1.<init>(JdbcRDD.scala:89)
        at org.apache.spark.rdd.JdbcRDD.compute(JdbcRDD.scala:73)
        at org.apache.spark.rdd.JdbcRDD.compute(JdbcRDD.scala:53)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:245)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Thanks in advance.

Thanks,
Siva.
