I am trying to write a Spark DataFrame to Phoenix.

Here is my code:


   df.write.format("org.apache.phoenix.spark")
     .mode(SaveMode.Overwrite)
     .options(collection.immutable.Map(
       "zkUrl" -> "localhost:2181/hbase-unsecure",
       "table" -> "TEST"))
     .save()

and I am getting the following exception:


   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 411, ip-xxxxx-xx-xxx.ap-southeast-1.compute.internal): java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:phoenix:localhost:2181:/hbase-unsecure;
           at org.apache.phoenix.mapreduce.PhoenixOutputFormat.getRecordWriter(PhoenixOutputFormat.java:58)
           at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1030)
           at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1014)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
           at org.apache.spark.scheduler.Task.run(Task.scala:88)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
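
My understanding is that the driver lookup happens on the executors (inside PhoenixOutputFormat.getRecordWriter), so I assume the Phoenix client jar, which contains org.apache.phoenix.jdbc.PhoenixDriver, has to be on the executor classpath as well. Is something along these lines the right way to wire that up (the jar path is a placeholder)?

   import org.apache.spark.SparkConf

   val conf = new SparkConf()
     .setAppName("PhoenixWriteTest")
     // Placeholder path: make the Phoenix client jar visible to both the
     // driver and the executors so DriverManager can find the driver class.
     .set("spark.driver.extraClassPath", "/path/to/phoenix-client.jar")
     .set("spark.executor.extraClassPath", "/path/to/phoenix-client.jar")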
