Hi,

I am trying to run the Spark plugin DataFrame sample code available here (
https://phoenix.apache.org/phoenix_spark.html) and am getting the following
exception. I am running the code against HBase 1.1.1, Spark 1.5.0, and
Phoenix 4.5.2. HBase is running in standalone mode, locally on OS X. Any
ideas what might be causing this exception?
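
For reference, what I'm running is essentially the DataFrame example from
that page. A minimal sketch (the zkUrl below assumes my local standalone
ZooKeeper; the table and column names follow the documentation's TABLE1
example):

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._

val sc = new SparkContext("local", "phoenix-test")
val sqlContext = new SQLContext(sc)

// Load TABLE1 as a DataFrame via the Phoenix data source
val df = sqlContext.load(
  "org.apache.phoenix.spark",
  Map("table" -> "TABLE1", "zkUrl" -> "localhost:2181")
)

// The exception is thrown once an action forces the query to run
df.filter(df("COL1") === "test_row_1" && df("ID") === 1L)
  .select(df("ID"))
  .show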


java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.GenericMutableRow cannot be cast to org.apache.spark.sql.Row
    at org.apache.spark.sql.SQLContext$$anonfun$7.apply(SQLContext.scala:439) ~[spark-sql_2.11-1.5.0.jar:1.5.0]
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:363) ~[scala-library-2.11.4.jar:na]
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:363) ~[scala-library-2.11.4.jar:na]
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:363) ~[scala-library-2.11.4.jar:na]
    at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:366) ~[spark-sql_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.start(TungstenAggregationIterator.scala:622) ~[spark-sql_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1.org$apache$spark$sql$execution$aggregate$TungstenAggregate$$anonfun$$executePartition$1(TungstenAggregate.scala:110) ~[spark-sql_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:119) ~[spark-sql_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:119) ~[spark-sql_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.rdd.MapPartitionsWithPreparationRDD.compute(MapPartitionsWithPreparationRDD.scala:64) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.scheduler.Task.run(Task.scala:88) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) ~[spark-core_2.11-1.5.0.jar:1.5.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]

Thanks,
Babar
