I'm getting the following error when reading a table from Hive. Note the
misspelling 'Primitve' in the stack trace; I can't find it mentioned
anywhere else online.

It seems to occur only with this one particular table I am reading from.
Occasionally the task fails completely; other times it does not.

I run into different variants of the exception, presumably one for each of
the column types involved (LONG, INT, BOOLEAN).

Has anyone else run into this issue? I'm running Spark 1.3.0 with the
standalone cluster manager.
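For reference, the read is roughly the following (the database, table, and app names here are placeholders, not the actual ones):

```scala
// Spark 1.3.0; database/table/app names are placeholders
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object ReadHiveTable {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("read-hive-table")
    val sc = new SparkContext(conf)
    val hc = new HiveContext(sc)

    // The exception is thrown in executor tasks while scanning this table
    val df = hc.sql("SELECT * FROM some_database.some_table")
    df.count()  // sometimes fails with the RuntimeException below
  }
}
```

Nothing exotic there; the failure happens during the table scan itself, before any of my own transformations run.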

java.lang.RuntimeException: Primitve type LONG should not take parameters
        at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory.getLazyObjectInspector(LazyPrimitiveObjectInspectorFactory.java:136)
        at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory.getLazyObjectInspector(LazyPrimitiveObjectInspectorFactory.java:113)
        at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:224)
        at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createColumnarStructInspector(LazyFactory.java:314)
        at org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe.initialize(ColumnarSerDe.java:88)
        at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:118)
        at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:115)
        at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
        at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-and-java-lang-RuntimeException-tp22831.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
