Hi,

I am pointing Spark SQL at a MySQL table using CarbonJDBC. The RDBMS table contains the columns `api`, `day`, `week`, `time`, etc.

But when reading values of the `day` column, I get the following error:

*SparkSQL >* select day from APIThrottleSummaryData;
ERROR :  Job aborted due to stage failure: Task 0 in stage 11.0 failed 1
times, most recent failure: Lost task 0.0 in stage 11.0 (TID 23,
localhost): java.lang.IllegalArgumentException: Unsupported field
StructField(day,ShortType,true)
at org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConversions$1.apply(JDBCRDD.scala:342)
at org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConversions$1.apply(JDBCRDD.scala:329)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.sql.jdbc.JDBCRDD.getConversions(JDBCRDD.scala:329)
at org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:374)
at org.apache.spark.sql.jdbc.JDBCRDD.compute(JDBCRDD.scala:350)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
*SparkSQL >*
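For what it's worth, one workaround I'm considering (a sketch only, using the plain Spark JDBC data source rather than CarbonJDBC; the URL, driver, and column list are placeholders for my actual setup): push a CAST down to MySQL so the SMALLINT `day` column never surfaces as ShortType on the Spark side.

```scala
// Untested sketch: register the table via a subquery that casts the
// SMALLINT column. MySQL's CAST(... AS SIGNED) yields an integer type
// that Spark maps to a supported LongType instead of ShortType.
val df = sqlContext.read.format("jdbc").options(Map(
  "url"     -> "jdbc:mysql://localhost:3306/analytics",   // placeholder
  "driver"  -> "com.mysql.jdbc.Driver",
  "dbtable" -> ("(SELECT api, CAST(day AS SIGNED) AS day, week, time " +
                "FROM APIThrottleSummaryData) AS t")
)).load()
df.registerTempTable("APIThrottleSummaryData")
```

But I'd prefer a fix that works with CarbonJDBC directly, if there is one.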

Any solutions?

Thanks and regards.

-- 
Rukshan Chathuranga.
Software Engineer.
WSO2, Inc.
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
