On Tue, Mar 1, 2016 at 10:04 AM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
> Yeah, I didn't think about that. Are you volunteering, Todd? :P I can do
> it today. :)

I'll volunteer to review!

-Todd

> J-D
>
> On Tue, Mar 1, 2016 at 9:57 AM, Todd Lipcon <t...@cloudera.com> wrote:
>
>> Perhaps we should target this for 0.7.1 as well, if we're going to do
>> that follow-up release? It seems like it should be an easy fix (and
>> client-side only).
>>
>> -Todd
>>
>> On Tue, Mar 1, 2016 at 9:29 AM, Jean-Daniel Cryans <jdcry...@apache.org>
>> wrote:
>>
>>> Ha, yeah, that's a good one. I opened this JIRA:
>>> https://issues.apache.org/jira/browse/KUDU-1360
>>>
>>> Basically, we forgot to check for nulls :)
>>>
>>> J-D
>>>
>>> On Tue, Mar 1, 2016 at 9:18 AM, Darren Hoo <darren....@gmail.com> wrote:
>>>
>>>> Does Spark SQL on Kudu not support nullable columns?
>>>>
>>>> I created a table in Kudu (0.6.0) that has nullable columns. When I
>>>> query it with Spark SQL (using the Kudu Java client 0.7.0) like this:
>>>>
>>>> sqlContext.load("org.kududb.spark", Map("kudu.table" -> "contents",
>>>> "kudu.master" -> "master1:7051")).registerTempTable("contents")
>>>> sqlContext.sql("SELECT * FROM contents LIMIT 10").collectAsList()
>>>>
>>>> I get this error:
>>>>
>>>> 16/03/02 00:45:42 INFO DAGScheduler: Job 4 failed: collect at
>>>> <console>:20, took 11.813423 s
>>>> org.apache.spark.SparkException: Job aborted due to stage failure:
>>>> Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3
>>>> in stage 7.0 (TID 62, slave29): java.lang.IllegalArgumentException:
>>>> The requested column (4) is null
>>>>   at org.kududb.client.RowResult.checkNull(RowResult.java:475)
>>>>   at org.kududb.client.RowResult.getString(RowResult.java:321)
>>>>   at org.kududb.client.RowResult.getString(RowResult.java:308)
>>>>   at org.kududb.spark.KuduRelation.org$kududb$spark$KuduRelation$$getKuduValue(DefaultSource.scala:144)
>>>>   at org.kududb.spark.KuduRelation$$anonfun$buildScan$1$$anonfun$apply$1.apply(DefaultSource.scala:126)
>>>>   at org.kududb.spark.KuduRelation$$anonfun$buildScan$1$$anonfun$apply$1.apply(DefaultSource.scala:126)
>>>>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>>>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>>>   at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>>>>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>>>>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>>>>   at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
>>>>   at org.kududb.spark.KuduRelation$$anonfun$buildScan$1.apply(DefaultSource.scala:126)
>>>>   at org.kududb.spark.KuduRelation$$anonfun$buildScan$1.apply(DefaultSource.scala:124)
>>>>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>>>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>>>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>>>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
>>>>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>>   at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>>>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>>>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>>>   at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>>>>   at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>>>>   at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>>>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>>>>   at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>>>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>>>>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
>>>>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
>>>>   at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
>>>>   at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
>>>>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>>>>   at org.apache.spark.scheduler.Task.run(Task.scala:88)
>>>>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>>>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>   at java.lang.Thread.run(Thread.java:745)
>>>>
>>>> Is this due to a version incompatibility between my Kudu server
>>>> (0.6.0) and the Java client (0.7.0)?
>>>>
>>>
>>
>>
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
>>
>
>

--
Todd Lipcon
Software Engineer, Cloudera
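For context on "we forgot to check for nulls": the stack trace shows
KuduRelation's getKuduValue calling RowResult.getString() on a null cell,
which throws. A minimal sketch of the kind of guard KUDU-1360 calls for is
below; the object name is invented for illustration, the real code lives in
DefaultSource.scala, and the actual patch may differ in shape:

    import org.kududb.Type
    import org.kududb.client.RowResult

    object NullSafeGetter {
      // Guard every typed getter with RowResult.isNull: getString() and
      // friends throw IllegalArgumentException ("The requested column (i)
      // is null") on a null cell, which is exactly the crash above.
      def getKuduValue(rowResult: RowResult, i: Int): Any =
        if (rowResult.isNull(i)) {
          null // surface SQL NULL to Spark as a null reference
        } else {
          rowResult.getColumnType(i) match {
            case Type.STRING => rowResult.getString(i)
            case Type.INT64  => rowResult.getLong(i)
            case Type.INT32  => rowResult.getInt(i)
            case Type.BOOL   => rowResult.getBoolean(i)
            // remaining column types elided for brevity
            case other =>
              throw new IllegalArgumentException(s"Unhandled type: $other")
          }
        }
    }

This keeps the per-type dispatch the connector already does and only adds
the isNull check in front of it, so non-null columns take the same path as
before.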
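Until the fix ships (possibly in the proposed 0.7.1), one way to read such
a table without tripping the exception is to scan it with the Java client
directly and check isNull before each typed getter. A rough sketch, reusing
the master address and table name from the report and the column index (4)
the trace flagged:

    import org.kududb.client.KuduClient

    object ScanContents {
      def main(args: Array[String]): Unit = {
        // Master address and table name taken from the report above.
        val client = new KuduClient.KuduClientBuilder("master1:7051").build()
        try {
          val table = client.openTable("contents")
          val scanner = client.newScannerBuilder(table).build()
          while (scanner.hasMoreRows) {
            val rows = scanner.nextRows()
            while (rows.hasNext) {
              val row = rows.next()
              // Check for null before calling the typed getter.
              val value = if (row.isNull(4)) "NULL" else row.getString(4)
              println(value)
            }
          }
        } finally {
          client.shutdown()
        }
      }
    }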