Hi There,
I am using a Spark SQL left outer join query.
The SQL query is:

scala> val test = sqlContext.sql("SELECT e.departmentID FROM employee e LEFT OUTER JOIN department d ON d.departmentId = e.departmentId").toDF()
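For reference, the employee and department tables are registered as temp tables before the query runs. The real tables on my side are different, but a simplified, self-contained sketch of the setup looks roughly like this (the case classes and sample rows are just placeholders; only the departmentId column matters for the join):

// Placeholder schemas -- only departmentId is relevant to the join.
case class Employee(name: String, departmentId: Int)
case class Department(departmentId: Int, deptName: String)

import sqlContext.implicits._

// Build small DataFrames from local collections and register them as
// temp tables so the SQL above can refer to them by name.
val employee = sc.parallelize(Seq(Employee("a", 1), Employee("b", 2))).toDF()
val department = sc.parallelize(Seq(Department(1, "eng"), Department(3, "hr"))).toDF()
employee.registerTempTable("employee")
department.registerTempTable("department")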
In Spark 1.3.1 this works fine, but the latest pull gives the error below:
15/04/27 23:02:49 ERROR Executor: Exception in task 4.0 in stage 67.0 (TID 118)
java.lang.ClassCastException
15/04/27 23:02:49 INFO TaskSetManager: Lost task 4.0 in stage 67.0 (TID 118) on executor localhost: java.lang.ClassCastException (null) [duplicate 1]
15/04/27 23:02:49 ERROR Executor: Exception in task 2.0 in stage 67.0 (TID 116)
java.lang.ClassCastException
15/04/27 23:02:49 INFO TaskSetManager: Lost task 2.0 in stage 67.0 (TID 116) on executor localhost: java.lang.ClassCastException (null) [duplicate 2]
15/04/27 23:02:49 ERROR Executor: Exception in task 3.0 in stage 67.0 (TID 117)
java.lang.ClassCastException
15/04/27 23:02:49 INFO TaskSetManager: Lost task 3.0 in stage 67.0 (TID 117) on executor localhost: java.lang.ClassCastException (null) [duplicate 3]
15/04/27 23:02:49 ERROR Executor: Exception in task 0.0 in stage 66.0 (TID 112)
java.lang.ClassCastException
15/04/27 23:02:49 INFO TaskSetManager: Lost task 0.0 in stage 66.0 (TID 112) on executor localhost: java.lang.ClassCastException (null) [duplicate 1]
15/04/27 23:02:49 INFO TaskSchedulerImpl: Removed TaskSet 66.0, whose tasks have all completed, from pool
15/04/27 23:02:49 ERROR Executor: Exception in task 5.0 in stage 67.0 (TID 119)
java.lang.ClassCastException
15/04/27 23:02:49 INFO TaskSetManager: Lost task 5.0 in stage 67.0 (TID 119) on executor localhost: java.lang.ClassCastException (null) [duplicate 4]
15/04/27 23:02:49 ERROR Executor: Exception in task 0.0 in stage 67.0 (TID 114)
java.lang.ClassCastException
15/04/27 23:02:49 INFO TaskSetManager: Lost task 0.0 in stage 67.0 (TID 114) on executor localhost: java.lang.ClassCastException (null) [duplicate 5]
15/04/27 23:02:49 INFO TaskSchedulerImpl: Removed TaskSet 67.0, whose tasks have all completed, from pool

org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 66.0 failed 1 times, most recent failure: Lost task 1.0 in stage 66.0 (TID 113, localhost): java.lang.ClassCastException

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1241)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1232)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1231)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1231)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:705)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:705)
  at scala.Option.foreach(Option.scala:236)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:705)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1424)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1385)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Thanks,
Kiran
