Hello, I have a simple session table that tracks the pages users visited, keyed by a sessionId. I would like to apply a window function partitioned by sessionId, but I am hitting a type cast exception. I am using Spark 1.5.0.
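In case it helps to reproduce this, here is a minimal sketch of how a DataFrame with the same schema can be built directly in the shell (the sample rows are invented; my real df is loaded from the session table, and all columns are strings, as in the schema below):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.rowNumber

// Invented sample rows with the same columns as the session table.
// All five columns are strings, matching the printSchema output below.
val df = sqlContext.createDataFrame(Seq(
  ("s1", "10.0.0.1", "u1", "2015-09-01 10:00:00", "/home"),
  ("s1", "10.0.0.1", "u1", "2015-09-01 10:00:05", "/search"),
  ("s2", "10.0.0.2", "u2", "2015-09-01 10:01:00", "/home")
)).toDF("sessionid", "ip", "user_id", "timestamp", "page")

The two imports above are the ones I use for rowNumber and Window in the failing call below.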
Here is the sample code:

scala> df.printSchema
root
 |-- sessionid: string (nullable = true)
 |-- ip: string (nullable = true)
 |-- user_id: string (nullable = true)
 |-- timestamp: string (nullable = true)
 |-- page: string (nullable = true)

scala> df.withColumn("num", rowNumber.over(Window.partitionBy("sessionid"))).show(10)

Here is the error stacktrace:

Caused by: java.lang.ClassCastException: org.apache.spark.unsafe.types.UTF8String cannot be cast to java.lang.Integer
        at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106)
        at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getInt(rows.scala:40)
        at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getInt(rows.scala:220)
        at org.apache.spark.sql.catalyst.expressions.JoinedRow.getInt(JoinedRow.scala:82)
        at org.apache.spark.sql.catalyst.expressions.BoundReference.eval(BoundAttribute.scala:45)
        at org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:121)
        at org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:82)
        at org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:61)
        at org.apache.spark.sql.execution.Window$$anonfun$8$$anon$1.next(Window.scala:330)
        at org.apache.spark.sql.execution.Window$$anonfun$8$$anon$1.next(Window.scala:252)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Has anyone encountered this problem before? Any pointers would be greatly appreciated.

Thanks!
Isabelle