Re: ClassCastException: SomeCaseClass cannot be cast to org.apache.spark.sql.Row

2016-05-24 Thread Reynold Xin
Thanks, Koert. This is great. Please keep them coming.




Re: ClassCastException: SomeCaseClass cannot be cast to org.apache.spark.sql.Row

2016-05-24 Thread Koert Kuipers
https://issues.apache.org/jira/browse/SPARK-15507



Re: ClassCastException: SomeCaseClass cannot be cast to org.apache.spark.sql.Row

2016-05-24 Thread Ted Yu
Please log a JIRA.

Thanks



ClassCastException: SomeCaseClass cannot be cast to org.apache.spark.sql.Row

2016-05-24 Thread Koert Kuipers
Hello,
As we continue to test Spark 2.0.0-SNAPSHOT in-house, we ran into the following while trying to port an existing application from Spark 1.6.1 to 2.0.0-SNAPSHOT.

Given this code:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

case class Test(a: Int, b: String)
val rdd = sc.parallelize(List(Row(List(Test(5, "ha"), Test(6, "ba")))))
val schema = StructType(Seq(
  StructField("x", ArrayType(
    StructType(Seq(
      StructField("a", IntegerType, false),
      StructField("b", StringType, true)
    )),
    true),
  true)
))
val df = sqlc.createDataFrame(rdd, schema)
df.show

This works fine in Spark 1.6.1 and gives:

++
|   x|
++
|[[5,ha], [6,ba]]|
++

But in Spark 2.0.0-SNAPSHOT I get:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.RuntimeException: Error while encoding: java.lang.ClassCastException: Test cannot be cast to org.apache.spark.sql.Row
[info] getexternalrowfield(input[0, org.apache.spark.sql.Row, false], 0, x, IntegerType) AS x#0
[info] +- getexternalrowfield(input[0, org.apache.spark.sql.Row, false], 0, x, IntegerType)
[info]    +- input[0, org.apache.spark.sql.Row, false]
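
[Editor's note: for anyone hitting this before SPARK-15507 is resolved, a minimal workaround sketch. It assumes the same sc, sqlc, and schema as the repro above, and the hypothetical names rowRdd and df2. The idea is that the Row-based createDataFrame API expects a Row wherever the schema declares a StructType, so expressing the nested structs as Rows instead of Test instances keeps the row encoder away from the case class:

import org.apache.spark.sql.Row

// Same schema as above, but the array elements are Rows rather than
// Test case class instances, so the row encoder never sees a case class.
val rowRdd = sc.parallelize(List(Row(List(Row(5, "ha"), Row(6, "ba")))))
val df2 = sqlc.createDataFrame(rowRdd, schema)
df2.show

Alternatively, dropping the explicit schema and letting Spark derive it from the case classes themselves (e.g. a DataFrame or Dataset built from a top-level case class that holds a Seq[Test]) avoids this code path entirely.]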