[ https://issues.apache.org/jira/browse/SPARK-2620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14074673#comment-14074673 ]

Matei Zaharia commented on SPARK-2620:
--------------------------------------

The problem is that case classes are compiled differently in the Spark shell
than in local tests. In spark-shell a case class currently becomes an inner
class of the wrapper object we create for each command, which is unlike the
behavior in plain Scala or the plain Scala REPL. If you add a test for this
to ReplSuite, it will fail.
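
A minimal sketch of such a ReplSuite test, following the suite's existing
runInterpreter/assertContains helpers (the test name and the exact output
string asserted are assumptions, not from this thread):

  test("SPARK-2620: case class defined in the shell works as a reduceByKey key") {
    val output = runInterpreter("local",
      """
        |case class P(name: String)
        |val ps = Array(P("alice"), P("bob"), P("charly"), P("bob"))
        |sc.parallelize(ps).map(x => (x, 1)).reduceByKey((x, y) => x + y).collect
      """.stripMargin)
    assertDoesNotContain("error:", output)
    assertDoesNotContain("Exception", output)
    // With consistent equals/hashCode the two P("bob") records merge.
    assertContains("(P(bob),2)", output)
  }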

> case class cannot be used as key for reduce
> -------------------------------------------
>
>                 Key: SPARK-2620
>                 URL: https://issues.apache.org/jira/browse/SPARK-2620
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0
>         Environment: reproduced on spark-shell local[4]
>            Reporter: Gerard Maas
>            Priority: Critical
>              Labels: case-class, core
>
> Using a case class as a key doesn't seem to work properly on Spark 1.0.0.
> A minimal example:
> case class P(name: String)
> val ps = Array(P("alice"), P("bob"), P("charly"), P("bob"))
> sc.parallelize(ps).map(x => (x, 1)).reduceByKey((x, y) => x + y).collect
> [Spark shell local mode] res: Array[(P, Int)] = Array((P(bob),1),
> (P(bob),1), (P(alice),1), (P(charly),1))
> This contrasts with the expected behavior, which should be equivalent to:
> sc.parallelize(ps).map(x => (x.name, 1)).reduceByKey((x, y) => x + y).collect
> Array[(String, Int)] = Array((charly,1), (alice,1), (bob,2))
> groupByKey and distinct exhibit the same behavior.
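
For comparison, a minimal standalone sketch of the same reproduction (the
object name and master URL are illustrative). With P compiled as a top-level
class, outside any REPL wrapper, the keys merge as expected:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.SparkContext._ // pair-RDD implicits (Spark 1.x)

  // Top-level case class: equals/hashCode depend only on `name`.
  case class P(name: String)

  object CaseClassKeyRepro {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(
        new SparkConf().setAppName("SPARK-2620 repro").setMaster("local[4]"))
      val ps = Array(P("alice"), P("bob"), P("charly"), P("bob"))
      val res = sc.parallelize(ps).map(x => (x, 1)).reduceByKey((x, y) => x + y).collect()
      res.foreach(println) // expected: (P(alice),1), (P(bob),2), (P(charly),1) in some order
      sc.stop()
    }
  }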



--
This message was sent by Atlassian JIRA
(v6.2#6252)
