Repository: spark
Updated Branches:
  refs/heads/master c0c397509 -> 5a4021998


[SPARK-12604][CORE] Addendum - use casting vs mapValues for countBy{Key,Value}

Per rxin, use casting instead of mapValues for countByKey and countByValue as
well. Let's see if this passes.

Author: Sean Owen <so...@cloudera.com>

Closes #10641 from srowen/SPARK-12604.2.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/5a402199
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/5a402199
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/5a402199

Branch: refs/heads/master
Commit: 5a4021998ab0f1c8bbb610eceecdf879d149a7b8
Parents: c0c3975
Author: Sean Owen <so...@cloudera.com>
Authored: Thu Jan 7 17:21:03 2016 -0800
Committer: Reynold Xin <r...@databricks.com>
Committed: Thu Jan 7 17:21:03 2016 -0800

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala | 2 +-
 core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/5a402199/core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala b/core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala
index 76752e1..59af105 100644
--- a/core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala
+++ b/core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala
@@ -296,7 +296,7 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
 
   /** Count the number of elements for each key, and return the result to the master as a Map. */
   def countByKey(): java.util.Map[K, jl.Long] =
-    mapAsSerializableJavaMap(rdd.countByKey().mapValues(jl.Long.valueOf))
+    mapAsSerializableJavaMap(rdd.countByKey()).asInstanceOf[java.util.Map[K, jl.Long]]
 
   /**
    * Approximate version of countByKey that can return a partial result if it does

http://git-wip-us.apache.org/repos/asf/spark/blob/5a402199/core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala b/core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
index 1b1a9dc..2424382 100644
--- a/core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
+++ b/core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
@@ -448,7 +448,7 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, This]] extends Serializable {
    * combine step happens locally on the master, equivalent to running a single reduce task.
    */
   def countByValue(): java.util.Map[T, jl.Long] =
-    mapAsSerializableJavaMap(rdd.countByValue().mapValues(jl.Long.valueOf))
+    mapAsSerializableJavaMap(rdd.countByValue()).asInstanceOf[java.util.Map[T, jl.Long]]
 
   /**
    * (Experimental) Approximate version of countByValue().
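
A minimal sketch (not part of the commit) of why the unchecked cast in both
hunks is safe: generics are erased on the JVM, so the values of a Scala
Map[K, Long] are already stored as boxed java.lang.Long instances at runtime.
The cast therefore yields the same objects that mapValues(jl.Long.valueOf)
would produce, without building a new map. The object and map contents below
are illustrative only:

```scala
import java.{lang => jl}

object CastVsMapValues {
  def main(args: Array[String]): Unit = {
    // A Scala Map[String, Long] boxes its values as jl.Long at runtime
    // because type parameters are erased, so this unchecked cast is safe.
    val counts: Map[String, Long] = Map("a" -> 2L, "b" -> 3L)
    val asJavaLongs = counts.asInstanceOf[Map[String, jl.Long]]

    // The boxed values can be used directly as java.lang.Long:
    println(asJavaLongs("a").longValue() + asJavaLongs("b").longValue()) // prints 5
  }
}
```

The same reasoning applies to both countByKey and countByValue: the cast is a
zero-cost view of the existing map rather than a per-entry copy.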


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
