Repository: spark
Updated Branches:
  refs/heads/master 12a0784ac -> 68ef61bb6
[SPARK-11658] simplify documentation for PySpark combineByKey

Author: Chris Snow <chsnow...@gmail.com>

Closes #9640 from snowch/patch-3.

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/68ef61bb
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/68ef61bb
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/68ef61bb

Branch: refs/heads/master
Commit: 68ef61bb656bd9c08239726913ca8ab271d52786
Parents: 12a0784
Author: Chris Snow <chsnow...@gmail.com>
Authored: Thu Nov 12 15:50:47 2015 -0800
Committer: Andrew Or <and...@databricks.com>
Committed: Thu Nov 12 15:50:47 2015 -0800

----------------------------------------------------------------------
 python/pyspark/rdd.py | 1 -
 1 file changed, 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/68ef61bb/python/pyspark/rdd.py
----------------------------------------------------------------------
diff --git a/python/pyspark/rdd.py b/python/pyspark/rdd.py
index 56e8922..4b4d596 100644
--- a/python/pyspark/rdd.py
+++ b/python/pyspark/rdd.py
@@ -1760,7 +1760,6 @@ class RDD(object):
         In addition, users can control the partitioning of the output RDD.

         >>> x = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
-        >>> def f(x): return x
         >>> def add(a, b): return a + str(b)
         >>> sorted(x.combineByKey(str, add, add).collect())
         [('a', '11'), ('b', '1')]


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
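For context on the doctest this commit touches: `combineByKey` takes three functions, `createCombiner` (turn the first value seen for a key into a combiner), `mergeValue` (fold another value into a partition-local combiner), and `mergeCombiners` (merge combiners from different partitions). The removed `def f(x): return x` line was unused by the example, which is why it could be deleted. The sketch below mimics those semantics in plain Python, with no SparkContext, using the same data and functions as the doctest; `combine_by_key` and the two-partition split are illustrative assumptions, not part of the commit.

```python
def combine_by_key(partitions, create_combiner, merge_value, merge_combiners):
    """Plain-Python mimic of RDD.combineByKey over a list of partitions,
    where each partition is a list of (key, value) pairs."""
    def combine_partition(pairs):
        # Partition-local pass: first value per key goes through
        # create_combiner, later values through merge_value.
        combiners = {}
        for k, v in pairs:
            if k in combiners:
                combiners[k] = merge_value(combiners[k], v)
            else:
                combiners[k] = create_combiner(v)
        return combiners

    # Cross-partition pass: merge per-partition combiners with merge_combiners.
    merged = {}
    for part in map(combine_partition, partitions):
        for k, c in part.items():
            merged[k] = merge_combiners(merged[k], c) if k in merged else c
    return sorted(merged.items())


# Same data and functions as the doctest, split across two partitions
# (the split is arbitrary, chosen to exercise merge_combiners):
parts = [[("a", 1), ("b", 1)], [("a", 1)]]
add = lambda a, b: a + str(b)
print(combine_by_key(parts, str, add, add))
# -> [('a', '11'), ('b', '1')]
```

Note that `add` is used both as `mergeValue` and `mergeCombiners` here, exactly as in the doctest; that only works because `str(b)` is a no-op when `b` is already a string.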