Repository: spark
Updated Branches:
  refs/heads/branch-1.6 782885786 -> 0dd6c2987


[SPARK-11658] simplify documentation for PySpark combineByKey

Author: Chris Snow <chsnow...@gmail.com>

Closes #9640 from snowch/patch-3.

(cherry picked from commit 68ef61bb656bd9c08239726913ca8ab271d52786)
Signed-off-by: Andrew Or <and...@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/0dd6c298
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/0dd6c298
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/0dd6c298

Branch: refs/heads/branch-1.6
Commit: 0dd6c2987fd80531bae501394e93d6510f022f20
Parents: 7828857
Author: Chris Snow <chsnow...@gmail.com>
Authored: Thu Nov 12 15:50:47 2015 -0800
Committer: Andrew Or <and...@databricks.com>
Committed: Thu Nov 12 15:50:53 2015 -0800

----------------------------------------------------------------------
 python/pyspark/rdd.py | 1 -
 1 file changed, 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/0dd6c298/python/pyspark/rdd.py
----------------------------------------------------------------------
diff --git a/python/pyspark/rdd.py b/python/pyspark/rdd.py
index 56e8922..4b4d596 100644
--- a/python/pyspark/rdd.py
+++ b/python/pyspark/rdd.py
@@ -1760,7 +1760,6 @@ class RDD(object):
         In addition, users can control the partitioning of the output RDD.

         >>> x = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
-        >>> def f(x): return x
         >>> def add(a, b): return a + str(b)
         >>> sorted(x.combineByKey(str, add, add).collect())
         [('a', '11'), ('b', '1')]


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
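The patch drops `def f(x): return x` from the `combineByKey` doctest because that helper was never used: the call passes `str` as the createCombiner and `add` as both the mergeValue and mergeCombiners functions. As a minimal sketch (no Spark required, and the `combine_by_key` helper below is a hypothetical pure-Python stand-in, not PySpark's implementation), the retained doctest's semantics look like this:

```python
def combine_by_key(partitions, create_combiner, merge_value, merge_combiners):
    """Pure-Python sketch of RDD.combineByKey over a list of partitions."""
    # Phase 1: combine values within each partition independently.
    per_partition = []
    for part in partitions:
        combiners = {}
        for k, v in part:
            if k in combiners:
                combiners[k] = merge_value(combiners[k], v)
            else:
                combiners[k] = create_combiner(v)
        per_partition.append(combiners)
    # Phase 2: merge the per-partition combiners across partitions.
    result = {}
    for combiners in per_partition:
        for k, c in combiners.items():
            if k in result:
                result[k] = merge_combiners(result[k], c)
            else:
                result[k] = c
    return sorted(result.items())

def add(a, b):
    return a + str(b)

# Same data as the doctest, split across two hypothetical partitions.
parts = [[("a", 1), ("b", 1)], [("a", 1)]]
print(combine_by_key(parts, str, add, add))  # [('a', '11'), ('b', '1')]
```

Note that `str`, `add`, and `add` fill all three function slots of `combineByKey`, which is why the identity function `f` in the old doctest had no effect and could be removed.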