Repository: spark
Updated Branches:
  refs/heads/master 79d07d660 -> 67fa71cba

Added doctest for map function in rdd.py

Doctest added for map in rdd.py

Author: Jyotiska NK <jyotiska...@gmail.com>

Closes #177 from jyotiska/pyspark_rdd_map_doctest and squashes the following commits:

a38527f [Jyotiska NK] Added doctest for map function in rdd.py

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/67fa71cb
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/67fa71cb
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/67fa71cb

Branch: refs/heads/master
Commit: 67fa71cba2cc07a65478899592e6ebad000e24c5
Parents: 79d07d6
Author: Jyotiska NK <jyotiska...@gmail.com>
Authored: Wed Mar 19 14:04:45 2014 -0700
Committer: Matei Zaharia <ma...@databricks.com>
Committed: Wed Mar 19 14:04:45 2014 -0700

----------------------------------------------------------------------
 python/pyspark/rdd.py | 4 ++++
 1 file changed, 4 insertions(+)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/spark/blob/67fa71cb/python/pyspark/rdd.py
----------------------------------------------------------------------
diff --git a/python/pyspark/rdd.py b/python/pyspark/rdd.py
index ae09dbf..ca2dc11 100644
--- a/python/pyspark/rdd.py
+++ b/python/pyspark/rdd.py
@@ -180,6 +180,10 @@ class RDD(object):
     def map(self, f, preservesPartitioning=False):
         """
         Return a new RDD by applying a function to each element of this RDD.
+
+        >>> rdd = sc.parallelize(["b", "a", "c"])
+        >>> sorted(rdd.map(lambda x: (x, 1)).collect())
+        [('a', 1), ('b', 1), ('c', 1)]
         """
         def func(split, iterator): return imap(f, iterator)
         return PipelinedRDD(self, func, preservesPartitioning)
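For readers without a running SparkContext, the semantics the new doctest exercises can be sketched in plain Python: `RDD.map` applies `f` lazily to each element (via `imap` over each partition's iterator), and `collect` materializes the results, which the doctest then sorts for a deterministic output. The snippet below is a minimal stand-in, not the PySpark implementation itself:

```python
# Plain-Python sketch of the doctest's behavior (no Spark required).
# `data` stands in for the parallelized RDD ["b", "a", "c"].
data = ["b", "a", "c"]

# map(lambda x: (x, 1)) over the elements, evaluated lazily like imap
# over a partition iterator, then "collected" into a list and sorted.
pairs = sorted(map(lambda x: (x, 1), data))

print(pairs)  # [('a', 1), ('b', 1), ('c', 1)]
```

Sorting matters in the doctest because `collect` returns elements in partition order, which is not guaranteed to match the input order across cluster configurations.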