[ https://issues.apache.org/jira/browse/SPARK-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell resolved SPARK-3141.
------------------------------------

       Resolution: Fixed
    Fix Version/s: 1.1.0

Issue resolved by pull request 2045
[https://github.com/apache/spark/pull/2045]

> sortByKey() breaks take()
> -------------------------
>
>                 Key: SPARK-3141
>                 URL: https://issues.apache.org/jira/browse/SPARK-3141
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.1.0
>            Reporter: Davies Liu
>            Assignee: Davies Liu
>            Priority: Blocker
>             Fix For: 1.1.0
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> https://github.com/apache/spark/pull/1898/files#r16449470
> I think there might be two unintended side effects of this change. This code 
> used to work in pyspark:
> sc.parallelize([5,3,4,2,1]).map(lambda x: (x,x)).sortByKey().take(1)
> Now it fails with the error:
> File "<...>/spark/python/pyspark/rdd.py", line 1023, in takeUpToNumLeft
>     yield next(iterator)
> TypeError: 'list' object is not an iterator
> Changing mapFunc and sort back to generators rather than regular functions 
> fixes that problem; the first sketch after the quoted report illustrates why 
> take() needs a real iterator.
> After making that change, there is a second side effect caused by the removal 
> of flatMap: with the default partitioning scheme, the code above returns the 
> unexpected result
> [[(1, 1), (2, 2)]]
> Removing sortByKey, e.g.:
> sc.parallelize([5,3,4,2,1]).map(lambda x: (x,x)).take(1)
> returns the expected result [(5, 5)]. Restoring the call to flatMap resolves 
> this as well; the second sketch below shows the missing flattening step.
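
To make the first side effect concrete, here is a minimal standalone sketch in
plain Python (no Spark needed). The helper names take_up_to and the two
sort_partition_* functions are illustrative stand-ins, not the actual PySpark
internals; the point is only that take() pulls elements with next(), which
works on a generator but not on a plain list returned by a partition function.

    def take_up_to(n, iterator):
        # Simplified stand-in for takeUpToNumLeft in pyspark/rdd.py: it pulls
        # elements one at a time with next(), so it needs a real iterator.
        taken = 0
        while taken < n:
            yield next(iterator)
            taken += 1

    partition = [(5, 5), (3, 3), (4, 4), (2, 2), (1, 1)]

    def sort_partition_as_generator(it):
        # Generator version: a generator is an iterator, so next() works.
        for kv in sorted(it):
            yield kv

    print(list(take_up_to(1, sort_partition_as_generator(partition))))  # [(1, 1)]

    def sort_partition_as_list(it):
        # Plain function returning a list: next() raises the TypeError
        # shown in the traceback quoted above.
        return sorted(it)

    try:
        print(list(take_up_to(1, sort_partition_as_list(partition))))
    except TypeError as err:
        print(err)  # 'list' object is not an iterator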
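
The second side effect is a missing flattening step. The sketch below assumes,
for illustration, that the five keys end up range-partitioned into two
partitions as shown (the real split depends on the partitioner); without a
flatMap-style pass over the per-partition results, take(1) hands back the
first partition's whole list instead of its first element.

    # Hypothetical layout after a range-partitioned sortByKey of
    # [5, 3, 4, 2, 1] mapped to (x, x) pairs; the real split depends
    # on the partitioner.
    partitions = [[(1, 1), (2, 2)], [(3, 3), (4, 4), (5, 5)]]

    # Without flattening, taking one element grabs an entire partition,
    # i.e. the [[(1, 1), (2, 2)]] reported above.
    print(partitions[:1])

    # With the flatMap-style flattening restored, the elements are the
    # key/value pairs themselves and take(1) gives [(1, 1)].
    flattened = [kv for part in partitions for kv in part]
    print(flattened[:1])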


