[ https://issues.apache.org/jira/browse/SPARK-17514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485821#comment-15485821 ]
Apache Spark commented on SPARK-17514:
--------------------------------------

User 'JoshRosen' has created a pull request for this issue:
https://github.com/apache/spark/pull/15068

> df.take(1) and df.limit(1).collect() perform differently in Python
> ------------------------------------------------------------------
>
>                 Key: SPARK-17514
>                 URL: https://issues.apache.org/jira/browse/SPARK-17514
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>
> In PySpark, {{df.take(1)}} ends up running a single-stage job which computes
> only one partition of {{df}}, while {{df.limit(1).collect()}} ends up
> computing all partitions of {{df}} and runs a two-stage job. This difference
> in performance is confusing, so I think that we should generalize the fix
> from SPARK-10731 so that {{Dataset.collect()}} can be implemented efficiently
> in Python.
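
For readers who want to observe the difference described above, here is a minimal PySpark sketch; the app name, row count, and partition count are arbitrary illustrative choices, not taken from the issue.

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("limit-vs-take").getOrCreate()

# A DataFrame spread across many partitions (values here are illustrative).
df = spark.range(0, 1000000, numPartitions=100)

# take(1): planned as a single-stage job that computes only as many
# partitions of df as needed to produce one row.
rows_from_take = df.take(1)

# limit(1).collect(): per this issue, computes all partitions of df and
# runs a two-stage job, even though only one row is returned.
rows_from_limit = df.limit(1).collect()

print(rows_from_take, rows_from_limit)
{code}

Comparing the jobs for the two calls in the Spark UI should show the single-stage versus two-stage difference the report describes.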