[ https://issues.apache.org/jira/browse/SPARK-10731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901092#comment-14901092 ]
Yin Huai edited comment on SPARK-10731 at 9/21/15 6:05 PM:
-----------------------------------------------------------

Looks like the problem is that df.collect does not work well with limit. In Scala, {{df.limit(1).rdd.count()}} will also trigger the problem: when we call {{df.limit(1).rdd}}, we launch a job that fetches one record from every partition.

was (Author: yhuai): Looks like the problem is that df.collect does not work well with limit. In Scala, {{df.limit(1).rdd.count()}} will also trigger the problem.

> The head() implementation of dataframe is very slow
> ---------------------------------------------------
>
>                 Key: SPARK-10731
>                 URL: https://issues.apache.org/jira/browse/SPARK-10731
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.4.1, 1.5.0
>            Reporter: Jerry Lam
>              Labels: pyspark
>
> {code}
> df = sqlContext.read.parquet("someparquetfiles")
> df.head()
> {code}
> The above lines take over 15 minutes. It seems the DataFrame requires 3
> stages to return the first row. It reads all of the data (about 1 billion
> rows) and runs Limit twice. The take(1) implementation in the RDD performs
> much better.
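To make the two paths concrete, here is a minimal Scala sketch of the behavior described above. It is illustrative only: the local SparkContext, the synthetic 200-partition DataFrame, and the object/app name are assumptions standing in for the parquet data in the report, using Spark 1.x-era SQLContext APIs.

{code}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object Spark10731Sketch {
  def main(args: Array[String]): Unit = {
    // Local context and synthetic data are assumptions for illustration.
    val sc = new SparkContext(
      new SparkConf().setAppName("SPARK-10731 sketch").setMaster("local[4]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // 200 partitions stand in for the many parquet splits in the report.
    val df = sc.parallelize(1 to 1000000, 200).toDF("id")

    // Path reported as slow: limit(1).rdd drops down to the RDD API, which
    // (per the comment above) launches a job that pulls one record from
    // every partition before the count runs.
    val viaLimitRdd = df.limit(1).rdd.count()

    // Path the reporter found much faster: RDD take(1) stops scanning as
    // soon as it has enough rows, typically touching only one partition.
    val viaTake = df.rdd.take(1)

    println(s"limit(1).rdd.count() = $viaLimitRdd, take(1) = ${viaTake.toSeq}")
    sc.stop()
  }
}
{code}

Comparing the Spark UI for the two actions should show the first one running a stage over all 200 partitions, while the second finishes after reading a single partition.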