[ https://issues.apache.org/jira/browse/SPARK-16321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15404676#comment-15404676 ]
Apache Spark commented on SPARK-16321:
--------------------------------------

User 'maver1ck' has created a pull request for this issue:
https://github.com/apache/spark/pull/14465

> Spark 2.0 performance drop vs Spark 1.6 when reading parquet file
> -----------------------------------------------------------------
>
>                 Key: SPARK-16321
>                 URL: https://issues.apache.org/jira/browse/SPARK-16321
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.0.0
>            Reporter: Maciej Bryński
>            Priority: Critical
>         Attachments: Spark16.nps, Spark2.nps, spark16._trace.png, spark16_query.nps, spark2_nofilterpushdown.nps, spark2_query.nps, spark2_trace.png, visualvm_spark16.png, visualvm_spark2.png, visualvm_spark2_G1GC.png
>
> *UPDATE*
> Please start with this comment:
> https://issues.apache.org/jira/browse/SPARK-16321?focusedCommentId=15383785&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15383785
> I assume the problem results from a performance issue with reading parquet files.
>
> *Original issue description*
> I ran some tests on a parquet file with many nested columns (about 30 GB in 400 partitions), and Spark 2.0 is 2x slower.
> {code}
> df = sqlctx.read.parquet(path)
> df.where('id > some_id').rdd.flatMap(lambda r: [r.id] if not r.id % 100000 else []).collect()
> {code}
> Spark 1.6 -> 2.3 min
> Spark 2.0 -> 4.6 min (2x slower)
> I used BasicProfiler for this task, and the cumulative time was:
> Spark 1.6 - 4300 sec
> Spark 2.0 - 5800 sec
> Should I expect such a drop in performance?
> I don't know how to prepare sample data that shows the problem.
> Any ideas? Or public data with many nested columns?
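For anyone trying to reproduce this, below is a minimal sketch of one way to generate sample data with many nested columns. It assumes a Python 2 PySpark shell where sc (SparkContext) and sqlctx (SQLContext) already exist, as in the snippet above; the schema, row count, partition count, and output path are illustrative placeholders, not details from the issue.

{code}
# A minimal sketch for producing sample parquet data with nested columns.
# Assumes a Python 2 PySpark shell with sc and sqlctx already defined.
# Schema, row count, partition count, and path are illustrative only.
from pyspark.sql import Row

def make_row(i):
    # Each row carries nested structs to mimic a deeply structured schema.
    return Row(id=i,
               nested=Row(a=i * 2,
                          b=str(i),
                          c=Row(x=i % 7, y=float(i))))

rows = sc.parallelize(xrange(1000000), 400).map(make_row)
sqlctx.createDataFrame(rows).write.parquet('/tmp/nested_sample')

# Re-run the benchmark from the description against the generated file.
df = sqlctx.read.parquet('/tmp/nested_sample')
df.where('id > 1000').rdd.flatMap(
    lambda r: [r.id] if not r.id % 100000 else []).collect()
{code}

The cumulative timings in the description were gathered with BasicProfiler; PySpark profiling can be enabled by setting spark.python.profile to true on the SparkConf before the context is created, and the results printed with sc.show_profiles().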