[ https://issues.apache.org/jira/browse/SPARK-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375243#comment-14375243 ]
Cristian commented on SPARK-5863:
---------------------------------

I'm a bit confused. The original JIRA refers to a very specific issue that has a very simple solution (I think). This:
{code}
r.toSeq.zip(schema.fields.map(_.dataType))
{code}
can be written as:
{code}
r.toSeq.view.zip(schema.fields.view.map(_.dataType))
{code}
Alternatively, schema.fields.map(_.dataType) can be computed once and passed into the function instead of being recalculated on every call. Also, this is a clear regression, as I showed initially, and it seems to have a simple fix, so why not fix the 1.2 branch as well?

> Improve performance of convertToScala codepath.
> -----------------------------------------------
>
>                 Key: SPARK-5863
>                 URL: https://issues.apache.org/jira/browse/SPARK-5863
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.2.0, 1.2.1
>            Reporter: Cristian
>            Priority: Critical
>
> While doing some performance testing on reading Parquet files, I noticed that moving from Spark 1.1 to 1.2 made performance 3x worse. In the profiler, the culprit showed up as ScalaReflection.convertRowToScala. In particular, this zip is the issue:
> {code}
> r.toSeq.zip(schema.fields.map(_.dataType))
> {code}
> There is a comment on that code noting that it is slow, but it was never fixed. It produces a 3x degradation in Parquet read performance, at least in my test case.
>
> Edit: the map is part of the issue as well. This whole code block is in a tight loop and allocates a new ListBuffer that needs to grow for each transformation. A possible solution is to change to using seq.view, which would allocate iterators instead.
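The hoisting suggested in the comment can be sketched as follows. This is a minimal, self-contained illustration, not Spark's actual code: `Field` and `Schema` are hypothetical stand-ins for Spark's `StructField` and `StructType`, and `convertEager`/`convertReused` are invented names for the two variants being compared.

```scala
// Hypothetical stand-ins for Spark's StructField/StructType, for illustration only.
case class Field(name: String, dataType: String)
case class Schema(fields: Array[Field])

object ConvertSketch {
  // Eager variant: re-runs schema.fields.map(...) and allocates a fresh
  // zipped collection for every row -- the per-row cost reported in the issue.
  def convertEager(row: Seq[Any], schema: Schema): Seq[(Any, String)] =
    row.zip(schema.fields.map(_.dataType))

  // Proposed variant: the dataTypes array is computed once, outside the
  // per-row loop, and passed in, so only the zip itself runs per row.
  def convertReused(row: Seq[Any], dataTypes: Array[String]): Seq[(Any, String)] =
    row.zip(dataTypes)

  def main(args: Array[String]): Unit = {
    val schema = Schema(Array(Field("a", "Int"), Field("b", "String")))
    // Hoisted out of the hot loop: computed exactly once.
    val dataTypes = schema.fields.map(_.dataType)
    val rows = Seq(Seq[Any](1, "x"), Seq[Any](2, "y"))
    val converted = rows.map(convertReused(_, dataTypes))
    println(converted)
  }
}
```

Both variants produce the same pairs; the difference is purely allocation: `convertEager` builds a new mapped collection per row, while `convertReused` (like the `.view` rewrite above) avoids that intermediate allocation in the tight loop.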