Github user eatoncys commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23262#discussion_r240114106
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
    @@ -53,7 +53,7 @@ object RDDConversions {
         data.mapPartitions { iterator =>
           val numColumns = outputTypes.length
           val mutableRow = new GenericInternalRow(numColumns)
    -      val converters = outputTypes.map(CatalystTypeConverters.createToCatalystConverter)
    +      val converters = outputTypes.map(CatalystTypeConverters.createToCatalystConverter).toArray
    --- End diff --
    
    It has been modified, and the performance is the same as when converting to arrays.
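
    For reference, a minimal, self-contained sketch of the per-partition pattern
    around the changed line: the converters are built once per partition (here
    materialized with `.toArray` so the per-row loop does constant-time index
    lookups), then applied column by column to a reused mutable row. The method
    name, the `Product` iterator, and the loop body are an illustrative
    approximation, not the exact Spark source:

    import org.apache.spark.sql.catalyst.CatalystTypeConverters
    import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
    import org.apache.spark.sql.types.DataType

    // Sketch of the conversion for one partition: build one Catalyst converter
    // per output column up front, then reuse a single mutable row per record.
    def convertPartition(
        iterator: Iterator[Product],
        outputTypes: Seq[DataType]): Iterator[GenericInternalRow] = {
      val numColumns = outputTypes.length
      val mutableRow = new GenericInternalRow(numColumns)
      // .toArray avoids repeated Seq indexing inside the hot per-row loop below.
      val converters =
        outputTypes.map(CatalystTypeConverters.createToCatalystConverter).toArray
      iterator.map { r =>
        var i = 0
        while (i < numColumns) {
          mutableRow(i) = converters(i)(r.productElement(i))
          i += 1
        }
        mutableRow
      }
    }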


---
