[ https://issues.apache.org/jira/browse/SPARK-23258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-23258.
----------------------------------
    Resolution: Incomplete

> Should not split Arrow record batches based on row count
> --------------------------------------------------------
>
>                 Key: SPARK-23258
>                 URL: https://issues.apache.org/jira/browse/SPARK-23258
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.3.0
>            Reporter: Bryan Cutler
>            Priority: Major
>              Labels: bulk-closed
>
> Currently, when executing a scalar {{pandas_udf}} or calling {{toPandas()}}, the 
> Arrow record batches are split once the record count reaches a maximum value, 
> which is configured with "spark.sql.execution.arrow.maxRecordsPerBatch".  
> This is not ideal because the number of columns is not taken into account: 
> with many wide columns, a batch that is under the row limit can still be 
> large enough to cause OOMs.  An alternative approach could be to look at the 
> size of the Arrow buffers being used and cap each batch at a certain byte 
> size (a rough sketch of this idea follows below).
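
Below is a minimal sketch of what size-based batching could look like, written against the public pyarrow API rather than Spark's internals. The function name {{to_size_capped_batches}} and the {{max_bytes_per_batch}} parameter are hypothetical illustrations, not an actual Spark config; the idea is simply to derive a row chunk size from the measured Arrow buffer sizes instead of using a fixed row count.

{code:python}
# A rough sketch of size-based batching, assuming pyarrow is installed.
# `to_size_capped_batches` and `max_bytes_per_batch` are hypothetical names,
# not part of Spark or pyarrow.
import pandas as pd
import pyarrow as pa


def to_size_capped_batches(pdf, max_bytes_per_batch=64 * 1024 * 1024):
    """Yield Arrow record batches whose estimated size stays under a byte cap."""
    table = pa.Table.from_pandas(pdf)
    # Measure the actual Arrow buffer sizes across all columns and chunks.
    total_bytes = sum(
        buf.size
        for col in table.columns
        for chunk in col.chunks
        for buf in chunk.buffers()
        if buf is not None
    )
    # Estimate bytes per row, then turn the byte cap into a row chunk size.
    bytes_per_row = max(total_bytes // max(table.num_rows, 1), 1)
    rows_per_batch = max(max_bytes_per_batch // bytes_per_row, 1)
    # to_batches() splits on row count, but here the count reflects data size,
    # so adding columns shrinks the rows per batch automatically.
    for batch in table.to_batches(max_chunksize=rows_per_batch):
        yield batch


pdf = pd.DataFrame({"x": range(1_000_000), "y": [0.5] * 1_000_000})
for batch in to_size_capped_batches(pdf, max_bytes_per_batch=1 * 1024 * 1024):
    print(batch.num_rows, batch.nbytes)
{code}

With this approach, a DataFrame with many columns produces proportionally smaller row batches, which is the behavior the issue asks for; a fixed "maxRecordsPerBatch" cannot adapt that way.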


