[ 
https://issues.apache.org/jira/browse/SPARK-49547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruifeng Zheng resolved SPARK-49547.
-----------------------------------
    Fix Version/s: 4.1.0
       Resolution: Fixed

Issue resolved by pull request 52440
[https://github.com/apache/spark/pull/52440]

> Support returning RecordBatches from applyInArrow
> -------------------------------------------------
>
>                 Key: SPARK-49547
>                 URL: https://issues.apache.org/jira/browse/SPARK-49547
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark, SQL
>    Affects Versions: 4.0.0
>            Reporter: Adam Binford
>            Assignee: Adam Binford
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.1.0
>
>
> Currently the new `applyInArrow` method in PySpark uses a function that takes 
> a `pyarrow.Table` and returns a `pyarrow.Table`. This limits the function's 
> ability to scale, since the entire result set must fit in memory at once as a 
> `Table`. However, we have use cases where certain edge cases produce a large 
> amount of data that the function needs to return. Spark immediately turns the 
> returned `Table` into a series of batches anyway, so there is no reason not 
> to allow an iterator of `RecordBatch`es to be returned instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
