[ https://issues.apache.org/jira/browse/SPARK-26858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773284#comment-16773284 ]
Bryan Cutler commented on SPARK-26858:
--------------------------------------

{quote}
(One other possibility I was thinking about for batches without a schema is
that we just send Arrow batch by Arrow batch, deserialize each batch to a
RecordBatch instance, and then construct an Arrow Table, which is pretty
different from the Python side and hacky.)
{quote}

The point from my previous comment is that you can't deserialize a
RecordBatch and make a Table without a schema. I think these options make
the most sense (sketched in code at the end of this message):

1) Have the user define a schema beforehand; then you can just send
serialized RecordBatches back to the driver.

2) Send a complete Arrow stream (schema + RecordBatches) from each
executor, merge the streams in the driver JVM, discarding the duplicate
schemas, and send one final stream to R.

3) Same as (2), but instead of merging the streams, send each separate
stream through to the R driver, where they are read and concatenated into
one Table. I'm not sure the Arrow R API supports this yet, though.

> Vectorized gapplyCollect, Arrow optimization in native R function execution
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-26858
>                 URL: https://issues.apache.org/jira/browse/SPARK-26858
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SparkR, SQL
>    Affects Versions: 3.0.0
>            Reporter: Hyukjin Kwon
>            Assignee: Hyukjin Kwon
>            Priority: Major
>
> Unlike gapply, gapplyCollect requires additional ser/de steps because it
> can omit the schema, and Spark SQL doesn't know the return type before
> execution actually happens.
>
> In the original code path, this is done by using a binary schema. Once
> gapply is done (SPARK-26761), we can mimic this approach in vectorized
> gapply to support gapplyCollect.
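Below is a minimal, self-contained sketch of the three options using
pyarrow. Python is used only because the Arrow Python API is what the
quoted comment contrasts against; the actual SparkR/JVM code path differs,
and the schema, columns, and helper names here are made up for
illustration, not SparkR internals.

{code:python}
# Hypothetical pyarrow sketch of options (1)-(3) above; all names are
# illustrative only.
import pyarrow as pa
import pyarrow.ipc as ipc

schema = pa.schema([("id", pa.int64()), ("val", pa.float64())])

def executor_stream(lo, hi):
    # Simulate one executor's output for options (2)/(3): a complete IPC
    # stream, i.e. the schema followed by its RecordBatches.
    sink = pa.BufferOutputStream()
    with ipc.new_stream(sink, schema) as writer:
        batch = pa.record_batch(
            [pa.array(range(lo, hi)),
             pa.array([float(i) for i in range(lo, hi)])],
            schema=schema,
        )
        writer.write_batch(batch)
    return sink.getvalue()

streams = [executor_stream(0, 3), executor_stream(3, 6)]

# Option (1): with a user-declared schema, bare serialized RecordBatches
# suffice, because reading one back requires the schema to be supplied:
#   batch = ipc.read_record_batch(serialized_batch, schema)
# Without that schema there is no way to turn the bytes back into a Table,
# which is the point made in the comment above.

# Option (2): merge the per-executor streams in the driver, keep a single
# schema, discard the duplicates, and emit one final stream for R.
merged = pa.BufferOutputStream()
with ipc.new_stream(merged, schema) as writer:
    for buf in streams:
        with ipc.open_stream(buf) as reader:  # each stream repeats the schema
            for batch in reader:              # forward only the batches
                writer.write_batch(batch)

with ipc.open_stream(merged.getvalue()) as reader:
    print(reader.read_all().num_rows)  # 6

# Option (3): ship each stream through unchanged and concatenate on the
# receiving side (shown here with pyarrow; the open question above is
# whether the Arrow R API can do the same).
tables = [ipc.open_stream(buf).read_all() for buf in streams]
print(pa.concat_tables(tables).num_rows)  # 6
{code}

The trade-off, roughly: option (2) keeps the R side simple, since it only
ever sees one stream, at the cost of an extra pass over the batches in the
JVM; option (3) avoids that pass but depends on multi-stream support in the
Arrow R package.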