[ https://issues.apache.org/jira/browse/SPARK-26858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772266#comment-16772266 ]
Bryan Cutler commented on SPARK-26858:
--------------------------------------

[~hyukjin.kwon] actually {{pyarrow.Table.from_batches}} does require a schema, but it's a little tricky. In Python, a schema is always required to create a {{RecordBatch}} object. It then stays attached to the {{RecordBatch}}, so you can always get the schema from a batch and do things like {{Table.from_batches}} with just a list of batches. If you look at the full API [here|https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.from_batches], you can see the schema argument is optional and it just pulls the schema from the first batch.

Serialized record batches are a bit different: they do not contain the schema, which is why the stream protocol goes {{Schema, RecordBatch, RecordBatch, ...}}. So you can't just send serialized Arrow batches and then build a Table. Hopefully that makes sense :P

Back on your sequence diagram, at step (9), does R write the data frame bytes over the socket row by row? I'm wondering how it gets serialized, to see where Arrow might best help out.

> Vectorized gapplyCollect, Arrow optimization in native R function execution
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-26858
>                 URL: https://issues.apache.org/jira/browse/SPARK-26858
>             Project: Spark
>          Issue Type: Sub-task
>      Components: SparkR, SQL
>    Affects Versions: 3.0.0
>            Reporter: Hyukjin Kwon
>            Assignee: Hyukjin Kwon
>            Priority: Major
>
> Unlike gapply, gapplyCollect requires additional ser/de steps because it can omit the schema, and Spark SQL doesn't know the return type before execution actually happens.
> In the original code path, this is done via a binary schema. Once gapply is done (SPARK-26761), we can mimic that approach in vectorized gapply to support gapplyCollect.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org