[ https://issues.apache.org/jira/browse/SPARK-51931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
L. C. Hsieh resolved SPARK-51931.
---------------------------------
    Fix Version/s: 4.1.0
       Resolution: Fixed

Issue resolved by pull request 50735
[https://github.com/apache/spark/pull/50735]

> Add maxBytesPerOutputBatch to limit the number of bytes of Arrow output batch
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-51931
>                 URL: https://issues.apache.org/jira/browse/SPARK-51931
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 4.1.0
>            Reporter: L. C. Hsieh
>            Assignee: L. C. Hsieh
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.1.0
>
> When implementing a columnar-based operator for Spark that takes its input
> from an Arrow-based evaluation operator, the size in bytes of each output
> batch is currently unbounded. Such columnar-based operators sometimes need
> to cap the maximum size of an input batch, but there is no existing way to
> limit the batch size in bytes.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
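The idea behind a byte-based batch cap can be sketched without Spark or Arrow: accumulate rows into a batch until adding the next row would push the batch past the limit, then emit the batch and start a new one. The function name, size estimator, and limit below are illustrative only, not the actual Spark implementation from the linked pull request.

```python
# Hypothetical sketch of byte-capped batching, analogous to what a
# maxBytesPerOutputBatch limit does for Arrow output batches in Spark.
def batch_by_bytes(rows, sizeof, max_bytes):
    """Group `rows` into batches whose estimated total size stays at or
    under `max_bytes`; a single oversized row still forms its own batch."""
    batch, batch_bytes = [], 0
    for row in rows:
        size = sizeof(row)
        # Emit the current batch before it would exceed the cap.
        if batch and batch_bytes + size > max_bytes:
            yield batch
            batch, batch_bytes = [], 0
        batch.append(row)
        batch_bytes += size
    if batch:
        yield batch


# Example: strings of varying length, capped at 8 "bytes" per batch.
rows = ["aa", "bbb", "cc", "dddddddddd", "e"]
print(list(batch_by_bytes(rows, len, 8)))
# → [['aa', 'bbb', 'cc'], ['dddddddddd'], ['e']]
```

Note that a row larger than the cap (here "dddddddddd") still has to be emitted as its own batch; a byte limit bounds batch size but cannot split an individual row.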