HyukjinKwon commented on code in PR #36683:
URL: https://github.com/apache/spark/pull/36683#discussion_r882542412


##########
sql/core/src/main/scala/org/apache/spark/sql/api/python/PythonSQLUtils.scala:
##########
@@ -70,22 +70,22 @@ private[sql] object PythonSQLUtils extends Logging {
     SQLConf.get.timestampType == org.apache.spark.sql.types.TimestampNTZType
 
   /**
-   * Python callable function to read a file in Arrow stream format and create a [[RDD]]
-   * using each serialized ArrowRecordBatch as a partition.
+   * Python callable function to read a file in Arrow stream format and create an iterator
+   * of serialized ArrowRecordBatches.
    */
-  def readArrowStreamFromFile(session: SparkSession, filename: String): JavaRDD[Array[Byte]] = {
-    ArrowConverters.readArrowStreamFromFile(session, filename)
+  def readArrowStreamFromFile(filename: String): Iterator[Array[Byte]] = {

Review Comment:
   I intentionally used `Iterator` here so that Py4J does not copy the whole `Array` over to the Python driver side.
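
   Roughly, the distinction looks like the sketch below (a minimal, self-contained example, not the actual Spark code; the object and helper names are made up). Returning a fully materialized `Array` hands the complete collection to Py4J at once, whereas returning an `Iterator` keeps the data on the JVM side and lets the Python driver pull one serialized batch at a time:

   ```scala
   // Hypothetical sketch of eager vs. lazy return types across a Py4J gateway.
   // Each Array[Byte] element stands in for one serialized ArrowRecordBatch.
   object IteratorVsArraySketch {
     // Placeholder loader; in Spark this would read the Arrow stream file.
     private def loadBatches(filename: String): Seq[Array[Byte]] =
       Seq(Array[Byte](1, 2, 3), Array[Byte](4, 5, 6))

     // Eager: the entire collection is materialized on the JVM before it is
     // handed back through the gateway.
     def readAllBatches(filename: String): Array[Array[Byte]] =
       loadBatches(filename).toArray

     // Lazy: the caller gets an Iterator and consumes batches one by one, so
     // the intent is that only one batch needs to cross the gateway at a time.
     def readBatchesLazily(filename: String): Iterator[Array[Byte]] =
       loadBatches(filename).iterator

     def main(args: Array[String]): Unit = {
       val it = readBatchesLazily("dummy-file")
       it.foreach(batch => println(s"batch of ${batch.length} bytes"))
     }
   }
   ```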



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

