HyukjinKwon commented on code in PR #36683:
URL: https://github.com/apache/spark/pull/36683#discussion_r894062038


##########
python/pyspark/sql/pandas/conversion.py:
##########
@@ -596,7 +596,7 @@ def _create_from_pandas_with_arrow(
             ]
 
         # Slice the DataFrame to be batched
-        step = -(-len(pdf) // self.sparkContext.defaultParallelism)  # round int up
+        step = self._jconf.arrowMaxRecordsPerBatch()

Review Comment:
   BTW, just to clarify further: when the pandas DataFrame is small (below the threshold), the number of partitions remains the same (configured by `spark.sql.leafNodeDefaultParallelism`, which falls back to `sparkContext.defaultParallelism` when not set).
   
   The number of partitions only differs when the input DataFrame is large, which I think makes more sense in general.
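   
   To make the difference concrete, here is a minimal, hypothetical sketch (not the actual `conversion.py` code) comparing how many slices the old and new `step` values produce, using illustrative stand-ins for `sparkContext.defaultParallelism` and `spark.sql.execution.arrow.maxRecordsPerBatch`:
   
   ```python
   import pandas as pd
   
   # Illustrative values only; in Spark these come from the active conf
   # (sparkContext.defaultParallelism and
   # spark.sql.execution.arrow.maxRecordsPerBatch respectively).
   default_parallelism = 8
   arrow_max_records_per_batch = 10_000
   
   def num_arrow_batches(pdf: pd.DataFrame, step: int) -> int:
       """Number of slices produced when cutting pdf into chunks of `step` rows."""
       return -(-len(pdf) // step)  # ceiling division
   
   small = pd.DataFrame({"x": range(1_000)})
   large = pd.DataFrame({"x": range(1_000_000)})
   
   for name, pdf in [("small", small), ("large", large)]:
       old_step = -(-len(pdf) // default_parallelism)  # old: one slice per core
       new_step = arrow_max_records_per_batch          # new: cap rows per Arrow batch
       print(name, num_arrow_batches(pdf, old_step), num_arrow_batches(pdf, new_step))
       # small -> 8 slices before, 1 slice after
       # large -> 8 slices before, 100 slices after
   ```
   
   (This only counts the Arrow batches being sliced on the driver; as noted above, the partitioning of the resulting Spark DataFrame is a separate matter.)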



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

