[ https://issues.apache.org/jira/browse/SPARK-39931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enrico Minack updated SPARK-39931:
----------------------------------
    Description: 
Calling {{DataFrame.groupby(...).applyInPandas(...)}} for very small groups in 
PySpark is very slow. The reason is that for each group, PySpark creates a 
Pandas DataFrame and calls into the Python code. For very small groups, the 
overhead is huge; for large groups, it is smaller.
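
For illustration, a minimal sketch of the kind of call pattern being measured; the exact benchmark code is not part of this issue, so the group column, the group size, and the identity UDF below are only assumptions:

{code:python}
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# 10m rows, split into groups of `group_size` consecutive ids (hypothetical setup).
group_size = 2
df = spark.range(10 * 1000 * 1000) \
    .withColumn("group", (col("id") / group_size).cast("long"))

# Trivial per-group function: for tiny groups, the per-group Arrow conversion
# and Python invocation overhead dominates the runtime.
def identity(pdf: pd.DataFrame) -> pd.DataFrame:
    return pdf

df.groupby("group").applyInPandas(identity, schema=df.schema).count()
{code}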

Here is a benchmark (seconds to run {{groupBy(...).applyInPandas(...)}} over 10m rows):
||groupSize||Scala||pyspark.sql||pyspark.pandas||
|1024|8.9|16.2|7.8|
|512|9.4|26.7|9.8|
|256|9.3|44.5|20.2|
|128|9.5|82.7|48.8|
|64|9.5|158.2|91.9|
|32|9.6|319.8|207.3|
|16|9.6|652.6|261.5|
|8|9.5|1,376|663.0|
|4|9.8|2,656|1,168|
|2|10.4|5,412|2,456|
|1|11.3|9,491|4,642|

*Idea to overcome this* is to call into the Python side with a Pandas DataFrame 
that potentially contains multiple groups, then perform a Pandas 
{{DataFrame.groupby(...).apply(...)}} there, or provide the {{DataFrameGroupBy}} to 
the Python method. With large groups, that Pandas DataFrame has all rows of a 
single group; with small groups, it contains many groups. This should improve 
efficiency.
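
A rough sketch of what the Python-side handling could look like under this idea; the {{apply_batched}} helper and the batch layout below are purely illustrative, not an existing Spark API:

{code:python}
import pandas as pd

# User-supplied per-group function, same contract as today's applyInPandas.
def user_func(pdf: pd.DataFrame) -> pd.DataFrame:
    return pdf

# Hypothetical worker-side helper: instead of one Python round trip per group,
# the worker would receive a batch spanning many small groups and let Pandas do
# the per-group dispatch, amortizing the JVM <-> Python overhead.
def apply_batched(batch: pd.DataFrame, group_cols):
    return batch.groupby(group_cols, group_keys=False).apply(user_func)

# One batch containing four tiny groups.
batch = pd.DataFrame({"group": [1, 1, 2, 3, 3, 4], "value": [1, 2, 3, 4, 5, 6]})
print(apply_batched(batch, ["group"]))
{code}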

  was:
Calling {{DataFrame.groupby(...).applyInPandas(...)}} for very small groups in 
PySpark is very slow. The reason is that for each group, PySpark creates a 
Pandas DataFrame and calls into the Python code. For very small groups, the 
overhead is huge; for large groups, it is smaller.

Here is a benchmark (seconds to run {{groupBy(...).applyInPandas(...)}} over 10m rows):
||groupSize||Scala||pyspark.sql||pyspark.pandas||
|1024|8.9|16.2|7.8|
|512|9.4|26.7|9.8|
|256|9.3|44.5|20.2|
|128|9.5|82.7|48.8|
|64|9.5|158.2|91.9|
|32|9.6|319.8|207.3|
|16|9.6|652.6|261.5|
|8|9.5|1,376|663.0|
|4|9.8|2,656|1,168|
|2|10.4|5,412|2,456|
|1|11.3|8,162|4,642|

*Idea to overcome this* is to call into the Python side with a Pandas DataFrame 
that potentially contains multiple groups, then perform a Pandas 
{{DataFrame.groupby(...).apply(...)}} there, or provide the {{DataFrameGroupBy}} to 
the Python method. With large groups, that Pandas DataFrame has all rows of a 
single group; with small groups, it contains many groups. This should improve 
efficiency.


> Improve performance of applyInPandas for very small groups
> ----------------------------------------------------------
>
>                 Key: SPARK-39931
>                 URL: https://issues.apache.org/jira/browse/SPARK-39931
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.4.0
>            Reporter: Enrico Minack
>            Priority: Major
>
> Calling {{DataFrame.groupby(...).applyInPandas(...)}} for very small groups 
> in PySpark is very slow. The reason is that for each group, PySpark creates a 
> Pandas DataFrame and calls into the Python code. For very small groups, the 
> overhead is huge; for large groups, it is smaller.
> Here is a benchmark (seconds to run {{groupBy(...).applyInPandas(...)}} over 
> 10m rows):
> ||groupSize||Scala||pyspark.sql||pyspark.pandas||
> |1024|8.9|16.2|7.8|
> |512|9.4|26.7|9.8|
> |256|9.3|44.5|20.2|
> |128|9.5|82.7|48.8|
> |64|9.5|158.2|91.9|
> |32|9.6|319.8|207.3|
> |16|9.6|652.6|261.5|
> |8|9.5|1,376|663.0|
> |4|9.8|2,656|1,168|
> |2|10.4|5,412|2,456|
> |1|11.3|9,491|4,642|
> *Idea to overcome this* is to call into the Python side with a Pandas DataFrame 
> that potentially contains multiple groups, then perform a Pandas 
> {{DataFrame.groupby(...).apply(...)}} there, or provide the {{DataFrameGroupBy}} to 
> the Python method. With large groups, that Pandas DataFrame has all rows of 
> a single group; with small groups, it contains many groups. This should 
> improve efficiency.


