HyukjinKwon commented on a change in pull request #28089: 
[SPARK-30921][PySpark] Predicates on python udf should not be pushdown through 
Aggregate
URL: https://github.com/apache/spark/pull/28089#discussion_r402846556
 
 

 ##########
 File path: python/pyspark/sql/tests/test_pandas_udf_grouped_agg.py
 ##########
 @@ -491,6 +491,28 @@ def max_udf(v):
             agg2 = self.spark.sql("select max_udf(id) from table")
             assert_frame_equal(agg1.toPandas(), agg2.toPandas())
 
+    def test_no_predicate_pushdown_through(self):
+        from pyspark.sql.functions import monotonically_increasing_id, explode_outer
+        import numpy as np
+
+        @pandas_udf('float', PandasUDFType.GROUPED_AGG)
+        def mean(x):
+            return np.mean(x)
+
+        df = self.spark.createDataFrame([
+            Row(foo=[Row(bar=42), Row(bar=43), Row(bar=44)]),
+        ])
+
+        df_with_id = df.withColumn('id', monotonically_increasing_id())
+        exploded = df_with_id.select('id', explode_outer('foo').alias('foos'))
 
 Review comment:
  @viirya, is it necessary to use `monotonically_increasing_id` and `explode_outer`? I know the test is extracted from the reported case, but I think we should minimise it for the fix.
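For context on what the test's UDF actually computes: the `GROUPED_AGG` pandas UDF body in the diff is just `np.mean`, which reduces each group's column to a single scalar. A minimal sketch of that per-group reduction in plain NumPy (hypothetical data, no Spark session; the group values below are made up and are not from the reported case):

```python
import numpy as np

# Hypothetical rows standing in for the exploded `bar` values of one `id` group.
groups = {0: [42.0, 43.0, 44.0]}

# Mirror the GROUPED_AGG pandas_udf body: np.mean collapses each group's
# values into one scalar per group, which is why a predicate on the UDF's
# result can only be evaluated after the aggregation, not pushed through it.
means = {group_id: float(np.mean(values)) for group_id, values in groups.items()}
```

This is only an illustration of the aggregation semantics the test exercises, not a replacement for the Spark-side test.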

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
