Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20211#discussion_r160860320
  
    --- Diff: python/pyspark/sql/group.py ---
    @@ -233,6 +233,27 @@ def apply(self, udf):
             |  2| 1.1094003924504583|
             +---+-------------------+
     
    +        Notes on grouping column:
    --- End diff --
    
    Yup, I saw this use case as described in the JIRA, and I get that the specific case can be simplified; however, I am not sure it's straightforward to end users.
    
    For example, if I use `pandas_udf`, I would simply expect the returned schema to match what is described in `returnType`. `pandas_udf` already needs some background knowledge, and I think we should make it as simple as we can.
    
    Guaranteeing the grouping columns in the output might be convenient in some cases, but it could also feel like magic happening inside.
    
    I would prefer to let the UDF specify the grouping columns itself, to make this more straightforward.
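    In that model, whether the grouping column appears in the output is entirely up to the UDF (again just a sketch, reusing `df` from above; `mean_only` is a made-up name):

```python
import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType

# If the user does not want the grouping column, they simply leave it
# out of both the returnType and the returned pandas DataFrame.
@pandas_udf("v double", PandasUDFType.GROUPED_MAP)
def mean_only(pdf):
    return pd.DataFrame({"v": [pdf.v.mean()]})

df.groupby("id").apply(mean_only).show()
```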

