GitHub user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20211#discussion_r161119843
  
    --- Diff: python/pyspark/sql/group.py ---
    @@ -233,6 +233,27 @@ def apply(self, udf):
             |  2| 1.1094003924504583|
             +---+-------------------+
     
    +        Notes on grouping column:
    --- End diff ---
    
    @felixcheung, WDYT?
    
    To cut the context short, this is a Pandas group map API like `gapply` in R (not a Pandas scalar UDF).
    
    Its current usage is as follows:
    
    ```python
    def foo(pdf):
        # pdf is the group's data as a Pandas DataFrame; the function
        # must return a Pandas DataFrame matching returnType.
        return pdf

    pudf = pandas_udf(f=foo, returnType="id int, v double", functionType=GROUP_MAP)
    df.groupby(group_column).apply(pudf)
    ```
    
    Here, `'id int, v double'` describes the output schema, and the input `pdf` is the group's data as a Pandas DataFrame.
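    
    For concreteness, here is a minimal end-to-end sketch of the current behaviour. The sample data and the `subtract_mean` function are my own illustration, and I'm writing the function type as `PandasUDFType.GROUPED_MAP` rather than the `GROUP_MAP` shorthand above; the exact constant name is an assumption:
    
    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf, PandasUDFType

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)], ("id", "v"))

    # Subtract the per-group mean from each value. Note that under the
    # current contract the UDF itself has to carry the grouping column
    # through to the output to satisfy the declared schema.
    @pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
    def subtract_mean(pdf):
        return pdf.assign(v=pdf.v - pdf.v.mean())

    df.groupby("id").apply(subtract_mean).show()
    ```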
    
    As @icexelloss described above in the new proposal, and looking at `gapply` in R again at a glance, it seems to make sense that we do:
    
    ```python
    def foo(key, pdf):
        # key is the grouping key for this group;
        # pdf is the group's data as a Pandas DataFrame.
        return pdf

    pudf = pandas_udf(f=foo, returnType="id int, v double", functionType=GROUP_MAP)
    df.groupby(group_column).apply(pudf)
    ```
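    
    As a sketch of how the proposed signature could be used in practice: the `(key, pdf)` dispatch is the proposal itself, not something the current implementation supports, and I'm assuming the key would arrive as a tuple of the grouping values (that detail isn't settled here). The point is that the UDF could rebuild the grouping column from the key instead of relying on it being present in `pdf`:
    
    ```python
    import pandas as pd
    from pyspark.sql.functions import pandas_udf, PandasUDFType

    # Reuses the `df` from the sketch above; emits one row per group,
    # reconstructing the grouping column from the key argument.
    @pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
    def mean_per_group(key, pdf):
        return pd.DataFrame({"id": [key[0]], "v": [pdf.v.mean()]})

    df.groupby("id").apply(mean_per_group).show()
    ```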


