[ https://issues.apache.org/jira/browse/SPARK-16258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15359794#comment-15359794 ]

Shivaram Venkataraman commented on SPARK-16258:
-----------------------------------------------

Thanks [~timhunter] -- So I see two ways to do this. One is to introduce a new 
optional flag in `gapply` that indicates whether the key should be appended. The 
other is to introduce a new primitive. I think the former is better, as it keeps 
our public API small. 
[~Narine] Any thoughts on this?

> Automatically append the grouping keys in SparkR's gapply
> ---------------------------------------------------------
>
>                 Key: SPARK-16258
>                 URL: https://issues.apache.org/jira/browse/SPARK-16258
>             Project: Spark
>          Issue Type: Improvement
>          Components: SparkR
>            Reporter: Timothy Hunter
>
> While working on the group apply function for python [1], we found it easier 
> to depart from SparkR's gapply function in the following way:
>  - the keys are appended by default to the spark dataframe being returned
>  - the output schema that the user provides is the schema of the R data 
> frame and does not include the keys
> Here are the reasons for doing so:
>  - in most cases, users will want to know the key associated with a result, 
> so appending the key is the sensible default
>  - most functions in the SQL interface and in MLlib append columns, and 
> gapply departs from this philosophy
>  - for the cases when users do not need it, the appended key adds only a 
> small fraction of the computation time and of the output size
>  - from a formal perspective, it makes calling gapply fully transparent to 
> the type of the key: it is easier to build a function with gapply because it 
> does not need to know anything about the key
> This ticket proposes to change SparkR's gapply function to follow the same 
> convention as Python's implementation.
> cc [~Narine] [~shivaram]
> [1] 
> https://github.com/databricks/spark-sklearn/blob/master/python/spark_sklearn/group_apply.py
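
To make the proposed convention concrete, here is a minimal, self-contained Python sketch (the `group_apply` helper and column names below are illustrative, not Spark's actual API): the user function receives only the non-key columns, so it needs no knowledge of the key, and the key is appended to every output row afterwards.

```python
from itertools import groupby
from operator import itemgetter

def group_apply(rows, key_col, func):
    """Apply `func` to each group and append the grouping key to the output.

    `func` sees only the non-key columns, illustrating the ticket's
    transparency argument: the caller's function does not need to know
    anything about the key. The key column is appended to every result
    row afterwards, mirroring the proposed default behavior.
    """
    rows = sorted(rows, key=itemgetter(key_col))
    out = []
    for key, group in groupby(rows, key=itemgetter(key_col)):
        # Strip the key column before calling the user function.
        stripped = [{c: v for c, v in r.items() if c != key_col} for r in group]
        for result in func(stripped):
            result[key_col] = key  # append the grouping key to the result
            out.append(result)
    return out

rows = [{"g": "a", "x": 1}, {"g": "a", "x": 3}, {"g": "b", "x": 5}]
means = group_apply(rows, "g",
                    lambda rs: [{"mean_x": sum(r["x"] for r in rs) / len(rs)}])
# means == [{"mean_x": 2.0, "g": "a"}, {"mean_x": 5.0, "g": "b"}]
```

Note that the output schema the user reasons about (just `mean_x` here) never mentions the key; the key column appears in the result anyway, which is the behavior this ticket proposes for SparkR's gapply.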



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
