Github user NarineK commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14090#discussion_r70202321

    --- Diff: docs/sparkr.md ---
    @@ -306,6 +306,64 @@ head(ldf, 3)
     {% endhighlight %}
     </div>

    +#### Run a given function on a large dataset grouping by input column(s) using `gapply` or `gapplyCollect`
    +
    +##### gapply
    +Apply a function to each group of a `SparkDataFrame`. The function is applied to each group of the `SparkDataFrame` and should have only two parameters: the grouping key and an R `data.frame` corresponding to
    +that key. The groups are chosen from the `SparkDataFrame`'s column(s).
    +The output of the function should be a `data.frame`. The schema specifies the row format of the resulting
    +`SparkDataFrame`. It must match the R function's output.
    --- End diff --

    Thanks @shivaram. Does the following mapping look fine to have in the table?

    ```
    R          Spark
    ---------  ---------
    byte       byte
    integer    integer
    float      float
    double     double
    numeric    double
    character  string
    string     string
    binary     binary
    raw        binary
    logical    boolean
    timestamp  timestamp
    date       date
    array      array
    map        map
    struct     struct
    ```
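    For context, a minimal sketch of how the `gapply` API described in the quoted doc text could be invoked. The dataset (`mtcars`), column names, and schema below are illustrative only and are not part of the PR:

    ```r
    # Assumes an active SparkR session, e.g. sparkR.session()
    df <- createDataFrame(mtcars)

    # Schema of the data.frame returned by the function below;
    # it must match the R function's output (grouping key + computed column).
    schema <- structType(structField("cyl", "double"),
                         structField("max_mpg", "double"))

    # Compute the maximum mpg for each number of cylinders.
    result <- gapply(df,
                     "cyl",
                     function(key, x) {
                       data.frame(key, max(x$mpg))
                     },
                     schema)

    head(collect(arrange(result, "cyl")))
    ```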