Github user felixcheung commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13786#discussion_r67797294
  
    --- Diff: R/pkg/R/group.R ---
    @@ -129,6 +129,48 @@ methods <- c("avg", "max", "mean", "min", "sum")
     # These are not exposed on GroupedData: "kurtosis", "skewness", "stddev", "stddev_samp", "stddev_pop",
     # "variance", "var_samp", "var_pop"
     
    +#' Pivot a column of the GroupedData and perform the specified aggregation.
    +#'
    +#' Pivot a column of the GroupedData and perform the specified aggregation.
    +#' There are two versions of the pivot function: one that requires the caller to specify the list
    +#' of distinct values to pivot on, and one that does not. The latter is more concise but less
    +#' efficient, because Spark needs to first compute the list of distinct values internally.
    +#'
    +#' @param x a GroupedData object
    +#' @param colname a column name
    +#' @param values a value or a list/vector of distinct values for the output columns.
    +#' @return GroupedData object
    +#' @rdname pivot
    +#' @name pivot
    +#' @export
    +#' @examples
    +#' \dontrun{
    +#' df <- createDataFrame(data.frame(
    +#'     earnings = c(10000, 10000, 11000, 15000, 12000, 20000, 21000, 22000),
    +#'     course = c("R", "Python", "R", "Python", "R", "Python", "R", "Python"),
    +#'     year = c(2013, 2013, 2014, 2014, 2015, 2015, 2016, 2016)
    +#' ))
    +#' collect(sum(pivot(groupBy(df, "year"), "course"), "earnings"))
    +#' collect(sum(pivot(groupBy(df, "year"), "course", "R"), "earnings"))
    +#' collect(sum(pivot(groupBy(df, "year"), "course", c("Python", "R")), 
"earnings"))
    +#' collect(sum(pivot(groupBy(df, "year"), "course", list("Python", "R")), 
"earnings"))
    --- End diff ---
    
    ... and change one from `sum(` to `mean(` for example.
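
    For reference, a minimal sketch of that suggestion (assuming the `pivot()` signature added in this PR and a running SparkR session); it reuses the data frame from the @examples block and only swaps the aggregation in one call from `sum` to `mean`, which is listed among the GroupedData methods above:

        library(SparkR)
        sparkR.session()

        df <- createDataFrame(data.frame(
          earnings = c(10000, 10000, 11000, 15000, 12000, 20000, 21000, 22000),
          course = c("R", "Python", "R", "Python", "R", "Python", "R", "Python"),
          year = c(2013, 2013, 2014, 2014, 2015, 2015, 2016, 2016)
        ))

        # same pivot as in the examples, aggregated with mean() instead of sum()
        collect(mean(pivot(groupBy(df, "year"), "course", c("Python", "R")), "earnings"))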

