Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7841#discussion_r44500928
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/GroupedData.scala ---
    @@ -273,6 +280,60 @@ class GroupedData protected[sql](
       def sum(colNames: String*): DataFrame = {
         aggregateNumericColumns(colNames : _*)(Sum)
       }
    +
    +  /**
    +    * (Scala-specific) Pivots a column of the current [[DataFrame]] and performs the specified
    +    * aggregation.
    +    * {{{
    +    *   // Compute the sum of earnings for each year by course with each course as a separate column
    +    *   df.groupBy($"year").pivot($"course", "dotNET", 
"Java").agg(sum($"earnings"))
    +    *   // Or without specifying column values
    +    *   df.groupBy($"year").pivot($"course").agg(sum($"earnings"))
    +    * }}}
    +    * @param pivotColumn Column to pivot
    +    * @param values Optional list of values of pivotColumn that will be translated to columns in the
    +    *               output data frame. If values are not provided the method will do an immediate
    +    *               call to .distinct() on the pivot column.
    +    * @since 1.6.0
    +    */
    +  @scala.annotation.varargs
    +  def pivot(pivotColumn: Column, values: String*): GroupedData = groupType match {
    +    case _: GroupedData.PivotType =>
    +      throw new UnsupportedOperationException("repeated pivots are not 
supported")
    +    case GroupedData.GroupByType =>
    +      val pivotValues = if (values.nonEmpty) {
    +        values
    +      } else {
    +        // Get the distinct values of the column and sort them so it's consistent
    +        df.select(pivotColumn.cast(StringType))
    +          .distinct()
    +          .map(_.getString(0))
    +          .collect().sorted.toSeq
    +      }
    +      new GroupedData(df, groupingExprs, GroupedData.PivotType(pivotColumn.expr, pivotValues))
    +    case _ =>
    +      throw new UnsupportedOperationException("pivot is only supported 
after a groupBy")
    +  }
    +
    +  /**
    +    * Pivots a column of the current [[DataFrame]] and performs the specified aggregation.
    +    * {{{
    +    *   // Compute the sum of earnings for each year by course with each course as a separate column
    +    *   df.groupBy("year").pivot("course", "dotNET", 
"Java").sum("earnings")
    +    *   // Or without specifying column values
    +    *   df.groupBy("year").pivot("course").sum("earnings")
    +    * }}}
    +    * @param pivotColumn Column to pivot
    +    * @param values Optional list of values of pivotColumn that will be translated to columns in the
    +    *               output data frame. If values are not provided the method will do an immediate
    +    *               call to .distinct() on the pivot column.
    +    * @since 1.6.0
    +    */
    +  @scala.annotation.varargs
    +  def pivot(pivotColumn: String, values: String*): GroupedData = {
    +    val resolvedPivotColumn = Column(df.resolve(pivotColumn))
    +    pivot(resolvedPivotColumn, values: _*)
    +  }
    --- End diff --
    
    For the first version, maybe we can just have the API use `Column` as the
    argument type? (I am thinking about the type of values. I am not sure String
    is the right type).
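    
    A rough sketch of what that alternative could look like, purely for discussion
    (hypothetical signature, not part of this diff; it assumes `GroupedData.PivotType`
    could carry the pivot values as expressions rather than strings):
    
    ```scala
    // Hypothetical variant of the method above: take the pivot values as Columns
    // instead of Strings, so non-string pivot columns keep their original types.
    // Assumes PivotType(column: Expression, values: Seq[Expression]) exists.
    @scala.annotation.varargs
    def pivot(pivotColumn: Column, values: Column*): GroupedData = groupType match {
      case _: GroupedData.PivotType =>
        throw new UnsupportedOperationException("repeated pivots are not supported")
      case GroupedData.GroupByType =>
        // Keep the caller-supplied expressions as-is; no cast to StringType needed.
        new GroupedData(df, groupingExprs,
          GroupedData.PivotType(pivotColumn.expr, values.map(_.expr)))
      case _ =>
        throw new UnsupportedOperationException("pivot is only supported after a groupBy")
    }
    
    // Callers could then pass typed literals, e.g.
    //   df.groupBy($"year").pivot($"course", lit("dotNET"), lit("Java")).agg(sum($"earnings"))
    ```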


