Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14753#discussion_r76225656
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/interfaces.scala ---
    @@ -389,3 +389,144 @@ abstract class DeclarativeAggregate
         def right: AttributeReference = inputAggBufferAttributes(aggBufferAttributes.indexOf(a))
       }
     }
    +
    +/**
    + * Aggregation function which allows an **arbitrary** user-defined Java object to be used as the
    + * internal aggregation buffer object.
    + *
    + * {{{
    + *                aggregation buffer for normal aggregation function `avg`
    + *                    |
    + *                    v
    + *                  +--------------+---------------+-----------------------------------+
    + *                  |  sum1 (Long) | count1 (Long) | generic user-defined java objects |
    + *                  +--------------+---------------+-----------------------------------+
    + *                                                     ^
    + *                                                     |
    + *                    Aggregation buffer object for `TypedImperativeAggregate` aggregation function
    + * }}}
    + *
    + * Workflow (Partial mode aggregate at Mapper side, and Final mode aggregate at Reducer side):
    + *
    + * Stage 1: Partial aggregate at Mapper side:
    + *
    + *  1. The framework calls `createAggregationBuffer(): T` to create an empty internal aggregation
    + *     buffer object.
    + *  2. Upon each input row, the framework calls
    + *     `update(buffer: T, input: InternalRow): Unit` to update the aggregation buffer object T.
    + *  3. After processing all rows of the current group (group by key), the framework will serialize
    + *     the aggregation buffer object T to the storage format (Array[Byte]) and persist the
    + *     Array[Byte] to disk if needed.
    + *  4. The framework moves on to the next group, until all groups have been processed.
    + *
    + * A shuffle exchange then moves the data to the Reducer tasks...
    + *
    + * Stage 2: Final mode aggregate at Reducer side:
    + *
    + *  1. The framework calls `createAggregationBuffer(): T` to create an empty internal aggregation
    + *     buffer object (type T) for merging.
    + *  2. For each aggregation output of Stage 1, the framework de-serializes the storage
    + *     format (Array[Byte]) and produces one input aggregation object (type T).
    + *  3. For each input aggregation object, the framework calls `merge(buffer: T, input: T): Unit`
    + *     to merge the input aggregation object into the aggregation buffer object.
    + *  4. After processing all input aggregation objects of the current group (group by key), the
    + *     framework calls method `eval(buffer: T)` to generate the final output for this group.
    + *  5. The framework moves on to the next group, until all groups have been processed.
    + *
    + * NOTE: SQL with TypedImperativeAggregate functions is planned in sort-based aggregation,
    + * instead of hash-based aggregation, as TypedImperativeAggregate uses BinaryType as the
    + * aggregation buffer's storage format, which is not supported by hash-based aggregation.
    + * Hash-based aggregation only supports aggregation buffers of mutable types (like LongType and
    + * IntType) that have a fixed length and can be mutated in place in UnsafeRow.
    + */
    +abstract class TypedImperativeAggregate[T] extends ImperativeAggregate {
    --- End diff --
    
    Isn't this the wrong way around? Isn't `ImperativeAggregate` the untyped
    version of a `TypedImperativeAggregate`? Much like `Dataset` and `DataFrame`?
    
    I know this has been done for engineering purposes, but I still wonder if 
we shouldn't reverse the hierarchy here.
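    
    For reference, here is a standalone sketch of the two-stage workflow the quoted
    scaladoc describes. The trait and every name/signature below are simplified
    stand-ins invented for illustration (the real `TypedImperativeAggregate` extends
    `ImperativeAggregate` and carries more members), not the actual API in this PR:
    
    ```scala
    import java.nio.ByteBuffer
    
    // Simplified stand-in for the proposed typed aggregate contract.
    trait TypedAggSketch[T] {
      def createAggregationBuffer(): T          // empty in-memory buffer object
      def update(buffer: T, input: Long): Unit  // Stage 1: fold one input value
      def merge(buffer: T, input: T): Unit      // Stage 2: merge a partial buffer
      def eval(buffer: T): Any                  // Stage 2: final result for a group
      def serialize(buffer: T): Array[Byte]     // storage format for shuffle/spill
      def deserialize(bytes: Array[Byte]): T
    }
    
    // Example: average, keeping (sum, count) in an arbitrary mutable Java-style object.
    final class SumCount(var sum: Long = 0L, var count: Long = 0L)
    
    object TypedAvgSketch extends TypedAggSketch[SumCount] {
      override def createAggregationBuffer(): SumCount = new SumCount()
    
      override def update(buffer: SumCount, input: Long): Unit = {
        buffer.sum += input
        buffer.count += 1
      }
    
      override def merge(buffer: SumCount, input: SumCount): Unit = {
        buffer.sum += input.sum
        buffer.count += input.count
      }
    
      override def eval(buffer: SumCount): Any =
        if (buffer.count == 0) null else buffer.sum.toDouble / buffer.count
    
      // Buffer object -> Array[Byte] at the shuffle/spill boundary.
      override def serialize(buffer: SumCount): Array[Byte] =
        ByteBuffer.allocate(16).putLong(buffer.sum).putLong(buffer.count).array()
    
      override def deserialize(bytes: Array[Byte]): SumCount = {
        val bb = ByteBuffer.wrap(bytes)
        new SumCount(bb.getLong, bb.getLong)
      }
    }
    
    object WorkflowDemo extends App {
      // Stage 1 (mapper side): one partial buffer per mapper, serialized for the shuffle.
      val partial1 = TypedAvgSketch.createAggregationBuffer()
      Seq(1L, 2L, 3L).foreach(TypedAvgSketch.update(partial1, _))
      val partial2 = TypedAvgSketch.createAggregationBuffer()
      Seq(4L, 5L).foreach(TypedAvgSketch.update(partial2, _))
      val shuffled = Seq(TypedAvgSketch.serialize(partial1), TypedAvgSketch.serialize(partial2))
    
      // Stage 2 (reducer side): deserialize each partial buffer, merge, then eval.
      val merged = TypedAvgSketch.createAggregationBuffer()
      shuffled.map(TypedAvgSketch.deserialize).foreach(TypedAvgSketch.merge(merged, _))
      println(TypedAvgSketch.eval(merged))  // 3.0
    }
    ```
    
    The buffer here is a plain mutable object, which is the point of the proposal: it
    only has to become `Array[Byte]` (BinaryType) at the shuffle/spill boundary, which
    is also why it fits sort-based rather than hash-based aggregation.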

