Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18113#discussion_r154289888
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/typedaggregators.scala ---
    @@ -99,3 +94,91 @@ class TypedAverage[IN](val f: IN => Double) extends Aggregator[IN, (Double, Long
         toColumn.asInstanceOf[TypedColumn[IN, java.lang.Double]]
       }
     }
    +
    +class TypedMinDouble[IN](val f: IN => Double) extends Aggregator[IN, Double, Double] {
    +  override def zero: Double = Double.PositiveInfinity
    +  override def reduce(b: Double, a: IN): Double = math.min(b, f(a))
    +  override def merge(b1: Double, b2: Double): Double = math.min(b1, b2)
    +  override def finish(reduction: Double): Double = {
    +    if (Double.PositiveInfinity == reduction) {
    --- End diff --
    
    After some more thought, option 3 is not reasonable, as throwing an exception is not a good idea in big data, especially in the last stage of a long-running job.

    Option 2 is weird, as it doesn't follow either Java/Scala or SQL semantics.

    Let's go with option 1.
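
    For illustration only, here is a rough sketch of how the `finish` branch could look if option 1 means following SQL `MIN` semantics and returning null for an empty input. The class name, the boxed `java.lang.Double` output type, and that reading of "option 1" are assumptions for this sketch, not the final shape of the PR:

        import org.apache.spark.sql.{Encoder, Encoders}
        import org.apache.spark.sql.expressions.Aggregator

        // Hypothetical sketch only: NullSafeTypedMinDouble is not a name used in this
        // PR, and it assumes "option 1" means SQL-style semantics, i.e. the min of an
        // empty input is null rather than a sentinel value or an exception.
        class NullSafeTypedMinDouble[IN](val f: IN => Double)
          extends Aggregator[IN, Double, java.lang.Double] {

          // Sentinel meaning "no value seen yet".
          override def zero: Double = Double.PositiveInfinity

          override def reduce(b: Double, a: IN): Double = math.min(b, f(a))

          override def merge(b1: Double, b2: Double): Double = math.min(b1, b2)

          // An empty input still holds the sentinel, so return null like SQL's MIN.
          override def finish(reduction: Double): java.lang.Double =
            if (reduction == Double.PositiveInfinity) null
            else java.lang.Double.valueOf(reduction)

          override def bufferEncoder: Encoder[Double] = Encoders.scalaDouble

          override def outputEncoder: Encoder[java.lang.Double] = Encoders.DOUBLE
        }

    One caveat of the sentinel approach shown above: an input whose actual minimum really is `Double.PositiveInfinity` would also be reported as null, which is part of why the choice between these options matters.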


---
