[ https://issues.apache.org/jira/browse/FLINK-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15324689#comment-15324689 ]

ASF GitHub Bot commented on FLINK-3971:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/2049#discussion_r66638477
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/runtime/aggregate/MaxAggregate.scala ---
    @@ -74,7 +82,7 @@ class ByteMaxAggregate extends MaxAggregate[Byte] {
       override def intermediateDataType = Array(BasicTypeInfo.BYTE_TYPE_INFO)
     
       override def initiate(intermediate: Row): Unit = {
    -    intermediate.setField(maxIndex, Byte.MinValue)
    +    intermediate.setField(maxIndex, null.asInstanceOf[Byte])
    --- End diff --
    
    No cast necessary
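
For reference, a minimal sketch of what the suggested change amounts to, assuming
Row.setField accepts Any so that the null literal needs no cast (a sketch only,
not the final patch):

    override def initiate(intermediate: Row): Unit = {
      // null marks "no value seen yet" instead of a sentinel such as Byte.MinValue
      intermediate.setField(maxIndex, null)
    }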


> Aggregates handle null values incorrectly.
> ------------------------------------------
>
>                 Key: FLINK-3971
>                 URL: https://issues.apache.org/jira/browse/FLINK-3971
>             Project: Flink
>          Issue Type: Bug
>          Components: Table API
>    Affects Versions: 1.1.0
>            Reporter: Fabian Hueske
>            Assignee: GaoLun
>            Priority: Critical
>             Fix For: 1.1.0
>
>
> Table API and SQL aggregates are supposed to ignore null values, e.g.,
> {{sum(1,2,null,4)}} should return {{7}}.
> The current implementation is correct if at least one valid value is
> present, but it is incorrect if only null values are aggregated:
> {{sum(null, null, null)}} should return {{null}} instead of {{0}}.
> Currently, only the Count aggregate handles the null-values-only case
> correctly.
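
A minimal, Flink-independent Scala sketch of the intended semantics (the
NullAwareSum object and its sum helper are illustrative names, not Flink API):
null inputs are skipped while accumulating, and if no non-null value was seen
the result stays null rather than a type-specific default such as 0 or
Byte.MinValue.

    // Illustrative only: mirrors the null semantics described in the issue.
    object NullAwareSum {
      def sum(values: Seq[Integer]): Integer = {
        var acc: Integer = null                  // start from null, not 0
        for (v <- values if v != null) {         // ignore null inputs
          acc = if (acc == null) v else Integer.valueOf(acc + v)
        }
        acc                                      // still null if only nulls were seen
      }
    }

    // NullAwareSum.sum(Seq(1, 2, null, 4))    == 7
    // NullAwareSum.sum(Seq(null, null, null)) == null

Boxed java.lang.Integer values are used here to mirror the nullable Row fields of
the runtime aggregates; in plain Scala the same behaviour could also be written as
values.flatten.reduceOption(_ + _) over a Seq[Option[Int]].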


