[ https://issues.apache.org/jira/browse/FLINK-3971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327083#comment-15327083 ]
ASF GitHub Bot commented on FLINK-3971:
---------------------------------------

Github user gallenvara commented on a diff in the pull request:

    https://github.com/apache/flink/pull/2049#discussion_r66765029

    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/runtime/aggregate/MaxAggregate.scala ---
    @@ -48,8 +57,16 @@ abstract class MaxAggregate[T](implicit ord: Ordering[T]) extends Aggregate[T] {
       override def merge(intermediate: Row, buffer: Row): Unit = {
    --- End diff --

    Yes, you are right. The cases where `partialValue == null` should be ignored directly.

> Aggregates handle null values incorrectly.
> ------------------------------------------
>
>                 Key: FLINK-3971
>                 URL: https://issues.apache.org/jira/browse/FLINK-3971
>             Project: Flink
>          Issue Type: Bug
>          Components: Table API
>    Affects Versions: 1.1.0
>            Reporter: Fabian Hueske
>            Assignee: GaoLun
>            Priority: Critical
>             Fix For: 1.1.0
>
>
> Table API and SQL aggregates are supposed to ignore null values, e.g.,
> {{sum(1, 2, null, 4)}} is supposed to return {{7}}.
> The current implementation is correct if at least one valid value is
> present; however, it is incorrect if only null values are aggregated.
> {{sum(null, null, null)}} should return {{null}} instead of {{0}}.
> Currently, only the Count aggregate handles the null-values-only case
> correctly.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
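For illustration, the null-handling semantics described in the issue can be sketched as a small standalone function. This is a hypothetical example, not Flink's actual `MaxAggregate`/`merge` implementation: it ignores null partial values and yields `null` only when every input is null, so `sum(1, 2, null, 4)` gives `7` while `sum(null, null, null)` gives `null`.

```java
import java.util.Arrays;
import java.util.List;

public class NullAwareSum {

    // Hypothetical sketch of SQL-style sum semantics:
    // null inputs are skipped; if no non-null value is seen, the result is null.
    static Integer sum(List<Integer> values) {
        Integer acc = null;
        for (Integer v : values) {
            if (v == null) {
                continue;               // ignore null partial values
            }
            acc = (acc == null) ? v : acc + v;
        }
        return acc;                     // null when only nulls were aggregated
    }

    public static void main(String[] args) {
        System.out.println(sum(Arrays.asList(1, 2, null, 4)));    // 7
        System.out.println(sum(Arrays.asList(null, null, null))); // null
    }
}
```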