[ https://issues.apache.org/jira/browse/SPARK-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14697868#comment-14697868 ]
Davies Liu commented on SPARK-9971:
-----------------------------------

We had a long discussion about how to support NaN, but didn't find a good approach in other databases to follow (they all handle it differently from one another), so we came up with a plan; see https://issues.apache.org/jira/browse/SPARK-9079. For 4 (NaN in aggregation), we have not decided yet. The current behavior is not consistent across functions:

{code}
>>> sf = sqlContext.createDataFrame([(1.0,), (float('nan'),)], ['f'])
>>> sf.selectExpr('min(f)', 'max(f)', 'sum(f)', 'avg(f)', 'count(f)').show()
+-------+-------+-------+-------+---------+
|'min(f)|'max(f)|'sum(f)|'avg(f)|'count(f)|
+-------+-------+-------+-------+---------+
|    1.0|    NaN|    NaN|    NaN|        2|
+-------+-------+-------+-------+---------+
{code}

cc [~yhuai] [~rxin] [~joshrosen]

> MaxFunction not working correctly with columns containing Double.NaN
> --------------------------------------------------------------------
>
>                 Key: SPARK-9971
>                 URL: https://issues.apache.org/jira/browse/SPARK-9971
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>            Reporter: Frank Rosner
>            Priority: Minor
>
> h4. Problem Description
> When using the {{max}} function on a {{DoubleType}} column that contains {{Double.NaN}} values, the returned maximum value will be {{Double.NaN}}.
> This is because it compares all values with the running maximum. However, {{x < Double.NaN}} will always evaluate to false for all {{x: Double}}, and so will {{x > Double.NaN}}.
> h4. How to Reproduce
> {code}
> import org.apache.spark.sql.{SQLContext, Row}
> import org.apache.spark.sql.types._
> import org.apache.spark.sql.functions.max
>
> val sql = new SQLContext(sc)
> val rdd = sc.makeRDD(List(Row(Double.NaN), Row(-10d), Row(0d)))
> val dataFrame = sql.createDataFrame(rdd, StructType(List(
>   StructField("col", DoubleType, false)
> )))
> dataFrame.select(max("col")).first
> // returns org.apache.spark.sql.Row = [NaN]
> {code}
> h4. Solution
> The {{max}} and {{min}} functions should ignore NaN values, as they are not numbers. If a column contains only NaN values, then the maximum and minimum are not defined.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
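The comparison behavior behind the bug can be illustrated in plain Python, without Spark. This is a minimal sketch of a running-max fold (a stand-in for the aggregate, not Spark's actual implementation): once NaN becomes the accumulator, no `>` comparison can displace it, because every comparison against NaN is false. A NaN-ignoring variant, as the Solution section proposes, is shown alongside.

{code}
import math

def naive_max(values):
    # Stand-in for a running-max aggregate: keep `acc`
    # unless the next value compares strictly greater.
    acc = values[0]
    for v in values[1:]:
        if v > acc:  # always False when acc is NaN, so NaN "sticks"
            acc = v
    return acc

def nan_ignoring_max(values):
    # Proposed behavior: skip NaN values entirely;
    # undefined (None here) if every value is NaN.
    finite = [v for v in values if not math.isnan(v)]
    return max(finite) if finite else None

data = [float('nan'), -10.0, 0.0]
print(naive_max(data))         # nan
print(nan_ignoring_max(data))  # 0.0
{code}

Note that with this naive fold the result even depends on input order: if the NaN arrives after a real value has become the accumulator, `NaN > acc` is also false and the NaN is silently dropped, which is another face of the same inconsistency.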