[ https://issues.apache.org/jira/browse/SPARK-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Frank Rosner updated SPARK-9971:
--------------------------------
    Description: 
h4. Problem Description

When using the {{max}} function on a {{DoubleType}} column that contains {{Double.NaN}} values, the returned maximum value will be {{Double.NaN}}.

This is because the aggregate compares each value against the running maximum. However, {{x < Double.NaN}} always evaluates to false for every {{x: Double}}, and so does {{x > Double.NaN}}. As a result, once {{Double.NaN}} becomes the running maximum, no subsequent value can ever replace it.
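
This comparison behaviour can be verified in a plain Scala REPL. A minimal sketch, independent of Spark:

{code}
// Every ordering comparison involving NaN evaluates to false, so once NaN
// becomes the running maximum, no candidate value can replace it.
val x = -10d
x < Double.NaN  // false
x > Double.NaN  // false
Double.NaN < x  // false
Double.NaN > x  // false
{code}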

h4. How to Reproduce

{code}
import org.apache.spark.sql.{SQLContext, Row}
import org.apache.spark.sql.functions.max
import org.apache.spark.sql.types._

val sql = new SQLContext(sc)
val rdd = sc.makeRDD(List(Row(Double.NaN), Row(-10d), Row(0d)))
val dataFrame = sql.createDataFrame(rdd, StructType(List(
  StructField("col", DoubleType, false)
)))
dataFrame.select(max("col")).first
// returns org.apache.spark.sql.Row = [NaN]
{code}
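
Until the aggregate itself ignores NaN, a possible workaround is to filter the NaN rows out before aggregating. A minimal sketch, assuming a Spark version that provides {{org.apache.spark.sql.functions.isnan}} (not yet available in 1.4):

{code}
import org.apache.spark.sql.functions.{isnan, max}

// Drop NaN rows before aggregating so the running maximum only ever
// sees real numbers.
dataFrame.filter(!isnan(dataFrame("col"))).select(max("col")).first
// returns org.apache.spark.sql.Row = [0.0]
{code}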

h4. Solution

The {{max}} and {{min}} functions should ignore NaN values, as they are not numbers. If a column contains only NaN values, the maximum and minimum are not defined.
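
To illustrate the proposed semantics, a minimal sketch in plain Scala (not the actual Spark aggregate code): NaN values are skipped, and an all-NaN input yields no result.

{code}
// Maximum over doubles, ignoring NaN. Returns None when the input contains
// nothing but NaN values, since the maximum is then undefined.
def maxIgnoringNaN(xs: Seq[Double]): Option[Double] = {
  val real = xs.filterNot(_.isNaN)
  if (real.isEmpty) None else Some(real.max)
}

maxIgnoringNaN(Seq(Double.NaN, -10d, 0d)) // Some(0.0)
maxIgnoringNaN(Seq(Double.NaN))           // None
{code}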

> MaxFunction not working correctly with columns containing Double.NaN
> --------------------------------------------------------------------
>
>                 Key: SPARK-9971
>                 URL: https://issues.apache.org/jira/browse/SPARK-9971
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>            Reporter: Frank Rosner
>            Priority: Minor
>


