[ https://issues.apache.org/jira/browse/SPARK-20211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955239#comment-15955239 ]

Hyukjin Kwon commented on SPARK-20211:
--------------------------------------

Please refer to http://spark.apache.org/contributing.html

{quote}
Critical: a large minority of users are missing important functionality without 
this, and/or a workaround is difficult
{quote}

A workaround is shown below:

{code}
scala> sql("select double(1) > double(0.0001)").show()
+--------------------------------------------+
|(CAST(1 AS DOUBLE) > CAST(0.0001 AS DOUBLE))|
+--------------------------------------------+
|                                        true|
+--------------------------------------------+
{code}
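For reference, plain JVM decimal arithmetic handles operands of differing scales without any precision error, which is the behavior one would expect from the literal comparison here. A minimal standalone sketch (not Spark code, just {{java.math.BigDecimal}}):

```java
import java.math.BigDecimal;

public class DecimalCompare {
    public static void main(String[] args) {
        // "1" has precision 1, scale 0; "0.0001" has precision 1, scale 4.
        BigDecimal one = new BigDecimal("1");
        BigDecimal small = new BigDecimal("0.0001");
        // compareTo aligns the scales internally, so no precision error occurs.
        System.out.println(one.compareTo(small) > 0);  // prints "true"
    }
}
```

Casting both sides to double, as in the workaround above, sidesteps the decimal type-coercion path entirely.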

Also, it states,

{quote}
Priority. Set to Major or below; higher priorities are generally reserved for 
committers to set
{quote}

> `1 > 0.0001` throws Decimal scale (0) cannot be greater than precision (-2) 
> exception
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-20211
>                 URL: https://issues.apache.org/jira/browse/SPARK-20211
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1
>            Reporter: StanZhai
>            Priority: Critical
>              Labels: correctness
>
> The following SQL:
> {code}
> select 1 > 0.0001 from tb
> {code}
> throws a `Decimal scale (0) cannot be greater than precision (-2)` exception 
> in Spark 2.x.
> `floor(0.0001)` and `ceil(0.0001)` have the same problem in Spark 1.6.x and 
> Spark 2.x.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
