[ 
https://issues.apache.org/jira/browse/SPARK-31306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Cutler resolved SPARK-31306.
----------------------------------
    Resolution: Fixed

Issue resolved by pull request 28071
https://github.com/apache/spark/pull/28071

> rand() function documentation suggests an inclusive upper bound of 1.0
> ----------------------------------------------------------------------
>
>                 Key: SPARK-31306
>                 URL: https://issues.apache.org/jira/browse/SPARK-31306
>             Project: Spark
>          Issue Type: Documentation
>          Components: PySpark, R, Spark Core
>    Affects Versions: 2.4.5, 3.0.0
>            Reporter: Ben
>            Priority: Major
>
>  The rand() function in PySpark, Spark, and R is documented as drawing from 
> U[0.0, 1.0]. This suggests an inclusive upper bound and can be confusing: 
> for a distribution written as X ~ U(a, b), x can equal a or b, so writing 
> U[0.0, 1.0] suggests the returned value could include 1.0. The function 
> itself uses Rand(), which is documented (see line 71 of the Rand 
> implementation) as having a result in the range [0, 1).
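
A quick illustration of the distinction at issue: Python's built-in
random.random() is likewise documented to draw from the half-open
interval [0.0, 1.0), so 1.0 is never returned. This is plain Python
rather than PySpark (a PySpark session is not assumed here), but the
interval convention it demonstrates is the same one the corrected
rand() documentation describes.

```python
import random

# random.random() returns a float in the half-open interval [0.0, 1.0):
# 0.0 is a possible result, but 1.0 is not. Writing the range as
# [0, 1) (rather than U[0.0, 1.0]) makes the exclusive upper bound explicit.
samples = [random.random() for _ in range(100_000)]

assert all(0.0 <= x < 1.0 for x in samples)
print("all %d samples fall in [0.0, 1.0)" % len(samples))
```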



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
