[ 
https://issues.apache.org/jira/browse/SPARK-19451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15853782#comment-15853782
 ] 

Herman van Hovell commented on SPARK-19451:
-------------------------------------------

[~jchamp] how many rows are in your partitions? 2 billion? So this is an 
oversight, but I am not sure we should even try to support more than 
{{Int.MaxValue}} (2^31 - 1) values in a partition.

> Long values in Window function
> ------------------------------
>
>                 Key: SPARK-19451
>                 URL: https://issues.apache.org/jira/browse/SPARK-19451
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.1, 2.0.2
>            Reporter: Julien Champ
>
> Hi there,
> there seems to be a major limitation in spark window functions and 
> rangeBetween method.
> If I have the following code :
> {code:title=Example|borderStyle=solid}
>     val tw = Window.orderBy("date")
>       .partitionBy("id")
>       .rangeBetween(from, 0)
> {code}
> Everything works as long as the *from* value is not too large, even though 
> the rangeBetween() method accepts Long parameters.
> But if I set *from* to *-2160000000L*, it does not work!
> It is probably caused by this part of the between() method of the 
> WindowSpec class, which is called by rangeBetween():
> {code:title=between() method|borderStyle=solid}
>     val boundaryStart = start match {
>       case 0 => CurrentRow
>       case Long.MinValue => UnboundedPreceding
>       case x if x < 0 => ValuePreceding(-start.toInt)
>       case x if x > 0 => ValueFollowing(start.toInt)
>     }
> {code}
> (note the *.toInt* calls, which silently truncate the Long values)
> Does anybody know if there's a way to solve / patch this behavior?
> Any help will be appreciated.
> Thanks
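For reference, the overflow the reporter describes can be reproduced in plain Scala, independent of Spark. This is a minimal sketch (variable names are illustrative, not from the Spark source): a Long whose magnitude exceeds Int.MaxValue wraps around when `.toInt` is applied, which is exactly what happens to the boundary inside `ValuePreceding(-start.toInt)`.

```scala
// -2160000000L is a valid Long, but its magnitude exceeds
// Int.MaxValue (2147483647), so .toInt keeps only the low 32 bits.
val start = -2160000000L
val wrapped = (-start).toInt  // 2160000000 does not fit in an Int
println(wrapped)              // prints -2134967296: the boundary wraps negative
```

The wrapped value (-2134967296 = 2160000000 - 2^32) means the window frame ends up with a nonsensical negative "preceding" offset instead of failing loudly, which is why the symptom shows up only once *from* crosses the Int range.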



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
