[ https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038825#comment-16038825 ]

Perrine Letellier edited comment on SPARK-20969 at 6/6/17 12:59 PM:
--------------------------------------------------------------------

[~viirya] Thanks for your answer!
I could get the expected result by specifying
{code}
Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
{code}
instead of simply
{code}
Window.partitionBy("id").orderBy(col("ts").asc)
{code}
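For reference, a self-contained sketch of that workaround applied to the DataFrame from the issue description (assuming a spark-shell session so that {{spark.implicits._}} is in scope; {{fullPartition}} is just an illustrative name):
{code}
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, last}

// Example data from the issue description (spark.implicits._ assumed in scope, as in spark-shell).
val df = Seq(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))
  .toDF("id", "ts", "description")

// Explicit frame covering the whole partition, so last() sees the last row of the partition
// instead of only the rows up to (and tied with) the current one.
val fullPartition = Window.partitionBy("id")
  .orderBy(col("ts").asc)
  .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

df.withColumn("last", last(col("description")).over(fullPartition)).show()
{code}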

Is it documented in any API doc that the default frame is {{RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW}}?
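Independently of the docs, one way to see which frame is actually applied is to inspect the plan: the Window operator should print the window spec, including the frame, although the exact output text varies by version. A minimal sketch:
{code}
// With no explicit frame, the plan's Window operator should show the default frame,
// e.g. RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
val defaultWindow = Window.partitionBy("id").orderBy(col("ts").asc)
df.withColumn("last", last(col("description")).over(defaultWindow)).explain(true)
{code}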


> last() aggregate function fails returning the right answer with ordered 
> windows
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-20969
>                 URL: https://issues.apache.org/jira/browse/SPARK-20969
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.1
>            Reporter: Perrine Letellier
>
> The column passed to `orderBy` appears to be treated as an additional column on 
> which to partition.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> import org.apache.spark.sql.expressions.Window
> scala> import org.apache.spark.sql.functions.{col, first, last}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+-----------+-----+
> | id| ts|description| last|
> +---+---+-----------+-----+
> | i1|  1|      desc1|desc2|
> | i1|  1|      desc2|desc2|
> | i1|  2|      desc3|desc3|
> +---+---+-----------+-----+
> {code}
> However, the expected result is the same answer as calling `first()` over a 
> window with descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
> +---+---+-----------+----------+
> | id| ts|description|hackedLast|
> +---+---+-----------+----------+
> | i1|  2|      desc3|     desc3|
> | i1|  1|      desc1|     desc3|
> | i1|  1|      desc2|     desc3|
> +---+---+-----------+----------+
> {code}


