[jira] [Resolved] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-06 Thread Perrine Letellier (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Perrine Letellier resolved SPARK-20969.
---
Resolution: Not A Problem

> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> import org.apache.spark.sql.expressions.Window
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+-----------+-----+
> | id| ts|description| last|
> +---+---+-----------+-----+
> | i1|  1|      desc1|desc2|
> | i1|  1|      desc2|desc2|
> | i1|  2|      desc3|desc3|
> +---+---+-----------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
> +---+---+-----------+----------+
> | id| ts|description|hackedLast|
> +---+---+-----------+----------+
> | i1|  2|      desc3|     desc3|
> | i1|  1|      desc1|     desc3|
> | i1|  1|      desc2|     desc3|
> +---+---+-----------+----------+
> {code}
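
The "Not A Problem" resolution follows from the default window frame: when a window has an ORDER BY but no explicit frame, Spark uses RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so last() only sees rows up to the current ts value. A minimal sketch that spells out that default (assuming the df from the description above; Window.unboundedPreceding and Window.currentRow are Long constants available in Spark 2.1+):
{code}
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, last}

// Make the implicit default frame explicit:
// RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
val defaultFrame = Window
  .partitionBy("id")
  .orderBy(col("ts").asc)
  .rangeBetween(Window.unboundedPreceding, Window.currentRow)

// Should reproduce the output shown above (desc2, desc2, desc3),
// because rows with a larger ts fall outside the current row's frame.
df.withColumn("last", last(col("description")).over(defaultFrame)).show()
{code}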






[jira] [Comment Edited] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-06 Thread Perrine Letellier (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038825#comment-16038825
 ] 

Perrine Letellier edited comment on SPARK-20969 at 6/6/17 1:00 PM:
---

[~viirya] Thanks for your answer!
I could get the expected result by specifying
{code}Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing){code}
instead of simply {code}Window.partitionBy("id").orderBy(col("ts").asc){code}.

Is it documented in any API doc that the default frame is {{RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW}}?
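
For reference, a minimal sketch of that workaround (assuming the df from the issue description; Window.unboundedPreceding and Window.unboundedFollowing are Long constants available since Spark 2.1):
{code}
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, last}

// Frame covering the entire partition, so last() sees every row
// regardless of the current row's ts value.
val fullFrame = Window
  .partitionBy("id")
  .orderBy(col("ts").asc)
  .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

// Every row in partition "i1" should now get "desc3".
df.withColumn("last", last(col("description")).over(fullFrame)).show()
{code}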


was (Author: pletelli):
[~viirya] Thanks for your answer!
I could get the expected result by specifying
{code}Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing){code}
instead of simply {code}Window.partitionBy("id").orderBy(col("ts").asc){code}.

Is it documented in any API doc that the default frame is {code}RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW{code}?

> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> import org.apache.spark.sql.expressions.Window
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+-----------+-----+
> | id| ts|description| last|
> +---+---+-----------+-----+
> | i1|  1|      desc1|desc2|
> | i1|  1|      desc2|desc2|
> | i1|  2|      desc3|desc3|
> +---+---+-----------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
> +---+---+-----------+----------+
> | id| ts|description|hackedLast|
> +---+---+-----------+----------+
> | i1|  2|      desc3|     desc3|
> | i1|  1|      desc1|     desc3|
> | i1|  1|      desc2|     desc3|
> +---+---+-----------+----------+
> {code}






[jira] [Comment Edited] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-06 Thread Perrine Letellier (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038825#comment-16038825
 ] 

Perrine Letellier edited comment on SPARK-20969 at 6/6/17 1:00 PM:
---

[~viirya] Thanks for your answer!
I could get the expected result by specifying
{{Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)}}
instead of simply {{Window.partitionBy("id").orderBy(col("ts").asc)}}.

Is it documented in any API doc that the default frame is {{RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW}}?


was (Author: pletelli):
[~viirya] Thanks for your answer!
I could get the expected result by specifying
{code}Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing){code}
instead of simply {code}Window.partitionBy("id").orderBy(col("ts").asc){code}.

Is it documented in any API doc that the default frame is {{RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW}}?

> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> import org.apache.spark.sql.expressions.Window
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+-----------+-----+
> | id| ts|description| last|
> +---+---+-----------+-----+
> | i1|  1|      desc1|desc2|
> | i1|  1|      desc2|desc2|
> | i1|  2|      desc3|desc3|
> +---+---+-----------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
> +---+---+-----------+----------+
> | id| ts|description|hackedLast|
> +---+---+-----------+----------+
> | i1|  2|      desc3|     desc3|
> | i1|  1|      desc1|     desc3|
> | i1|  1|      desc2|     desc3|
> +---+---+-----------+----------+
> {code}






[jira] [Comment Edited] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-06 Thread Perrine Letellier (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038825#comment-16038825
 ] 

Perrine Letellier edited comment on SPARK-20969 at 6/6/17 12:59 PM:


[~viirya] Thanks for your answer!
I could get the expected result by specifying
{code}Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing){code}
instead of simply {code}Window.partitionBy("id").orderBy(col("ts").asc){code}.

Is it documented in any API doc that the default frame is {code}RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW{code}?


was (Author: pletelli):
[~viirya] Thanks for your answer!
I could get the expected result by specifying
{{Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)}}
instead of simply {{Window.partitionBy("id").orderBy(col("ts").asc)}}.

Is it documented in any API doc that the default frame is {{RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW}}?

> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> import org.apache.spark.sql.expressions.Window
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+-----------+-----+
> | id| ts|description| last|
> +---+---+-----------+-----+
> | i1|  1|      desc1|desc2|
> | i1|  1|      desc2|desc2|
> | i1|  2|      desc3|desc3|
> +---+---+-----------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
> +---+---+-----------+----------+
> | id| ts|description|hackedLast|
> +---+---+-----------+----------+
> | i1|  2|      desc3|     desc3|
> | i1|  1|      desc1|     desc3|
> | i1|  1|      desc2|     desc3|
> +---+---+-----------+----------+
> {code}






[jira] [Comment Edited] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-06 Thread Perrine Letellier (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038825#comment-16038825
 ] 

Perrine Letellier edited comment on SPARK-20969 at 6/6/17 12:59 PM:


[~viirya] Thanks for your answer!
I could get the expected result by specifying
{{Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)}}
instead of simply {{Window.partitionBy("id").orderBy(col("ts").asc)}}.

Is it documented in any API doc that the default frame is {{RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW}}?


was (Author: pletelli):
[~viirya] Thanks for your answer!
I could get the expected result by specifying
{code}Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing){code}
instead of simply {code}Window.partitionBy("id").orderBy(col("ts").asc){code}.

Is it documented in any API doc that the default frame is {code}RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW{code}?

> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> import org.apache.spark.sql.expressions.Window
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+-----------+-----+
> | id| ts|description| last|
> +---+---+-----------+-----+
> | i1|  1|      desc1|desc2|
> | i1|  1|      desc2|desc2|
> | i1|  2|      desc3|desc3|
> +---+---+-----------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
> +---+---+-----------+----------+
> | id| ts|description|hackedLast|
> +---+---+-----------+----------+
> | i1|  2|      desc3|     desc3|
> | i1|  1|      desc1|     desc3|
> | i1|  1|      desc2|     desc3|
> +---+---+-----------+----------+
> {code}






[jira] [Commented] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-06 Thread Perrine Letellier (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038825#comment-16038825
 ] 

Perrine Letellier commented on SPARK-20969:
---

[~viirya] Thanks for your answer!
I could get the expected result by specifying
{code}Window.partitionBy("id").orderBy(col("ts").asc).rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing){code}
instead of simply {code}Window.partitionBy("id").orderBy(col("ts").asc){code}.

Is it documented in any API doc that the default frame is {code}RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW{code}?

> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> import org.apache.spark.sql.expressions.Window
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+-----------+-----+
> | id| ts|description| last|
> +---+---+-----------+-----+
> | i1|  1|      desc1|desc2|
> | i1|  1|      desc2|desc2|
> | i1|  2|      desc3|desc3|
> +---+---+-----------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
> +---+---+-----------+----------+
> | id| ts|description|hackedLast|
> +---+---+-----------+----------+
> | i1|  2|      desc3|     desc3|
> | i1|  1|      desc1|     desc3|
> | i1|  1|      desc2|     desc3|
> +---+---+-----------+----------+
> {code}






[jira] [Updated] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-06 Thread Perrine Letellier (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Perrine Letellier updated SPARK-20969:
--
Description: 
The column on which `orderBy` is performed is effectively treated as another column to partition by.

{code}
scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
scala> import org.apache.spark.sql.expressions.Window
scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
scala> df.withColumn("last", last(col("description")).over(window)).show
+---+---+-----------+-----+
| id| ts|description| last|
+---+---+-----------+-----+
| i1|  1|      desc1|desc2|
| i1|  1|      desc2|desc2|
| i1|  2|      desc3|desc3|
+---+---+-----------+-----+
{code}

However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.

{code}
scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
+---+---+-----------+----------+
| id| ts|description|hackedLast|
+---+---+-----------+----------+
| i1|  2|      desc3|     desc3|
| i1|  1|      desc1|     desc3|
| i1|  1|      desc2|     desc3|
+---+---+-----------+----------+
{code}

  was:
The column on which `orderBy` is performed is effectively treated as another column to partition by.

{code}
scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
scala> df.withColumn("last", last(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  1|       desc1|desc2|
| i1|  1|       desc2|desc2|
| i1|  2|       desc3|desc3|
+---+---+------------+-----+
{code}

However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.

{code}
scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
scala> df.withColumn("last", first(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  2|       desc3|desc3|
| i1|  1|       desc1|desc3|
| i1|  1|       desc2|desc3|
+---+---+------------+-----+
{code}


> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> import org.apache.spark.sql.expressions.Window
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+-----------+-----+
> | id| ts|description| last|
> +---+---+-----------+-----+
> | i1|  1|      desc1|desc2|
> | i1|  1|      desc2|desc2|
> | i1|  2|      desc3|desc3|
> +---+---+-----------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("hackedLast", first(col("description")).over(window)).show
> +---+---+-----------+----------+
> | id| ts|description|hackedLast|
> +---+---+-----------+----------+
> | i1|  2|      desc3|     desc3|
> | i1|  1|      desc1|     desc3|
> | i1|  1|      desc2|     desc3|
> +---+---+-----------+----------+
> {code}






[jira] [Updated] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-02 Thread Perrine Letellier (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Perrine Letellier updated SPARK-20969:
--
Description: 
The column on which `orderBy` is performed is effectively treated as another column to partition by.

{code}
scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
scala> df.withColumn("last", last(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  1|       desc1|desc2|
| i1|  1|       desc2|desc2|
| i1|  2|       desc3|desc3|
+---+---+------------+-----+
{code}

However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.

{code}
scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
scala> df.withColumn("last", first(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  2|       desc3|desc3|
| i1|  1|       desc1|desc3|
| i1|  1|       desc2|desc3|
+---+---+------------+-----+
{code}

  was:
The column on which `orderBy` is performed is effectively treated as another column to partition by.

{code}
scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "desc")
scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
scala> df.withColumn("last", last(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  1|       desc1|desc2|
| i1|  1|       desc2|desc2|
| i1|  2|       desc3|desc3|
+---+---+------------+-----+
{code}

However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.

{code}
scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
scala> df.withColumn("last", first(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  2|       desc3|desc3|
| i1|  1|       desc1|desc3|
| i1|  1|       desc2|desc3|
+---+---+------------+-----+
{code}


> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "description")
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+------------+-----+
> | id| ts| description| last|
> +---+---+------------+-----+
> | i1|  1|       desc1|desc2|
> | i1|  1|       desc2|desc2|
> | i1|  2|       desc3|desc3|
> +---+---+------------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("last", first(col("description")).over(window)).show
> +---+---+------------+-----+
> | id| ts| description| last|
> +---+---+------------+-----+
> | i1|  2|       desc3|desc3|
> | i1|  1|       desc1|desc3|
> | i1|  1|       desc2|desc3|
> +---+---+------------+-----+
> {code}






[jira] [Updated] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-02 Thread Perrine Letellier (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Perrine Letellier updated SPARK-20969:
--
Description: 
The column on which `orderBy` is performed is effectively treated as another column to partition by.

{code}
scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "desc")
scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
scala> df.withColumn("last", last(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  1|       desc1|desc2|
| i1|  1|       desc2|desc2|
| i1|  2|       desc3|desc3|
+---+---+------------+-----+
{code}

However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.

{code}
scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
scala> df.withColumn("last", first(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  2|       desc3|desc3|
| i1|  1|       desc1|desc3|
| i1|  1|       desc2|desc3|
+---+---+------------+-----+
{code}

  was:
{code}
scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "desc")
scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
scala> df.withColumn("last", last(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  1|       desc1|desc2|
| i1|  1|       desc2|desc2|
| i1|  2|       desc3|desc3|
+---+---+------------+-----+
{code}

However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.

{code}
scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
scala> df.withColumn("last", first(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  2|       desc3|desc3|
| i1|  1|       desc1|desc3|
| i1|  1|       desc2|desc3|
+---+---+------------+-----+
{code}


> last() aggregate function fails returning the right answer with ordered 
> windows
> ---
>
> Key: SPARK-20969
> URL: https://issues.apache.org/jira/browse/SPARK-20969
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
>Reporter: Perrine Letellier
>
> The column on which `orderBy` is performed is effectively treated as another column to partition by.
> {code}
> scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "desc")
> scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
> scala> df.withColumn("last", last(col("description")).over(window)).show
> +---+---+------------+-----+
> | id| ts| description| last|
> +---+---+------------+-----+
> | i1|  1|       desc1|desc2|
> | i1|  1|       desc2|desc2|
> | i1|  2|       desc3|desc3|
> +---+---+------------+-----+
> {code}
> However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.
> {code}
> scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
> scala> df.withColumn("last", first(col("description")).over(window)).show
> +---+---+------------+-----+
> | id| ts| description| last|
> +---+---+------------+-----+
> | i1|  2|       desc3|desc3|
> | i1|  1|       desc1|desc3|
> | i1|  1|       desc2|desc3|
> +---+---+------------+-----+
> {code}






[jira] [Created] (SPARK-20969) last() aggregate function fails returning the right answer with ordered windows

2017-06-02 Thread Perrine Letellier (JIRA)
Perrine Letellier created SPARK-20969:
-

 Summary: last() aggregate function fails returning the right 
answer with ordered windows
 Key: SPARK-20969
 URL: https://issues.apache.org/jira/browse/SPARK-20969
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.1.1
Reporter: Perrine Letellier


{code}
scala> val df = sc.parallelize(List(("i1", 1, "desc1"), ("i1", 1, "desc2"), ("i1", 2, "desc3"))).toDF("id", "ts", "desc")
scala> val window = Window.partitionBy("id").orderBy(col("ts").asc)
scala> df.withColumn("last", last(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  1|       desc1|desc2|
| i1|  1|       desc2|desc2|
| i1|  2|       desc3|desc3|
+---+---+------------+-----+
{code}

However, the expected result is the same answer as asking for `first()` over a window ordered in descending order.

{code}
scala> val window = Window.partitionBy("id").orderBy(col("ts").desc)
scala> df.withColumn("last", first(col("description")).over(window)).show
+---+---+------------+-----+
| id| ts| description| last|
+---+---+------------+-----+
| i1|  2|       desc3|desc3|
| i1|  1|       desc1|desc3|
| i1|  1|       desc2|desc3|
+---+---+------------+-----+
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org