[ https://issues.apache.org/jira/browse/SPARK-41793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653348#comment-17653348 ]
Bruce Robbins commented on SPARK-41793:
---------------------------------------

[~cloud_fan] [~ulysses] The change in behavior appears to be from commit [301a139638|https://github.com/apache/spark/commit/301a139638] (SPARK-39316). When I test with the commit immediately preceding, I get:
{noformat}
1
1
{noformat}
When I test on that commit, I get:
{noformat}
1
0
{noformat}
I used this to test in spark-sql:
{noformat}
create or replace temp view test_table as
select * from values
  (9223372036854775807l, cast('11342371013783243717493546650944543.47' as decimal(38,2))),
  (9223372036854775807l, cast('999999999999999999999999999999999999.99' as decimal(38,2)))
  as data(a, b);

SELECT
  COUNT(1) OVER (
    PARTITION BY a
    ORDER BY b ASC
    RANGE BETWEEN 10.2345 PRECEDING AND 6.7890 FOLLOWING
  ) AS CNT_1
FROM test_table;
{noformat}

> Incorrect result for window frames defined by a range clause on large decimals
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-41793
>                 URL: https://issues.apache.org/jira/browse/SPARK-41793
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.4.0
>            Reporter: Gera Shegalov
>            Priority: Major
>
> Context:
> https://github.com/NVIDIA/spark-rapids/issues/7429#issuecomment-1368040686
> The following windowing query on a simple two-row input should produce two
> non-empty windows as a result:
> {code}
> from pprint import pprint
> data = [
>     ('9223372036854775807', '11342371013783243717493546650944543.47'),
>     ('9223372036854775807', '999999999999999999999999999999999999.99')
> ]
> df1 = spark.createDataFrame(data, 'a STRING, b STRING')
> df2 = df1.select(df1.a.cast('LONG'), df1.b.cast('DECIMAL(38,2)'))
> df2.createOrReplaceTempView('test_table')
> df = sql('''
> SELECT
>   COUNT(1) OVER (
>     PARTITION BY a
>     ORDER BY b ASC
>     RANGE BETWEEN 10.2345 PRECEDING AND 6.7890 FOLLOWING
>   ) AS CNT_1
> FROM
>   test_table
> ''')
> res = df.collect()
> df.explain(True)
> pprint(res)
> {code}
> Spark
> 3.4.0-SNAPSHOT output:
> {code}
> [Row(CNT_1=1), Row(CNT_1=0)]
> {code}
> Spark 3.3.1 output as expected:
> {code}
> [Row(CNT_1=1), Row(CNT_1=1)]
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
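As a sanity check on the expected result quoted above, the RANGE frame can be modeled in plain Python with the `decimal` module. This is a sketch of the standard frame semantics (for each row, count partition rows whose ORDER BY value lies in [b - 10.2345, b + 6.7890]), not Spark's implementation; `decimal` is given more than 38 digits of precision so the second row's upper bound, which exceeds the largest DECIMAL(38,2) value, is still representable here.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # DECIMAL(38,2) values need 38 digits; give headroom

# The two values of column b from the repro; both rows share the same a,
# so they form a single window partition.
partition = [
    Decimal('11342371013783243717493546650944543.47'),
    Decimal('999999999999999999999999999999999999.99'),
]
preceding = Decimal('10.2345')
following = Decimal('6.7890')

# For each row, count partition rows inside [b - preceding, b + following].
cnt_1 = [
    sum(1 for other in partition
        if b - preceding <= other <= b + following)
    for b in partition
]
print(cnt_1)  # [1, 1], matching the Spark 3.3.1 output
```

Note that the second row's upper bound exceeds the DECIMAL(38,2) maximum of 999999999999999999999999999999999999.99, which is consistent with the frame-boundary arithmetic on large decimals being the fragile step.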