[jira] [Updated] (SPARK-41898) Window.rowsBetween should handle `float("-inf")` and `float("+inf")` as argument

2023-01-05 Thread Sandeep Singh (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-41898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Singh updated SPARK-41898:
--
Description: 
{code:python}
from pyspark.sql import Window
from pyspark.sql import functions as F

df = self.spark.createDataFrame(
    [(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"]
)
w = Window.partitionBy("value").orderBy("key")

sel = df.select(
    df.value,
    df.key,
    F.max("key").over(w.rowsBetween(0, 1)),
    F.min("key").over(w.rowsBetween(0, 1)),
    F.count("key").over(w.rowsBetween(float("-inf"), float("inf"))),
    F.row_number().over(w),
    F.rank().over(w),
    F.dense_rank().over(w),
    F.ntile(2).over(w),
)
rs = sorted(sel.collect()){code}
{code}
Traceback (most recent call last):
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/tests/test_functions.py", line 821, in test_window_functions
    F.count("key").over(w.rowsBetween(float("-inf"), float("inf"))),
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/window.py", line 152, in rowsBetween
    raise TypeError(f"start must be a int, but got {type(start).__name__}")
TypeError: start must be a int, but got float {code}

  was:
{code:python}
from py4j.protocol import Py4JJavaError
from pyspark.sql import Row
from pyspark.sql.functions import assert_true

df = self.spark.range(3)

self.assertEqual(
    df.select(assert_true(df.id < 3)).toDF("val").collect(),
    [Row(val=None), Row(val=None), Row(val=None)],
)

with self.assertRaises(Py4JJavaError) as cm:
    df.select(assert_true(df.id < 2, "too big")).toDF("val").collect(){code}
{code:python}
from pyspark.sql import Window
from pyspark.sql import functions as F

df = self.spark.createDataFrame(
    [(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"]
)
w = Window.partitionBy("value").orderBy("key")

sel = df.select(
    df.value,
    df.key,
    F.max("key").over(w.rowsBetween(0, 1)),
    F.min("key").over(w.rowsBetween(0, 1)),
    F.count("key").over(w.rowsBetween(float("-inf"), float("inf"))),
    F.row_number().over(w),
    F.rank().over(w),
    F.dense_rank().over(w),
    F.ntile(2).over(w),
)
rs = sorted(sel.collect()){code}


> Window.rowsBetween should handle `float("-inf")` and `float("+inf")` as 
> argument
> 
>
>                  Key: SPARK-41898
>                  URL: https://issues.apache.org/jira/browse/SPARK-41898
>              Project: Spark
>           Issue Type: Sub-task
>           Components: Connect
>     Affects Versions: 3.4.0
>             Reporter: Sandeep Singh
>             Priority: Major
>
> {code:python}
> from pyspark.sql import Window
> from pyspark.sql import functions as F
>
> df = self.spark.createDataFrame(
>     [(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"]
> )
> w = Window.partitionBy("value").orderBy("key")
>
> sel = df.select(
>     df.value,
>     df.key,
>     F.max("key").over(w.rowsBetween(0, 1)),
>     F.min("key").over(w.rowsBetween(0, 1)),
>     F.count("key").over(w.rowsBetween(float("-inf"), float("inf"))),
>     F.row_number().over(w),
>     F.rank().over(w),
>     F.dense_rank().over(w),
>     F.ntile(2).over(w),
> )
> rs = sorted(sel.collect()){code}
> {code}
> Traceback (most recent call last):
>   File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/tests/test_functions.py", line 821, in test_window_functions
>     F.count("key").over(w.rowsBetween(float("-inf"), float("inf"))),
>   File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/window.py", line 152, in rowsBetween
>     raise TypeError(f"start must be a int, but got {type(start).__name__}")
> TypeError: start must be a int, but got float {code}






[jira] [Updated] (SPARK-41898) Window.rowsBetween should handle `float("-inf")` and `float("+inf")` as argument

2023-01-05 Thread Sandeep Singh (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-41898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Singh updated SPARK-41898:
--
Description: 
{code:python}
from py4j.protocol import Py4JJavaError
from pyspark.sql import Row
from pyspark.sql.functions import assert_true

df = self.spark.range(3)

self.assertEqual(
    df.select(assert_true(df.id < 3)).toDF("val").collect(),
    [Row(val=None), Row(val=None), Row(val=None)],
)

with self.assertRaises(Py4JJavaError) as cm:
    df.select(assert_true(df.id < 2, "too big")).toDF("val").collect(){code}
{code:python}
from pyspark.sql import Window
from pyspark.sql import functions as F

df = self.spark.createDataFrame(
    [(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"]
)
w = Window.partitionBy("value").orderBy("key")

sel = df.select(
    df.value,
    df.key,
    F.max("key").over(w.rowsBetween(0, 1)),
    F.min("key").over(w.rowsBetween(0, 1)),
    F.count("key").over(w.rowsBetween(float("-inf"), float("inf"))),
    F.row_number().over(w),
    F.rank().over(w),
    F.dense_rank().over(w),
    F.ntile(2).over(w),
)
rs = sorted(sel.collect()){code}

  was:
PySpark throws Py4JJavaError, whereas Spark Connect throws SparkConnectException.
{code:python}
from py4j.protocol import Py4JJavaError
from pyspark.sql import Row
from pyspark.sql.functions import assert_true

df = self.spark.range(3)

self.assertEqual(
    df.select(assert_true(df.id < 3)).toDF("val").collect(),
    [Row(val=None), Row(val=None), Row(val=None)],
)

with self.assertRaises(Py4JJavaError) as cm:
    df.select(assert_true(df.id < 2, "too big")).toDF("val").collect(){code}
{code}
Traceback (most recent call last):
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/tests/test_functions.py", line 950, in test_assert_true
    df.select(assert_true(df.id < 2, "too big")).toDF("val").collect()
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 1076, in collect
    table = self._session.client.to_table(query)
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 414, in to_table
    table, _ = self._execute_and_fetch(req)
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 586, in _execute_and_fetch
    self._handle_error(rpc_error)
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 629, in _handle_error
    raise SparkConnectException(status.message, info.reason) from None
pyspark.sql.connect.client.SparkConnectException: (java.lang.RuntimeException) too big {code}
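
Until the Connect client maps server-side failures back onto the classic exception hierarchy, a shared test can hedge by accepting either exception type. A sketch, assuming the same unittest context as the snippet above (self, df, and assert_true already set up):
{code:python}
from py4j.protocol import Py4JJavaError
from pyspark.sql.connect.client import SparkConnectException

# assertRaises accepts a tuple of exception types, so the same test
# passes against both a classic Py4J-backed session and a Connect session.
with self.assertRaises((Py4JJavaError, SparkConnectException)):
    df.select(assert_true(df.id < 2, "too big")).toDF("val").collect(){code}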


> Window.rowsBetween should handle `float("-inf")` and `float("+inf")` as 
> argument
> 
>
>                  Key: SPARK-41898
>                  URL: https://issues.apache.org/jira/browse/SPARK-41898
>              Project: Spark
>           Issue Type: Sub-task
>           Components: Connect
>     Affects Versions: 3.4.0
>             Reporter: Sandeep Singh
>             Priority: Major
>
> {code:python}
> from py4j.protocol import Py4JJavaError
> from pyspark.sql import Row
> from pyspark.sql.functions import assert_true
>
> df = self.spark.range(3)
>
> self.assertEqual(
>     df.select(assert_true(df.id < 3)).toDF("val").collect(),
>     [Row(val=None), Row(val=None), Row(val=None)],
> )
>
> with self.assertRaises(Py4JJavaError) as cm:
>     df.select(assert_true(df.id < 2, "too big")).toDF("val").collect(){code}
> {code:python}
> from pyspark.sql import Window
> from pyspark.sql import functions as F
>
> df = self.spark.createDataFrame(
>     [(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"]
> )
> w = Window.partitionBy("value").orderBy("key")
>
> sel = df.select(
>     df.value,
>     df.key,
>     F.max("key").over(w.rowsBetween(0, 1)),
>     F.min("key").over(w.rowsBetween(0, 1)),
>     F.count("key").over(w.rowsBetween(float("-inf"), float("inf"))),
>     F.row_number().over(w),
>     F.rank().over(w),
>     F.dense_rank().over(w),
>     F.ntile(2).over(w),
> )
> rs = sorted(sel.collect()){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org