[ 
https://issues.apache.org/jira/browse/SPARK-41758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Singh updated SPARK-41758:
----------------------------------
    Description: 
Doctest in pyspark.sql.connect.window.Window.rowsBetween fails with the error 
below:
{code:java}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/column.py", line 324, in pyspark.sql.connect.column.Column.over
Failed example:
    window = Window.partitionBy("name").orderBy("age").rowsBetween(Window.unboundedPreceding, Window.currentRow)
Exception raised:
    Traceback (most recent call last):
      File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest pyspark.sql.connect.column.Column.over[1]>", line 1, in <module>
        window = Window.partitionBy("name").orderBy("age").rowsBetween(Window.unboundedPreceding, Window.currentRow)
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/utils.py", line 346, in wrapped
        raise NotImplementedError()
    NotImplementedError{code}
We should re-enable this doctest once the issue is fixed in Spark Connect.
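
For context, the failing doctest mirrors the classic PySpark example for Column.over. A rough sketch of the expected behaviour once Window.rowsBetween is supported in Connect (assuming a plain local SparkSession here; a Connect client would instead be built with the remote builder, which is an assumption about the setup, not part of this ticket):
{code:python}
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import rank, min, desc

# Assumes a local SparkSession; with Spark Connect the session would be
# created via SparkSession.builder.remote("sc://localhost") instead.
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])

# Running frame from the start of the partition up to the current row.
window = (
    Window.partitionBy("name")
    .orderBy("age")
    .rowsBetween(Window.unboundedPreceding, Window.currentRow)
)

# Each row gets its rank and the running minimum of "age" within the frame.
df.withColumn("rank", rank().over(window)) \
  .withColumn("min", min("age").over(window)) \
  .sort(desc("age")).show()
{code}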

  was:
Doctest in pyspark.sql.connect.column.Column.bitwiseAND fails with the error below:
{code:java}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/column.py", line 86, in pyspark.sql.connect.column.Column.bitwiseAND
Failed example:
    df.select(df.a.bitwiseAND(df.b)).collect()
Exception raised:
    Traceback (most recent call last):
      File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest pyspark.sql.connect.column.Column.bitwiseAND[2]>", line 1, in <module>
        df.select(df.a.bitwiseAND(df.b)).collect()
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 896, in collect
        pdf = self.toPandas()
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 910, in toPandas
        return self._session.client._to_pandas(query)
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 337, in _to_pandas
        return self._execute_and_fetch(req)
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 431, in _execute_and_fetch
        for b in self._stub.ExecutePlan(req, metadata=self._builder.metadata()):
      File "/usr/local/lib/python3.10/site-packages/grpc/_channel.py", line 426, in __next__
        return self._next()
      File "/usr/local/lib/python3.10/site-packages/grpc/_channel.py", line 826, in _next
        raise self
    grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
        status = StatusCode.UNKNOWN
        details = "[UNRESOLVED_ROUTINE] Cannot resolve function `bitwiseAND` on search path [`system`.`builtin`, `system`.`session`, `spark_catalog`.`default`]."
        debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:15002 {grpc_message:"[UNRESOLVED_ROUTINE] Cannot resolve function `bitwiseAND` on search path [`system`.`builtin`, `system`.`session`, `spark_catalog`.`default`].", grpc_status:2, created_time:"2022-12-28T05:16:01.360735-08:00"}"
    >
{code}
We should re-enable this doctest once the issue is fixed in Spark Connect.
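
For context, that doctest corresponds to the classic PySpark Column.bitwiseAND example; a rough sketch of the expected behaviour (assuming a plain local SparkSession rather than a Connect client):
{code:python}
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()

# 170 = 0b10101010 and 75 = 0b01001011, so the bitwise AND is 0b00001010 = 10.
df = spark.createDataFrame([Row(a=170, b=75)])
df.select(df.a.bitwiseAND(df.b)).collect()
# Expected: [Row((a & b)=10)]
{code}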


> Support Window.rowsBetween
> --------------------------
>
>                 Key: SPARK-41758
>                 URL: https://issues.apache.org/jira/browse/SPARK-41758
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Connect
>    Affects Versions: 3.4.0
>            Reporter: Sandeep Singh
>            Priority: Major
>
> Doctest in pyspark.sql.connect.window.Window.rowsBetween fails with the error below:
> {code:java}
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/column.py", line 324, in pyspark.sql.connect.column.Column.over
> Failed example:
>     window = Window.partitionBy("name").orderBy("age").rowsBetween(Window.unboundedPreceding, Window.currentRow)
> Exception raised:
>     Traceback (most recent call last):
>       File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File "<doctest pyspark.sql.connect.column.Column.over[1]>", line 1, in <module>
>         window = Window.partitionBy("name").orderBy("age").rowsBetween(Window.unboundedPreceding, Window.currentRow)
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/utils.py", line 346, in wrapped
>         raise NotImplementedError()
>     NotImplementedError{code}
> We should re-enable this doctest once the issue is fixed in Spark Connect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
