HyukjinKwon commented on PR #46417:
URL: https://github.com/apache/spark/pull/46417#issuecomment-2101624723

   ```
   2024-05-08T12:03:25.2315797Z ======================================================================
   2024-05-08T12:03:25.2318738Z     self.assert_eq(
   2024-05-08T12:03:25.2319875Z     return assertPandasOnSparkEqual(
   2024-05-08T12:03:25.2321140Z     actual = actual.to_pandas()
   2024-05-08T12:03:25.2322246Z     return self._to_pandas()
   2024-05-08T12:03:25.2323357Z     return self._to_internal_pandas().copy()
   2024-05-08T12:03:25.2324573Z     return self._psdf._internal.to_pandas_frame.index
   2024-05-08T12:03:25.2325972Z     setattr(self, attr_name, fn(self))
   2024-05-08T12:03:25.2326986Z     pdf = sdf.toPandas()
   2024-05-08T12:03:25.2328107Z     return self._session.client.to_pandas(query)
   2024-05-08T12:03:25.2329443Z     table, schema, metrics, observed_metrics, _ = self._execute_and_fetch(
   2024-05-08T12:03:25.2333343Z     self._handle_rpc_error(error)
   2024-05-08T12:03:25.2335758Z Traceback (most recent call last):
   2024-05-08T12:03:25.2336884Z     process()
   2024-05-08T12:03:25.2340430Z     for batch in iterator:
   2024-05-08T12:03:25.2341708Z     for series in iterator:
   2024-05-08T12:03:25.2342700Z     for result_batch, result_type in result_iter:
   ```
   Running secret scanning on commit d31c8407a1a6a4188411ee9456ec0ca9544830e0
   No Databricks code found. Allowing Push: https://github.com/HyukjinKwon/spark.git
   Enumerating objects: 29, done.
   Counting objects: 100% (29/29), done.
   Delta compression using up to 16 threads
   Compressing objects: 100% (15/15), done.
   Writing objects: 100% (15/15), 1.18 KiB | 1.18 MiB/s, done.
   Total 15 (delta 14), reused 0 (delta 0), pack-reused 0
   remote: Resolving deltas: 100% (14/14), completed with 14 local objects.
   To https://github.com/HyukjinKwon/spark.git
    + 26cfd358a1e5...d31c8407a1a6 branch-3.5 -> branch-3.5 (forced update)
   (python3.11) ➜  spark-35 git:(branch-3.5) vi aaa
   (python3.11) ➜  spark-35 git:(branch-3.5) ✗ cat aaa
   ```
   ======================================================================
   ERROR [1.121s]: test_indexer_between_time (pyspark.pandas.tests.connect.indexes.test_parity_datetime.DatetimeIndexParityTests)
   ----------------------------------------------------------------------
   Traceback (most recent call last):
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/tests/indexes/test_datetime.py", line 155, in test_indexer_between_time
       self.assert_eq(
     File "/home/runner/work/spark/spark-3.5/python/pyspark/testing/pandasutils.py", line 525, in assert_eq
       return assertPandasOnSparkEqual(
     File "/home/runner/work/spark/spark-3.5/python/pyspark/testing/pandasutils.py", line 457, in assertPandasOnSparkEqual
       actual = actual.to_pandas()
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/indexes/base.py", line 524, in to_pandas
       return self._to_pandas()
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/indexes/base.py", line 530, in _to_pandas
       return self._to_internal_pandas().copy()
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/indexes/base.py", line 503, in _to_internal_pandas
       return self._psdf._internal.to_pandas_frame.index
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/utils.py", line 600, in wrapped_lazy_property
       setattr(self, attr_name, fn(self))
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/internal.py", line 1115, in to_pandas_frame
       pdf = sdf.toPandas()
     File "/home/runner/work/spark/spark-3.5/python/pyspark/sql/connect/dataframe.py", line 1663, in toPandas
       return self._session.client.to_pandas(query)
     File "/home/runner/work/spark/spark-3.5/python/pyspark/sql/connect/client/core.py", line 873, in to_pandas
       table, schema, metrics, observed_metrics, _ = self._execute_and_fetch(
     File "/home/runner/work/spark/spark-3.5/python/pyspark/sql/connect/client/core.py", line 1283, in _execute_and_fetch
       for response in self._execute_and_fetch_as_iterator(req):
     File "/home/runner/work/spark/spark-3.5/python/pyspark/sql/connect/client/core.py", line 1264, in _execute_and_fetch_as_iterator
       self._handle_error(error)
     File "/home/runner/work/spark/spark-3.5/python/pyspark/sql/connect/client/core.py", line 1503, in _handle_error
       self._handle_rpc_error(error)
     File "/home/runner/work/spark/spark-3.5/python/pyspark/sql/connect/client/core.py", line 1539, in _handle_rpc_error
       raise convert_exception(info, status.message) from None
   pyspark.errors.exceptions.connect.PythonException:
     An exception was thrown from the Python worker. Please see the stack trace below.
   Traceback (most recent call last):
     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/worker.py", line 1834, in main
       process()
     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/worker.py", line 1826, in process
       serializer.dump_stream(out_iter, outfile)
     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/pandas/serializers.py", line 531, in dump_stream
       return ArrowStreamSerializer.dump_stream(self, init_stream_yield_batches(), stream)
     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/pandas/serializers.py", line 104, in dump_stream
       for batch in iterator:
     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/pandas/serializers.py", line 524, in init_stream_yield_batches
       for series in iterator:
     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/worker.py", line 1529, in func
       for result_batch, result_type in result_iter:
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/groupby.py", line 2295, in rename_output
       pdf = func(pdf)
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/accessors.py", line 350, in new_func
       return original_func(o, *args, **kwds)
     File "/home/runner/work/spark/spark-3.5/python/pyspark/pandas/indexes/datetimes.py", line 750, in pandas_between_time
       return pdf.between_time(start_time, end_time, include_start, include_end)
     File "/usr/share/miniconda/envs/server-env/lib/python3.10/site-packages/pandas/core/generic.py", line 9371, in between_time
       raise TypeError("Index must be DatetimeIndex")
   TypeError: Index must be DatetimeIndex
   
   
   ----------------------------------------------------------------------
   ```
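   For context, the final frame above is pandas' own guard in `DataFrame.between_time`, which only accepts a frame indexed by a `DatetimeIndex`. A minimal plain-pandas sketch of how that `TypeError` surfaces (independent of the pandas-on-Spark call path in the traceback, and not the fix itself) could look like this:
   ```
   import pandas as pd

   # between_time only works on a frame whose index is a DatetimeIndex.
   idx = pd.to_datetime(
       ["2024-05-08 09:00", "2024-05-08 10:00", "2024-05-08 11:00", "2024-05-08 12:00"]
   )
   ok = pd.DataFrame({"v": range(4)}, index=idx)
   print(ok.between_time("09:30", "11:30"))  # keeps the 10:00 and 11:00 rows

   # If the DatetimeIndex is lost (e.g. the frame reaches between_time with a
   # plain RangeIndex), pandas raises the same error seen in the CI log above.
   broken = ok.reset_index(drop=True)
   try:
       broken.between_time("09:30", "11:30")
   except TypeError as e:
       print(e)  # Index must be DatetimeIndex
   ```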

