This is an automated email from the ASF dual-hosted git repository.

yangjie01 pushed a commit to branch branch-4.1
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-4.1 by this push:
     new aed0547af11d [SPARK-54375][CONNECT][TESTS] Add `assume` to cases in 
`PythonPipelineSuite` to skip tests when PyConnect dependencies are not available
aed0547af11d is described below

commit aed0547af11d3e1ca69c1f33f6e69acfb2308780
Author: yangjie01 <[email protected]>
AuthorDate: Mon Nov 17 16:40:42 2025 +0800

    [SPARK-54375][CONNECT][TESTS] Add `assume` to cases in 
`PythonPipelineSuite` to skip tests when PyConnect dependencies are not available
    
    ### What changes were proposed in this pull request?
    SPARK-54020 added some new test cases to `PythonPipelineSuite`. This PR adds 
`assume(PythonTestDepsChecker.isConnectDepsAvailable)` to those test cases so 
that they are skipped, rather than failing, when the PyConnect dependencies are 
missing.
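    
    The effect of `assume` can be illustrated with a minimal, self-contained 
sketch (simplified stand-ins for ScalaTest's `TestCanceledException` and test 
runner; `depsAvailable` is a hypothetical flag, not the real 
`PythonTestDepsChecker`):
    
    ```scala
    // Simplified stand-in for ScalaTest's TestCanceledException.
    final class TestCanceled(msg: String) extends Exception(msg)
    
    // Like ScalaTest's `assume`: cancel (skip) the test when a
    // precondition is unmet, instead of letting assertions fail later.
    def assume(condition: Boolean): Unit =
      if (!condition) throw new TestCanceled("assumption not met")
    
    // Toy runner: reports canceled tests separately from failed ones.
    def runTest(name: String)(body: => Unit): String =
      try { body; s"$name: passed" }
      catch {
        case _: TestCanceled => s"$name: canceled" // skipped, not failed
        case e: Exception    => s"$name: FAILED (${e.getMessage})"
      }
    
    // Analogous to assume(PythonTestDepsChecker.isConnectDepsAvailable):
    val depsAvailable = false // pretend the PyConnect deps are missing
    println(runTest("reading internal datasets") {
      assume(depsAvailable)
      sys.error("would run the real pipeline assertions here")
    })
    ```
    
    With `depsAvailable = false` the test is reported as canceled; without the 
`assume`, the `sys.error` would surface as a test failure.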
    
    ### Why are the changes needed?
    Enhances the robustness of the test cases. Prior to this change, running 
`build/sbt "connect/testOnly 
org.apache.spark.sql.connect.pipelines.PythonPipelineSuite"` without the 
PyConnect dependencies failed with:
    
    ```
    [info] - reading internal datasets outside query function that trigger 
eager analysis or execution will fail (spark.sql("SELECT * FROM src")) *** 
FAILED *** (4 milliseconds)
    [info]   
"org.apache.spark.sql.connect.PythonTestDepsChecker.isConnectDepsAvailable was 
false" did not contain "TABLE_OR_VIEW_NOT_FOUND" (PythonPipelineSuite.scala:546)
    [info]   org.scalatest.exceptions.TestFailedException:
    [info]   at 
org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
    [info]   at 
org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
    [info]   at 
org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1231)
    [info]   at 
org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:1295)
    [info]   at 
org.apache.spark.sql.connect.pipelines.PythonPipelineSuite.$anonfun$new$43(PythonPipelineSuite.scala:546)
    [info]   at 
org.apache.spark.sql.connect.pipelines.PythonPipelineSuite.$anonfun$new$43$adapted(PythonPipelineSuite.scala:532)
    [info]   at 
org.apache.spark.SparkFunSuite.$anonfun$gridTest$2(SparkFunSuite.scala:241)
    [info]   at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    [info]   at 
org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
    [info]   at 
org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
    [info]   at 
org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
    [info]   at 
org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
    ...
    [info] *** 24 TESTS FAILED ***
    [error] Failed tests:
    [error]         org.apache.spark.sql.connect.pipelines.PythonPipelineSuite
    [error] (connect / Test / testOnly) sbt.TestsFailedException: Tests 
unsuccessful
    ```
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    - Pass GitHub Actions
    - Manually verified that the relevant tests no longer fail when the 
PyConnect dependencies are missing.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No
    
    Closes #53088 from LuciferYang/SPARK-54375.
    
    Authored-by: yangjie01 <[email protected]>
    Signed-off-by: yangjie01 <[email protected]>
    (cherry picked from commit 722bcc0f0d15245a39fae62c0c1c764e4b6a02f8)
    Signed-off-by: yangjie01 <[email protected]>
---
 .../org/apache/spark/sql/connect/pipelines/PythonPipelineSuite.scala   | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/sql/connect/server/src/test/scala/org/apache/spark/sql/connect/pipelines/PythonPipelineSuite.scala
 
b/sql/connect/server/src/test/scala/org/apache/spark/sql/connect/pipelines/PythonPipelineSuite.scala
index 45d8c7b18b84..826e2338589d 100644
--- 
a/sql/connect/server/src/test/scala/org/apache/spark/sql/connect/pipelines/PythonPipelineSuite.scala
+++ 
b/sql/connect/server/src/test/scala/org/apache/spark/sql/connect/pipelines/PythonPipelineSuite.scala
@@ -530,6 +530,7 @@ class PythonPipelineSuite
       "eager analysis or execution will fail")(
     Seq("""spark.sql("SELECT * FROM src")""", 
"""spark.read.table("src").collect()""")) {
     command =>
+      assume(PythonTestDepsChecker.isConnectDepsAvailable)
       val ex = intercept[RuntimeException] {
         buildGraph(s"""
         |@dp.materialized_view
@@ -1047,6 +1048,7 @@ class PythonPipelineSuite
 
   gridTest("Unsupported SQL command outside query function should result in a 
failure")(
     unsupportedSqlCommandList) { unsupportedSqlCommand =>
+    assume(PythonTestDepsChecker.isConnectDepsAvailable)
     val ex = intercept[RuntimeException] {
       buildGraph(s"""
         |spark.sql("$unsupportedSqlCommand")
@@ -1061,6 +1063,7 @@ class PythonPipelineSuite
 
   gridTest("Unsupported SQL command inside query function should result in a 
failure")(
     unsupportedSqlCommandList) { unsupportedSqlCommand =>
+    assume(PythonTestDepsChecker.isConnectDepsAvailable)
     val ex = intercept[RuntimeException] {
       buildGraph(s"""
         |@dp.materialized_view()


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
