This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 58c9b5ac8ff4 [SPARK-46683][SQL][TESTS][FOLLOW-UP] Fix typo, use queries in partition set
58c9b5ac8ff4 is described below

commit 58c9b5ac8ff4aec1ee5b2ae7e0d5702df2ad273c
Author: Andy Lam <andy....@databricks.com>
AuthorDate: Fri Feb 2 12:23:19 2024 +0900

    [SPARK-46683][SQL][TESTS][FOLLOW-UP] Fix typo, use queries in partition set
    
    ### What changes were proposed in this pull request?
    
    Fix a typo in GeneratedSubquerySuite, where it used the set of ALL generated queries instead of the partitioned subset.
    
    ### Why are the changes needed?
    
    Each partitioned set corresponds to a test name, so each test should run only the queries in its own partition.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    NA.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #44956 from andylam-db/generated-subqueries-fix.
    
    Authored-by: Andy Lam <andy....@databricks.com>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 .../org/apache/spark/sql/jdbc/querytest/GeneratedSubquerySuite.scala    | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/querytest/GeneratedSubquerySuite.scala b/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/querytest/GeneratedSubquerySuite.scala
index 23d61c532899..ff1afbd16865 100644
--- a/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/querytest/GeneratedSubquerySuite.scala
+++ b/connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/querytest/GeneratedSubquerySuite.scala
@@ -418,7 +418,7 @@ class GeneratedSubquerySuite extends DockerJDBCIntegrationSuite with QueryGenera
         // Enable ANSI so that { NULL IN { <empty> } } behavior is correct in Spark.
         localSparkSession.conf.set(SQLConf.ANSI_ENABLED.key, true)
 
-        val generatedQueries = generatedQuerySpecs.map(_.query).toSeq
+        val generatedQueries = querySpec.map(_.query).toSeq
         // Randomize query order because we are taking a subset of queries.
         val shuffledQueries = scala.util.Random.shuffle(generatedQueries)
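To illustrate the fix, here is a minimal standalone sketch of the bug pattern, not the suite's actual code: the names `QuerySpec`, `allQuerySpecs`, and `queriesFor` are hypothetical. Tests are keyed by a partition of the generated query specs; mapping over the full set instead of the partition would make every test run every query.

```scala
// Hypothetical model of the bug fixed above: queries are partitioned by
// subquery kind, and each test should draw only from its own partition.
object PartitionSketch {
  final case class QuerySpec(kind: String, query: String)

  // Illustrative stand-in for the full set of generated query specs.
  val allQuerySpecs: Seq[QuerySpec] = Seq(
    QuerySpec("scalar", "SELECT (SELECT MAX(a) FROM t2)"),
    QuerySpec("in", "SELECT * FROM t1 WHERE a IN (SELECT a FROM t2)"),
    QuerySpec("exists", "SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2)"))

  def queriesFor(kind: String): Seq[String] = {
    val partition = allQuerySpecs.filter(_.kind == kind)
    // The bug mapped over allQuerySpecs here, so a test named after one
    // kind ran every query; the fix maps over the partition instead.
    partition.map(_.query)
  }

  def main(args: Array[String]): Unit = {
    // Only the single "in" query belongs to the "in" partition.
    println(queriesFor("in").size) // prints 1
  }
}
```

The subsequent `Random.shuffle` call in the suite matters because only a subset of the partition's queries is executed, and shuffling avoids always testing the same prefix.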
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
