Github user bersprockets commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20705#discussion_r173332327
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala ---
    @@ -852,52 +846,52 @@ class MetastoreDataSourcesSuite extends QueryTest with SQLTestUtils with TestHiv
           (from to to).map(i => i -> s"str$i").toDF("c1", "c2")
         }
     
    -    withTable("insertParquet") {
    -      createDF(0, 9).write.format("parquet").saveAsTable("insertParquet")
    +    withTable("t") {
    +      createDF(0, 9).write.saveAsTable("t")
           checkAnswer(
    -        sql("SELECT p.c1, p.c2 FROM insertParquet p WHERE p.c1 > 5"),
    +        sql("SELECT p.c1, p.c2 FROM t p WHERE p.c1 > 5"),
             (6 to 9).map(i => Row(i, s"str$i")))
     
           intercept[AnalysisException] {
    -        createDF(10, 19).write.format("parquet").saveAsTable("insertParquet")
    +        createDF(10, 19).write.saveAsTable("t")
           }
     
    -      createDF(10, 19).write.mode(SaveMode.Append).format("parquet").saveAsTable("insertParquet")
    +      createDF(10, 19).write.mode(SaveMode.Append).saveAsTable("t")
           checkAnswer(
    -        sql("SELECT p.c1, p.c2 FROM insertParquet p WHERE p.c1 > 5"),
    +        sql("SELECT p.c1, p.c2 FROM t p WHERE p.c1 > 5"),
             (6 to 19).map(i => Row(i, s"str$i")))
     
    -      createDF(20, 29).write.mode(SaveMode.Append).format("parquet").saveAsTable("insertParquet")
    +      createDF(20, 29).write.mode(SaveMode.Append).saveAsTable("t")
           checkAnswer(
    -        sql("SELECT p.c1, c2 FROM insertParquet p WHERE p.c1 > 5 AND p.c1 
< 25"),
    +        sql("SELECT p.c1, c2 FROM t p WHERE p.c1 > 5 AND p.c1 < 25"),
             (6 to 24).map(i => Row(i, s"str$i")))
     
           intercept[AnalysisException] {
    -        createDF(30, 39).write.saveAsTable("insertParquet")
    +        createDF(30, 39).write.saveAsTable("t")
           }
     
    -      createDF(30, 39).write.mode(SaveMode.Append).saveAsTable("insertParquet")
    +      createDF(30, 39).write.mode(SaveMode.Append).saveAsTable("t")
           checkAnswer(
    -        sql("SELECT p.c1, c2 FROM insertParquet p WHERE p.c1 > 5 AND p.c1 
< 35"),
    +        sql("SELECT p.c1, c2 FROM t p WHERE p.c1 > 5 AND p.c1 < 35"),
             (6 to 34).map(i => Row(i, s"str$i")))
     
    -      createDF(40, 49).write.mode(SaveMode.Append).insertInto("insertParquet")
    +      createDF(40, 49).write.mode(SaveMode.Append).insertInto("t")
           checkAnswer(
    -        sql("SELECT p.c1, c2 FROM insertParquet p WHERE p.c1 > 5 AND p.c1 
< 45"),
    +        sql("SELECT p.c1, c2 FROM t p WHERE p.c1 > 5 AND p.c1 < 45"),
             (6 to 44).map(i => Row(i, s"str$i")))
     
    -      createDF(50, 59).write.mode(SaveMode.Overwrite).saveAsTable("insertParquet")
    +      createDF(50, 59).write.mode(SaveMode.Overwrite).saveAsTable("t")
           checkAnswer(
    -        sql("SELECT p.c1, c2 FROM insertParquet p WHERE p.c1 > 51 AND p.c1 
< 55"),
    +        sql("SELECT p.c1, c2 FROM t p WHERE p.c1 > 51 AND p.c1 < 55"),
             (52 to 54).map(i => Row(i, s"str$i")))
    -      createDF(60, 69).write.mode(SaveMode.Ignore).saveAsTable("insertParquet")
    +      createDF(60, 69).write.mode(SaveMode.Ignore).saveAsTable("t")
           checkAnswer(
    -        sql("SELECT p.c1, c2 FROM insertParquet p"),
    +        sql("SELECT p.c1, c2 FROM t p"),
             (50 to 59).map(i => Row(i, s"str$i")))
     
    -      createDF(70, 79).write.mode(SaveMode.Overwrite).insertInto("insertParquet")
    +      createDF(70, 79).write.mode(SaveMode.Overwrite).insertInto("t")
           checkAnswer(
    -        sql("SELECT p.c1, c2 FROM insertParquet p"),
    +        sql("SELECT p.c1, c2 FROM t p"),
             (70 to 79).map(i => Row(i, s"str$i")))
    --- End diff --
    
    I'm curious why the test named "SPARK-8156:create table to specific database by 'use dbname'" still has a hard-coded format of parquet. Is it perhaps testing functionality that is orthogonal to the format?

    I changed the hard-coded format to json, orc, and csv, and each time that test passed.
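
    In sketch form, the experiment was just swapping that format string and re-running. This is not the actual test body: the table and database names are made up, database setup is elided, and createDF/checkAnswer are the helpers already used in this suite:

      Seq("parquet", "json", "orc", "csv").foreach { fmt =>
        withTable("tbl_in_db") {              // hypothetical table name
          sql("USE testdb")                   // the test exercises 'use dbname'; the db name here is illustrative
          createDF(0, 9).write.format(fmt).saveAsTable("tbl_in_db")
          checkAnswer(
            sql("SELECT c1, c2 FROM tbl_in_db"),
            (0 to 9).map(i => Row(i, s"str$i")))
        }
      }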
    
    Similarly with:
      Suite: org.apache.spark.sql.sources.SaveLoadSuite
      Test: SPARK-23459: Improve error message when specified unknown column in partition columns
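
    That check also looks format-independent to me: it only asserts on the analysis error raised for an unknown partition column. A minimal sketch of that shape (assuming the suite's usual test implicits and withTempDir; the names below are illustrative, not the suite's actual code):

      withTempDir { dir =>
        val e = intercept[AnalysisException] {
          Seq((1, "a")).toDF("i", "s").write
            .format("json")                   // swapped in for the hard-coded parquet
            .partitionBy("unknownCol")        // column not present in the DataFrame
            .save(dir.getCanonicalPath)
        }
        assert(e.getMessage.contains("unknownCol"))
      }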

