cloud-fan commented on a change in pull request #30482:
URL: https://github.com/apache/spark/pull/30482#discussion_r529465696



##########
File path: sql/core/src/test/scala/org/apache/spark/sql/connector/AlterTablePartitionV2SQLSuite.scala
##########
@@ -243,4 +243,22 @@ class AlterTablePartitionV2SQLSuite extends DatasourceV2SQLBase {
       assert(!partTable.partitionExists(expectedPartition))
     }
   }
+
+  test("SPARK-33529: handle __HIVE_DEFAULT_PARTITION__") {
+    val t = "testpart.ns1.ns2.tbl"
+    withTable(t) {
+      sql(s"CREATE TABLE $t (part0 string) USING foo PARTITIONED BY (part0)")
+      val partTable = catalog("testpart")
+        .asTableCatalog
+        .loadTable(Identifier.of(Array("ns1", "ns2"), "tbl"))
+        .asPartitionable
+      val expectedPartition = InternalRow.fromSeq(Seq[Any](null))
+      assert(!partTable.partitionExists(expectedPartition))
+      val partSpec = "PARTITION (part0 = '__HIVE_DEFAULT_PARTITION__')"

Review comment:
       I'm not sure about it. It's more of a Hive-specific thing, and we should let the v2 implementation decide how to handle null partition values. These are internal details and shouldn't be exposed to end users.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


