stevenzwu commented on code in PR #15512:
URL: https://github.com/apache/iceberg/pull/15512#discussion_r3040898639


##########
spark/v4.1/spark/src/test/java/org/apache/iceberg/spark/sql/TestDeleteFrom.java:
##########
@@ -185,4 +187,32 @@ public void testDeleteFromTablePartitionedByVarbinary() {
         ImmutableList.of(row(1L, new byte[] {-29, -68, -47})),
         sql("SELECT * FROM %s where data = X'e3bcd1'", tableName));
   }
+
+  @TestTemplate
+  public void testDeleteWithWapBranch() throws NoSuchTableException {
+    sql(
+        "CREATE TABLE %s (id bigint, data string) USING iceberg TBLPROPERTIES ('%s' = 'true')",
+        tableName, TableProperties.WRITE_AUDIT_PUBLISH_ENABLED);
+
+    spark.conf().set(SparkSQLProperties.WAP_BRANCH, "dev1");
+    try {
+      // all rows go into one file on the WAP branch; main stays empty
+      List<SimpleRecord> records =
+          Lists.newArrayList(
+              new SimpleRecord(1, "a"), new SimpleRecord(2, "b"), new SimpleRecord(3, "c"));
+      Dataset<Row> df = spark.createDataFrame(records, SimpleRecord.class);
+      df.coalesce(1).writeTo(tableName).append();
+
+      // delete a subset of rows - canDeleteWhere and deleteWhere must both
+      // resolve the WAP branch so they scan and commit to the same branch
+      sql("DELETE FROM %s WHERE id = 1", tableName);
+
+      assertEquals(

Review Comment:
   use assertj
   
   ```
   assertThat(sql("SELECT * FROM %s VERSION AS OF 'dev1' ORDER BY id", tableName))
       .containsExactlyInAnyOrder(...)
   ```



##########
spark/v4.1/spark/src/test/java/org/apache/iceberg/spark/sql/TestDeleteFrom.java:
##########
@@ -185,4 +187,32 @@ public void testDeleteFromTablePartitionedByVarbinary() {
         ImmutableList.of(row(1L, new byte[] {-29, -68, -47})),
         sql("SELECT * FROM %s where data = X'e3bcd1'", tableName));
   }
+
+  @TestTemplate
+  public void testDeleteWithWapBranch() throws NoSuchTableException {
+    sql(
+        "CREATE TABLE %s (id bigint, data string) USING iceberg TBLPROPERTIES ('%s' = 'true')",
+        tableName, TableProperties.WRITE_AUDIT_PUBLISH_ENABLED);
+
+    spark.conf().set(SparkSQLProperties.WAP_BRANCH, "dev1");
+    try {
+      // all rows go into one file on the WAP branch; main stays empty

Review Comment:
   can we also insert some rows/files into the main branch first? ideally including a row that matches the delete predicate `id = 1`.
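   
   A rough sketch of what that setup might look like (hypothetical; assumes the test's existing `sql` helper, the `row(...)` fixture, and AssertJ; not tied to the actual PR code):
   
   ```
   // hypothetical: seed main before enabling the WAP branch, including a row
   // that matches the delete predicate id = 1
   sql("INSERT INTO %s VALUES (1, 'main-a'), (4, 'main-d')", tableName);
   
   spark.conf().set(SparkSQLProperties.WAP_BRANCH, "dev1");
   try {
     // ... append to the WAP branch and run the DELETE as before ...
   
     // main must be untouched: the matching id = 1 row is still present there
     assertThat(sql("SELECT * FROM %s VERSION AS OF 'main' ORDER BY id", tableName))
         .contains(row(1L, "main-a"));
   } finally {
     spark.conf().unset(SparkSQLProperties.WAP_BRANCH);
   }
   ```
   
   This would also exercise the case where the delete must resolve the WAP branch rather than main when both contain matching rows.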



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

