danielhumanmod commented on code in PR #518:
URL: https://github.com/apache/incubator-xtable/pull/518#discussion_r1767735273


##########
xtable-core/src/test/java/org/apache/xtable/ITConversionController.java:
##########
@@ -261,6 +281,72 @@ public void testVariousOperations(
     }
   }
 
+  // This test is a simplified version of testVariousOperations; the
+  // difference is that the Iceberg data source contains UUID columns
+  @ParameterizedTest
+  @MethodSource("generateTestParametersForUUID")
+  public void testVariousOperationsWithUUID(
+      String sourceTableFormat,
+      List<String> targetTableFormats,
+      SyncMode syncMode,
+      boolean isPartitioned) {
+    String tableName = getTableName();
+    ConversionController conversionController =
+        new ConversionController(jsc.hadoopConfiguration());
+    String partitionConfig = null;
+    if (isPartitioned) {
+      partitionConfig = "level:VALUE";
+    }
+    ConversionSourceProvider<?> conversionSourceProvider =
+        getConversionSourceProvider(sourceTableFormat);
+    List<?> insertRecords;
+    try (GenericTable table =
+        GenericTable.getInstanceWithUUIDColumns(
+            tableName, tempDir, sparkSession, jsc, sourceTableFormat, isPartitioned)) {
+      insertRecords = table.insertRows(100);
+
+      ConversionConfig conversionConfig =
+          getTableSyncConfig(
+              sourceTableFormat,
+              syncMode,
+              tableName,
+              table,
+              targetTableFormats,
+              partitionConfig,
+              null);
+      conversionController.sync(conversionConfig, conversionSourceProvider);
+      checkDatasetEquivalence(sourceTableFormat, table, targetTableFormats, 100);
+
+      // Upsert some records and sync again
+      table.upsertRows(insertRecords.subList(0, 20));
+      conversionController.sync(conversionConfig, conversionSourceProvider);
+      checkDatasetEquivalence(sourceTableFormat, table, targetTableFormats, 100);
+
+      table.deleteRows(insertRecords.subList(30, 50));
+      conversionController.sync(conversionConfig, conversionSourceProvider);
+      checkDatasetEquivalence(sourceTableFormat, table, targetTableFormats, 80);
+      checkDatasetEquivalenceWithFilter(
+          sourceTableFormat, table, targetTableFormats, table.getFilterQuery());
+    }
+
+    try (GenericTable tableWithUpdatedSchema =

Review Comment:
   > I think we can drop this as well. In the other test, the second try-with-resources block is there because the source table's schema changes, so it exercises schema evolution, but that is not being tested here. Let me know if I am missing something.
   
   Thanks for sharing this context, Tim. Yeah, this part can be removed.
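
   For anyone skimming why the second block is removable: try-with-resources closes the table when the first block exits, so a second block only matters if the test reopens the source, which `testVariousOperations` does to exercise schema evolution. A minimal, self-contained sketch of that scoping (the `Resource` class below is a hypothetical stand-in for `GenericTable`, not xtable code):

   ```java
   // Sketch: try-with-resources closes the resource at block exit, so a
   // second block is only needed when a fresh resource (e.g. a table
   // reopened with an evolved schema) must be created.
   public class TryWithResourcesDemo {
       static final StringBuilder log = new StringBuilder();

       // Hypothetical stand-in for GenericTable; AutoCloseable like the real one.
       static class Resource implements AutoCloseable {
           Resource(String name) { log.append("open:").append(name).append(';'); }
           @Override public void close() { log.append("close;"); }
       }

       static String run() {
           log.setLength(0); // reset so repeated calls are deterministic
           try (Resource table = new Resource("table-v1")) {
               // inserts/upserts/deletes and sync calls would happen here
               log.append("sync;");
           }
           // 'table' is already closed here; a second try-with-resources is
           // only needed when the test reopens the source with a new schema.
           return log.toString();
       }

       public static void main(String[] args) {
           System.out.println(run()); // prints open:table-v1;sync;close;
       }
   }
   ```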



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
