ahmedabu98 commented on code in PR #38061:
URL: https://github.com/apache/beam/pull/38061#discussion_r3173275248


##########
sdks/java/io/iceberg/src/test/java/org/apache/beam/sdk/io/iceberg/catalog/IcebergCatalogBaseIT.java:
##########
@@ -687,6 +687,32 @@ public void testWriteToPartitionedTable() throws IOException {
         returnedRecords, containsInAnyOrder(inputRows.stream().map(RECORD_FUNC::apply).toArray()));
   }
 
+  @Test
+  public void testWriteToPartitionedTableWithHashDistribution() throws IOException {
+    Map<String, Object> config = new HashMap<>(managedIcebergConfig(tableId()));
+    int truncLength = "value_x".length();
+    List<String> partitionFields =
+        Arrays.asList("bool_field", "hour(datetime)", "truncate(str, " + truncLength + ")");
+    config.put("partition_fields", partitionFields);
+    config.put("distribution_mode", "hash");
+    PCollection<Row> input = pipeline.apply(Create.of(inputRows)).setRowSchema(BEAM_SCHEMA);
+    input.apply(Managed.write(ICEBERG).withConfig(config));
+    pipeline.run().waitUntilFinish();
+
+    // Read back and check records are correct

Review Comment:
   It's hard to test with the direct runner because its autosharding implementation is not smart: sometimes it creates more shards than necessary.
   
   I did add a test for grouping minus autosharding, though, to validate that the number of files created == the number of partitions.
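   For context, the "files == partitions" check reduces to counting distinct partition keys: under hash distribution, rows with the same partition key are routed to the same writer. A minimal plain-Java sketch of that invariant (no Beam or Iceberg dependency; the `PartitionKey` record and `keyFor` helper are hypothetical stand-ins for the `bool_field` / `hour(datetime)` / `truncate(str, n)` spec used in the test above):
   
   ```java
   import java.util.List;
   
   public class PartitionCountSketch {
       // Hypothetical partition key mirroring (bool_field, hour(datetime), truncate(str, n))
       record PartitionKey(boolean boolField, int hourBucket, String strPrefix) {}
   
       static PartitionKey keyFor(boolean b, int epochHour, String str, int truncLength) {
           // truncate(str, n) keeps the first n characters, like Iceberg's truncate transform
           return new PartitionKey(b, epochHour, str.substring(0, Math.min(truncLength, str.length())));
       }
   
       public static void main(String[] args) {
           int truncLength = "value_x".length(); // 7, matching the test's truncate width
           List<PartitionKey> keys = List.of(
               keyFor(true, 100, "value_a_long", truncLength),
               keyFor(true, 100, "value_a_longer", truncLength), // same partition as above
               keyFor(false, 100, "value_b_long", truncLength),
               keyFor(true, 101, "value_a_long", truncLength));
           // With hash distribution, each distinct key lands on exactly one writer,
           // so the expected file count equals the number of distinct keys.
           long expectedFiles = keys.stream().distinct().count();
           System.out.println(expectedFiles); // prints 3
       }
   }
   ```
   
   This is only a sketch of the grouping invariant; the real assertion in the IT would count data files in the table's metadata after the write.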



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
