difin commented on code in PR #4477:
URL: https://github.com/apache/hive/pull/4477#discussion_r1265838628


##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergSerDe.java:
##########
@@ -148,6 +148,14 @@ public void initialize(@Nullable Configuration configuration, Properties serDePr
     // TODO: remove once we have both Fanout and ClusteredWriter available: HIVE-25948
     HiveConf.setIntVar(configuration, HiveConf.ConfVars.HIVEOPTSORTDYNAMICPARTITIONTHRESHOLD, 1);
     HiveConf.setVar(configuration, HiveConf.ConfVars.DYNAMICPARTITIONINGMODE, "nonstrict");
+
+    Context.Operation operation = HiveCustomStorageHandlerUtils.getWriteOperation(configuration,
+            serDeProperties.getProperty(Catalogs.NAME));
+
+    if (operation != null) {
+      HiveConf.setFloatVar(configuration, HiveConf.ConfVars.TEZ_MAX_PARTITION_FACTOR, 1f);

Review Comment:
   Hi @okumin, 
   Thank you for all the info!
   I tried a DELETE query against your test table with `hive.tez.auto.reducer.parallelism.min.threshold=0.0`, and it still used 2 reducers.
   While debugging, I also noticed that `GenTezUtils.createReduceWork()` runs before `HiveIcebergSerDe.initialize()`, which is where I set the max partition factor to 1, so by that point the reduce work has already been created. That probably explains why my change has no effect.
   What do you think: could I instead set the max partition factor to 1.0 in `GenTezUtils.createReduceWork()` when the operation is a write operation on an Iceberg table?
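   To make the question concrete, here is a minimal, self-contained sketch of the guard I have in mind. The `Operation` enum, the `maybeCapPartitionFactor` helper, and the use of `java.util.Properties` as a stand-in for `HiveConf` are all simplifications for illustration, not the real Hive APIs:

```java
import java.util.Properties;

public class ReduceWorkSketch {
  // Hypothetical stand-in for Context.Operation; the real enum has more values.
  enum Operation { DELETE, UPDATE, MERGE, OTHER }

  static final String TEZ_MAX_PARTITION_FACTOR = "hive.tez.max.partition.factor";

  // Sketch of the proposed guard: when the query is a write operation on an
  // Iceberg table, pin the max partition factor to 1.0 so auto reducer
  // parallelism cannot scale the reducer count up.
  static void maybeCapPartitionFactor(Properties conf, Operation op, boolean isIcebergTable) {
    if (isIcebergTable && op != Operation.OTHER) {
      conf.setProperty(TEZ_MAX_PARTITION_FACTOR, "1.0");
    }
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    maybeCapPartitionFactor(conf, Operation.DELETE, true);
    System.out.println(conf.getProperty(TEZ_MAX_PARTITION_FACTOR)); // → 1.0
  }
}
```

   The idea is that the check would run at plan-generation time, before the reduce work is created, instead of in `HiveIcebergSerDe.initialize()`, which runs too late.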



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

