pvary commented on a change in pull request #3131:
URL: https://github.com/apache/hive/pull/3131#discussion_r840373463



##########
File path: iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputFormat.java
##########
@@ -83,9 +83,19 @@ private static HiveIcebergRecordWriter writer(JobConf jc) {
         .operationId(operationId)
         .build();
     String tableName = jc.get(Catalogs.NAME);
-    HiveFileWriterFactory hfwf = new HiveFileWriterFactory(table, fileFormat, schema,
-        null, fileFormat, null, null, null, null);
-    return new HiveIcebergRecordWriter(schema, spec, fileFormat,
-        hfwf, outputFileFactory, io, targetFileSize, taskAttemptID, tableName);
+    if (HiveIcebergStorageHandler.isDelete(jc, tableName)) {
+      // TODO: remove this Avro-specific logic once we have Avro writer function ready
+      // for now, this means that Avro delete files will not contain the 'row' column
+      Schema positionDeleteRowSchema = fileFormat == FileFormat.AVRO ? null : schema;
+      HiveFileWriterFactory hfwf = new HiveFileWriterFactory(table, fileFormat, schema,
+          null, fileFormat, null, null, null, positionDeleteRowSchema);
+      return new HiveIcebergDeleteWriter(hfwf, schema, spec, fileFormat, outputFileFactory, io, targetFileSize,
+          taskAttemptID, tableName);

Review comment:
       Why do we need a different `HiveFileWriterFactory` for the different cases? I thought that the factory hides that abstraction.
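       For context on the question: the Avro special case in the diff is decided at the call site and passed into the factory, whereas it could in principle be hidden behind the factory boundary. A minimal standalone sketch of that idea, using hypothetical simplified stand-ins (`FileFormat`, `Schema`, `FileWriterFactorySketch` are NOT the real Iceberg/Hive API, just placeholders for illustration):

```java
// Hypothetical, simplified stand-ins for the types discussed above.
// This sketches how the per-format position-delete row schema choice
// could live inside a factory instead of at each call site.
enum FileFormat { AVRO, PARQUET, ORC }

final class Schema {
    final String name;
    Schema(String name) { this.name = name; }
}

final class FileWriterFactorySketch {
    private final FileFormat format;
    private final Schema tableSchema;

    FileWriterFactorySketch(FileFormat format, Schema tableSchema) {
        this.format = format;
        this.tableSchema = tableSchema;
    }

    // The Avro special case from the diff, moved behind the factory boundary:
    // until an Avro writer function is ready, Avro position-delete files carry
    // no 'row' column, so the row schema is null for AVRO only.
    Schema positionDeleteRowSchema() {
        return format == FileFormat.AVRO ? null : tableSchema;
    }
}

public class Main {
    public static void main(String[] args) {
        Schema schema = new Schema("tbl");
        // Callers no longer branch on the file format themselves.
        System.out.println(
            new FileWriterFactorySketch(FileFormat.AVRO, schema).positionDeleteRowSchema() == null);
        System.out.println(
            new FileWriterFactorySketch(FileFormat.PARQUET, schema).positionDeleteRowSchema() != null);
    }
}
```

       With the choice encapsulated this way, one factory type serves both the insert and delete paths, which is the abstraction boundary the comment asks about.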




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


