pvary commented on a change in pull request #2701:
URL: https://github.com/apache/hive/pull/2701#discussion_r724391313



##########
File path: iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergRecordWriter.java
##########
@@ -83,7 +102,29 @@ protected PartitionKey partition(Record row) {
 
   @Override
   public void write(Writable row) throws IOException {
-    super.write(((Container<Record>) row).get());
+    if (!isDelete) {
+      super.write(((Container<Record>) row).get());
+    } else {
+      Record rec = ((Container<Record>) row).get();
+      // rebuild a record matching the table schema by dropping the first two fields of the incoming record
+      Record actualRow = GenericRecord.create(schema);
+      for (int i = 2; i < rec.size(); ++i) {
+        actualRow.set(i - 2, rec.get(i));
+      }
+      if (!spec.isUnpartitioned()) {
+        currentKey.partition(actualRow);
+      }
+      // for now, we always create a Parquet delete writer
+      PositionDeleteWriter<Record> deleteWriter =
+          appender.newPosDeleteWriter(fileFactory.newOutputFile(currentKey), FileFormat.PARQUET, currentKey);
+      // TODO: refactor not to write 1 delete file per row (use some rolling positional delete writer)

Review comment:
       We should open the writer in the constructor instead of creating a new `PositionDeleteWriter` for every incoming row.
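
       A minimal sketch of that refactor (my assumption of how it could look, not the actual patch): the `appender`, `fileFactory`, and `currentKey` names mirror the diff above, while the class skeleton, `writeDelete`, and its `filePath`/`position` parameters are hypothetical.

```java
import java.io.IOException;
import org.apache.iceberg.FileFormat;
import org.apache.iceberg.PartitionKey;
import org.apache.iceberg.data.Record;
import org.apache.iceberg.deletes.PositionDeleteWriter;
import org.apache.iceberg.io.FileAppenderFactory;
import org.apache.iceberg.io.OutputFileFactory;

class PositionDeleteWriterSketch {
  private final PositionDeleteWriter<Record> deleteWriter;

  PositionDeleteWriterSketch(FileAppenderFactory<Record> appender,
      OutputFileFactory fileFactory, PartitionKey currentKey) {
    // Open a single delete writer up front instead of one per row in write()
    this.deleteWriter = appender.newPosDeleteWriter(
        fileFactory.newOutputFile(currentKey), FileFormat.PARQUET, currentKey);
  }

  void writeDelete(CharSequence filePath, long position) {
    // Append one position delete; the writer and its output file are reused across rows
    deleteWriter.delete(filePath, position);
  }

  void close() throws IOException {
    // Closing the writer finalizes the single delete file
    deleteWriter.close();
  }
}
```

       This would produce one delete file per task rather than one per row. Note that for partitioned tables the writer would still need to roll over per partition key, which the rolling positional delete writer mentioned in the TODO could address behind the same interface.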




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


