szehon-ho commented on code in PR #4812:
URL: https://github.com/apache/iceberg/pull/4812#discussion_r923950980
##########
core/src/main/java/org/apache/iceberg/MetadataColumns.java:
##########
@@ -53,6 +53,8 @@ private MetadataColumns() {
public static final String DELETE_FILE_ROW_FIELD_NAME = "row";
public static final int DELETE_FILE_ROW_FIELD_ID = Integer.MAX_VALUE - 103;
public static final String DELETE_FILE_ROW_DOC = "Deleted row values";
+ public static final int POSITION_DELETE_TABLE_PARTITION_FIELD_ID = Integer.MAX_VALUE - 104;
Review Comment:
Hi guys, I hit the first issue: because the code for PositionDeletesTable/
DataTask lives in the core module, there is currently no way to access the
Parquet and ORC file readers needed to implement DataTask::rows(). Spark could
pass in a positionDeleteReader that returns a CloseableIterable<Row>, but then
it seems a bit silly to use DataTask at all (except that DataTask can still add
the static columns like partition and partition_spec_id).
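To make that concrete, here is a minimal sketch of the injection approach,
assuming the engine supplies the format-aware reader; PositionDeletesRowTask
and readerFn are hypothetical names for illustration, not code from this PR:

```java
import java.util.function.Function;
import org.apache.iceberg.DeleteFile;
import org.apache.iceberg.StructLike;
import org.apache.iceberg.io.CloseableIterable;
import org.apache.iceberg.io.InputFile;

// Hypothetical sketch: the engine (e.g. Spark) injects the file-format-aware
// reader as a function, so core never needs a Parquet/ORC dependency.
class PositionDeletesRowTask {
  // deleteFile.partition() would supply the static partition values
  private final DeleteFile deleteFile;
  private final InputFile inputFile;
  private final Function<InputFile, CloseableIterable<StructLike>> readerFn;

  PositionDeletesRowTask(
      DeleteFile deleteFile,
      InputFile inputFile,
      Function<InputFile, CloseableIterable<StructLike>> readerFn) {
    this.deleteFile = deleteFile;
    this.inputFile = inputFile;
    this.readerFn = readerFn;
  }

  // Analogous to DataTask::rows(): decoding is delegated to the injected
  // reader; the task itself would only be responsible for appending the
  // constant metadata columns (partition, partition_spec_id) to each row.
  CloseableIterable<StructLike> rows() {
    return readerFn.apply(inputFile);
  }
}
```

Under that split, the only thing DataTask still buys us is a place to attach
those static columns, which is why it feels like overkill.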
Another thing that needs to be passed in is encryption (as in
DeleteFilter::inputFile()), which seems to be handled differently in Spark and
Flink; the sketch below shows the decrypt-and-open pattern I mean. Let me know
if you have any thoughts.
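For reference on the encryption point, this is roughly the decrypt-and-open
step each engine performs today before reading a delete file. The wrapper
class and method here are hypothetical, but FileIO, EncryptionManager, and
EncryptedFiles are existing Iceberg APIs:

```java
import org.apache.iceberg.DeleteFile;
import org.apache.iceberg.encryption.EncryptedFiles;
import org.apache.iceberg.encryption.EncryptionManager;
import org.apache.iceberg.io.FileIO;
import org.apache.iceberg.io.InputFile;

// Hypothetical helper showing the pattern behind DeleteFilter::inputFile():
// pair the raw file with its key metadata, then let the table's
// EncryptionManager produce a decrypted, readable InputFile.
class DeleteFileOpener {
  static InputFile open(DeleteFile deleteFile, FileIO io, EncryptionManager encryption) {
    return encryption.decrypt(
        EncryptedFiles.encryptedInput(
            io.newInputFile(deleteFile.path().toString()), deleteFile.keyMetadata()));
  }
}
```

Whatever shape the core API takes, it would need the FileIO and
EncryptionManager (or an already-decrypted InputFile) handed in from the
engine side.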