Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-26 via GitHub


nsivabalan merged PR #10913:
URL: https://github.com/apache/hudi/pull/10913


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-26 via GitHub


nsivabalan commented on code in PR #10913:
URL: https://github.com/apache/hudi/pull/10913#discussion_r1540365760


##
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataPayload.java:
##
@@ -550,27 +552,34 @@ private Map combineFileSystemMetadata(HoodieMeta
         //  - First we merge records from all of the delta log-files
         //  - Then we merge records from base-files with the delta ones (coming as a result
         //    of the previous step)
-        (oldFileInfo, newFileInfo) ->
-            // NOTE: We can’t assume that MT update records will be ordered the same way as actual
-            //       FS operations (since they are not atomic), therefore MT record merging should be a
-            //       _commutative_ & _associative_ operation (ie one that would work even in case records
-            //       will get re-ordered), which is
-            //          - Possible for file-sizes (since file-sizes will ever grow, we can simply
-            //            take max of the old and new records)
-            //          - Not possible for is-deleted flags*
-            //
-            //       *However, we’re assuming that the case of concurrent write and deletion of the same
-            //       file is _impossible_ -- it would only be possible with concurrent upsert and
-            //       rollback operation (affecting the same log-file), which is implausible, b/c either
-            //       of the following have to be true:
-            //          - We’re appending to failed log-file (then the other writer is trying to
-            //            rollback it concurrently, before its own write)
-            //          - Rollback (of completed instant) is running concurrently with append (meaning
-            //            that restore is running concurrently with a write, which is also not supported
-            //            currently)
-            newFileInfo.getIsDeleted()
-                ? null
-                : new HoodieMetadataFileInfo(Math.max(newFileInfo.getSize(), oldFileInfo.getSize()), false));
+        (oldFileInfo, newFileInfo) -> {
+          // NOTE: We can’t assume that MT update records will be ordered the same way as actual
+          //       FS operations (since they are not atomic), therefore MT record merging should be a
+          //       _commutative_ & _associative_ operation (ie one that would work even in case records
+          //       will get re-ordered), which is
+          //          - Possible for file-sizes (since file-sizes will ever grow, we can simply
+          //            take max of the old and new records)
+          //          - Not possible for is-deleted flags*
+          //
+          //       *However, we’re assuming that the case of concurrent write and deletion of the same
+          //       file is _impossible_ -- it would only be possible with concurrent upsert and
+          //       rollback operation (affecting the same log-file), which is implausible, b/c either
+          //       of the following have to be true:
+          //          - We’re appending to failed log-file (then the other writer is trying to
+          //            rollback it concurrently, before its own write)
+          //          - Rollback (of completed instant) is running concurrently with append (meaning
+          //            that restore is running concurrently with a write, which is also not supported
+          //            currently)
+          if (newFileInfo.getIsDeleted()) {
+            if (oldFileInfo.getIsDeleted()) {
+              LOG.warn("A file is repeatedly deleted in the files partition of the metadata table: " + key);

Review Comment:
   gotcha
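
[Editor's note] To make the commutativity point in the quoted NOTE concrete, here is a minimal, self-contained Java sketch (hypothetical names, not Hudi's API): taking the max of file sizes yields the same result regardless of record order, while a naive last-writer-wins delete flag does not.

```java
// Order-independence of the size merge vs. order-dependence of a naive
// delete-flag merge. Names here are illustrative, not Hudi's classes.
public class CommutativitySketch {
  // Commutative & associative: merge order never changes the result.
  static long mergeSize(long a, long b) {
    return Math.max(a, b);
  }

  // Not commutative: the result flips if records arrive re-ordered.
  static boolean naiveDeleteMerge(boolean oldFlag, boolean newFlag) {
    return newFlag;
  }

  public static void main(String[] args) {
    System.out.println(mergeSize(100, 150) == mergeSize(150, 100));                     // true
    System.out.println(naiveDeleteMerge(false, true) == naiveDeleteMerge(true, false)); // false
  }
}
```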






Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-26 via GitHub


yihua commented on code in PR #10913:
URL: https://github.com/apache/hudi/pull/10913#discussion_r1539980247


##
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataPayload.java:
##
@@ -550,27 +552,34 @@ private Map combineFileSystemMetadata(HoodieMeta
         //  - First we merge records from all of the delta log-files
         //  - Then we merge records from base-files with the delta ones (coming as a result
         //    of the previous step)
-        (oldFileInfo, newFileInfo) ->
-            // NOTE: We can’t assume that MT update records will be ordered the same way as actual
-            //       FS operations (since they are not atomic), therefore MT record merging should be a
-            //       _commutative_ & _associative_ operation (ie one that would work even in case records
-            //       will get re-ordered), which is
-            //          - Possible for file-sizes (since file-sizes will ever grow, we can simply
-            //            take max of the old and new records)
-            //          - Not possible for is-deleted flags*
-            //
-            //       *However, we’re assuming that the case of concurrent write and deletion of the same
-            //       file is _impossible_ -- it would only be possible with concurrent upsert and
-            //       rollback operation (affecting the same log-file), which is implausible, b/c either
-            //       of the following have to be true:
-            //          - We’re appending to failed log-file (then the other writer is trying to
-            //            rollback it concurrently, before its own write)
-            //          - Rollback (of completed instant) is running concurrently with append (meaning
-            //            that restore is running concurrently with a write, which is also not supported
-            //            currently)
-            newFileInfo.getIsDeleted()
-                ? null
-                : new HoodieMetadataFileInfo(Math.max(newFileInfo.getSize(), oldFileInfo.getSize()), false));
+        (oldFileInfo, newFileInfo) -> {
+          // NOTE: We can’t assume that MT update records will be ordered the same way as actual
+          //       FS operations (since they are not atomic), therefore MT record merging should be a
+          //       _commutative_ & _associative_ operation (ie one that would work even in case records
+          //       will get re-ordered), which is
+          //          - Possible for file-sizes (since file-sizes will ever grow, we can simply
+          //            take max of the old and new records)
+          //          - Not possible for is-deleted flags*
+          //
+          //       *However, we’re assuming that the case of concurrent write and deletion of the same
+          //       file is _impossible_ -- it would only be possible with concurrent upsert and
+          //       rollback operation (affecting the same log-file), which is implausible, b/c either
+          //       of the following have to be true:
+          //          - We’re appending to failed log-file (then the other writer is trying to
+          //            rollback it concurrently, before its own write)
+          //          - Rollback (of completed instant) is running concurrently with append (meaning
+          //            that restore is running concurrently with a write, which is also not supported
+          //            currently)
+          if (newFileInfo.getIsDeleted()) {
+            if (oldFileInfo.getIsDeleted()) {
+              LOG.warn("A file is repeatedly deleted in the files partition of the metadata table: " + key);

Review Comment:
   `newFileInfo` and `oldFileInfo` refer to one file only here, identified by `key`, which is the file name.
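
[Editor's note] As an illustration of that per-file shape, a minimal sketch (hypothetical `FileInfo` record standing in for `HoodieMetadataFileInfo`) using Java's `Map.merge`, whose remapping function likewise only ever sees the old and new records for a single key:

```java
import java.util.HashMap;
import java.util.Map;

public class PerFileMergeSketch {
  // Hypothetical stand-in for HoodieMetadataFileInfo.
  record FileInfo(long size, boolean isDeleted) {}

  public static void main(String[] args) {
    Map<String, FileInfo> combined = new HashMap<>();
    combined.put("f1.parquet", new FileInfo(100L, false));

    // The remapping function runs per key (file name), so it only ever sees
    // the old and new records for that single file.
    combined.merge("f1.parquet", new FileInfo(150L, false),
        (oldInfo, newInfo) -> newInfo.isDeleted()
            ? null // returning null removes the mapping, i.e. the file is deleted
            : new FileInfo(Math.max(oldInfo.size(), newInfo.size()), false));

    System.out.println(combined); // {f1.parquet=FileInfo[size=150, isDeleted=false]}
  }
}
```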



##
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedTableMetadata.java:
##
@@ -285,6 +297,112 @@ public void testMetadataRecordKeyExcludeFromPayload(final HoodieTableType tableT
     validateMetadata(testTable);
   }
 
+  /**
+   * This tests the case where the two clean actions delete the same file and commit
+   * to the metadata table. The metadata table should not contain the deleted file afterwards.
+   * A new cleaner plan may contain the same file to delete if the previous cleaner
+   * plan has not been successfully executed before the new one is scheduled.
+   */
+  @ParameterizedTest
+  @EnumSource(HoodieTableType.class)

Review Comment:
   Since we're improving MDT in 1.x releases, these tests serve as extra guards around correctness.




Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-26 via GitHub


yihua commented on code in PR #10913:
URL: https://github.com/apache/hudi/pull/10913#discussion_r1539972497


##
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedTableMetadata.java:
##
@@ -285,6 +297,112 @@ public void testMetadataRecordKeyExcludeFromPayload(final HoodieTableType tableT
     validateMetadata(testTable);
   }
 
+  /**
+   * This tests the case where the two clean actions delete the same file and commit
+   * to the metadata table. The metadata table should not contain the deleted file afterwards.
+   * A new cleaner plan may contain the same file to delete if the previous cleaner
+   * plan has not been successfully executed before the new one is scheduled.
+   */
+  @ParameterizedTest
+  @EnumSource(HoodieTableType.class)

Review Comment:
   This one runs quickly as the data files are dummy. Given the criticality of the MDT logic, I think it's still worthwhile adding a functional test here to make sure the assumption is intact.



##
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedTableMetadata.java:
##
@@ -285,6 +297,112 @@ public void testMetadataRecordKeyExcludeFromPayload(final HoodieTableType tableT
     validateMetadata(testTable);
   }
 
+  /**
+   * This tests the case where the two clean actions delete the same file and commit
+   * to the metadata table. The metadata table should not contain the deleted file afterwards.
+   * A new cleaner plan may contain the same file to delete if the previous cleaner
+   * plan has not been successfully executed before the new one is scheduled.
+   */
+  @ParameterizedTest
+  @EnumSource(HoodieTableType.class)

Review Comment:
   This one runs quickly as the data files are dummy. Given the criticality of the MDT logic, I think it's still worthwhile adding functional tests here to make sure the assumption in the core logic is intact.






Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-25 via GitHub


nsivabalan commented on code in PR #10913:
URL: https://github.com/apache/hudi/pull/10913#discussion_r1538177249


##
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedTableMetadata.java:
##
@@ -285,6 +297,112 @@ public void testMetadataRecordKeyExcludeFromPayload(final HoodieTableType tableT
     validateMetadata(testTable);
   }
 
+  /**
+   * This tests the case where the two clean actions delete the same file and commit
+   * to the metadata table. The metadata table should not contain the deleted file afterwards.
+   * A new cleaner plan may contain the same file to delete if the previous cleaner
+   * plan has not been successfully executed before the new one is scheduled.
+   */
+  @ParameterizedTest
+  @EnumSource(HoodieTableType.class)

Review Comment:
   Especially w/ MDT, we have been adding a lot of functional tests in general. What do you think?






Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-25 via GitHub


nsivabalan commented on code in PR #10913:
URL: https://github.com/apache/hudi/pull/10913#discussion_r1538175862


##
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataPayload.java:
##
@@ -550,27 +552,34 @@ private Map combineFileSystemMetadata(HoodieMeta
         //  - First we merge records from all of the delta log-files
         //  - Then we merge records from base-files with the delta ones (coming as a result
         //    of the previous step)
-        (oldFileInfo, newFileInfo) ->
-            // NOTE: We can’t assume that MT update records will be ordered the same way as actual
-            //       FS operations (since they are not atomic), therefore MT record merging should be a
-            //       _commutative_ & _associative_ operation (ie one that would work even in case records
-            //       will get re-ordered), which is
-            //          - Possible for file-sizes (since file-sizes will ever grow, we can simply
-            //            take max of the old and new records)
-            //          - Not possible for is-deleted flags*
-            //
-            //       *However, we’re assuming that the case of concurrent write and deletion of the same
-            //       file is _impossible_ -- it would only be possible with concurrent upsert and
-            //       rollback operation (affecting the same log-file), which is implausible, b/c either
-            //       of the following have to be true:
-            //          - We’re appending to failed log-file (then the other writer is trying to
-            //            rollback it concurrently, before its own write)
-            //          - Rollback (of completed instant) is running concurrently with append (meaning
-            //            that restore is running concurrently with a write, which is also not supported
-            //            currently)
-            newFileInfo.getIsDeleted()
-                ? null
-                : new HoodieMetadataFileInfo(Math.max(newFileInfo.getSize(), oldFileInfo.getSize()), false));
+        (oldFileInfo, newFileInfo) -> {
+          // NOTE: We can’t assume that MT update records will be ordered the same way as actual
+          //       FS operations (since they are not atomic), therefore MT record merging should be a
+          //       _commutative_ & _associative_ operation (ie one that would work even in case records
+          //       will get re-ordered), which is
+          //          - Possible for file-sizes (since file-sizes will ever grow, we can simply
+          //            take max of the old and new records)
+          //          - Not possible for is-deleted flags*
+          //
+          //       *However, we’re assuming that the case of concurrent write and deletion of the same
+          //       file is _impossible_ -- it would only be possible with concurrent upsert and
+          //       rollback operation (affecting the same log-file), which is implausible, b/c either
+          //       of the following have to be true:
+          //          - We’re appending to failed log-file (then the other writer is trying to
+          //            rollback it concurrently, before its own write)
+          //          - Rollback (of completed instant) is running concurrently with append (meaning
+          //            that restore is running concurrently with a write, which is also not supported
+          //            currently)
+          if (newFileInfo.getIsDeleted()) {
+            if (oldFileInfo.getIsDeleted()) {
+              LOG.warn("A file is repeatedly deleted in the files partition of the metadata table: " + key);

Review Comment:
   Should we try to merge the file list in both here?
   Or is it safe to assume the `newFileInfo` will always be a superset?
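
[Editor's note] For concreteness, a minimal sketch of the repeated-delete branch under discussion (hypothetical `FileInfo` type; the visible diff only shows the warning, so the keep-the-tombstone behavior below is an assumption):

```java
public class RepeatedDeleteSketch {
  // Hypothetical stand-in for HoodieMetadataFileInfo.
  record FileInfo(long size, boolean isDeleted) {}

  // Assumption: when both the old and new records mark the file deleted, the
  // merged result is still a delete (returning null drops the file entry).
  static FileInfo merge(String key, FileInfo oldInfo, FileInfo newInfo) {
    if (newInfo.isDeleted()) {
      if (oldInfo.isDeleted()) {
        // Repeated delete, e.g. two cleaner plans targeting the same file.
        System.err.println("A file is repeatedly deleted in the files partition: " + key);
      }
      return null;
    }
    // File sizes only grow, so max() is a commutative, associative merge.
    return new FileInfo(Math.max(oldInfo.size(), newInfo.size()), false);
  }

  public static void main(String[] args) {
    FileInfo first = new FileInfo(0L, true);  // delete from the first clean
    FileInfo second = new FileInfo(0L, true); // same delete, replayed
    System.out.println(merge("f1.parquet", first, second)); // null -> still deleted
  }
}
```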






Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-25 via GitHub


nsivabalan commented on code in PR #10913:
URL: https://github.com/apache/hudi/pull/10913#discussion_r1538174829


##
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedTableMetadata.java:
##
@@ -285,6 +297,112 @@ public void testMetadataRecordKeyExcludeFromPayload(final HoodieTableType tableT
     validateMetadata(testTable);
   }
 
+  /**
+   * This tests the case where the two clean actions delete the same file and commit
+   * to the metadata table. The metadata table should not contain the deleted file afterwards.
+   * A new cleaner plan may contain the same file to delete if the previous cleaner
+   * plan has not been successfully executed before the new one is scheduled.
+   */
+  @ParameterizedTest
+  @EnumSource(HoodieTableType.class)

Review Comment:
   Do we need this functional test?
   I see we have added tests against HoodieMetadataPayload.
   Trying to avoid writing end-to-end functional tests if we can test the fix/functionality using UTs.
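
[Editor's note] A payload-level unit test of the kind suggested here might look like the following JUnit 5 sketch (hypothetical `merge` helper and `FileInfo` record mirroring the sketches above, not Hudi's actual test utilities):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

import org.junit.jupiter.api.Test;

class RepeatedDeleteMergeSketchTest {
  // Hypothetical stand-in for HoodieMetadataFileInfo.
  record FileInfo(long size, boolean isDeleted) {}

  // Same hypothetical merge semantics as the sketches above.
  static FileInfo merge(FileInfo oldInfo, FileInfo newInfo) {
    if (newInfo.isDeleted()) {
      return null; // a delete wins, including a repeated delete
    }
    return new FileInfo(Math.max(oldInfo.size(), newInfo.size()), false);
  }

  @Test
  void repeatedDeleteStaysDeleted() {
    FileInfo first = new FileInfo(0L, true);  // delete from the first clean action
    FileInfo second = new FileInfo(0L, true); // same delete, replayed by a second clean
    assertNull(merge(first, second));
  }

  @Test
  void sizesMergeToMaxRegardlessOfOrder() {
    FileInfo a = new FileInfo(100L, false);
    FileInfo b = new FileInfo(150L, false);
    assertEquals(merge(a, b), merge(b, a));
  }
}
```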
   






Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-22 via GitHub


hudi-bot commented on PR #10913:
URL: https://github.com/apache/hudi/pull/10913#issuecomment-2016030909

   
   ## CI report:

   * 101b295c6c344d8fe13f56752e20b531da26ea49 UNKNOWN
   * e162c911b8baa8b184a377244be3a555b4578862 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=22995)

   Bot commands
   @hudi-bot supports the following commands:

   - `@hudi-bot run azure` re-run the last Azure build





Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-22 via GitHub


hudi-bot commented on PR #10913:
URL: https://github.com/apache/hudi/pull/10913#issuecomment-2015960700

   
   ## CI report:

   * 101b295c6c344d8fe13f56752e20b531da26ea49 UNKNOWN
   * e162c911b8baa8b184a377244be3a555b4578862 Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=22995)

   Bot commands
   @hudi-bot supports the following commands:

   - `@hudi-bot run azure` re-run the last Azure build





Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-22 via GitHub


hudi-bot commented on PR #10913:
URL: https://github.com/apache/hudi/pull/10913#issuecomment-2015952928

   
   ## CI report:

   * 101b295c6c344d8fe13f56752e20b531da26ea49 UNKNOWN
   * e162c911b8baa8b184a377244be3a555b4578862 UNKNOWN

   Bot commands
   @hudi-bot supports the following commands:

   - `@hudi-bot run azure` re-run the last Azure build





Re: [PR] [HUDI-7518] Fix HoodieMetadataPayload merging logic around repeated deletes [hudi]

2024-03-22 via GitHub


hudi-bot commented on PR #10913:
URL: https://github.com/apache/hudi/pull/10913#issuecomment-2015942381

   
   ## CI report:

   * 101b295c6c344d8fe13f56752e20b531da26ea49 UNKNOWN

   Bot commands
   @hudi-bot supports the following commands:

   - `@hudi-bot run azure` re-run the last Azure build

