aditiwari01 commented on issue #2743:
URL: https://github.com/apache/hudi/issues/2743#issuecomment-815462565
@n3nash Thanks for the clarification. Can we create a JIRA for this? I
can't pick it up right away, but I would try to contribute as and when I get time.
Meanwhile I will try to
aditiwari01 commented on issue #2743:
URL: https://github.com/apache/hudi/issues/2743#issuecomment-813779228
Thanks @lw309637554. Will look into this deletePartition in depth.
As for my use case, the ideal situation would be to have some kind of row-level
TTL taken care of by the cleaner/compactor.
aditiwari01 commented on issue #2743:
URL: https://github.com/apache/hudi/issues/2743#issuecomment-813450681
I think the ideal approach would be built around the compactor and cleaner.
A time-based cleaner, combined with filtering out records with older commit
times while compacting the base file, should solve the issue.
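The filtering idea above can be sketched in plain Python. This is not Hudi's actual compactor API, just an illustration of dropping records whose commit time falls outside a TTL window while rewriting a base file; the record layout, the `_hoodie_commit_time` field format, and the `compact_with_ttl` helper are assumptions for the sketch:

```python
from datetime import datetime, timedelta

# Hudi commit times are timestamps like "20210401123000" (yyyyMMddHHmmss).
COMMIT_TIME_FORMAT = "%Y%m%d%H%M%S"

def expired(commit_time: str, ttl: timedelta, now: datetime) -> bool:
    """True if a record's commit time is older than the TTL window."""
    return datetime.strptime(commit_time, COMMIT_TIME_FORMAT) < now - ttl

def compact_with_ttl(records, ttl: timedelta, now: datetime):
    """Rewrite a base file, dropping records whose commit time has expired.

    Hypothetical helper: stands in for a TTL-aware compaction pass.
    """
    return [r for r in records if not expired(r["_hoodie_commit_time"], ttl, now)]

records = [
    {"key": "a", "_hoodie_commit_time": "20210301000000"},
    {"key": "b", "_hoodie_commit_time": "20210405000000"},
]
now = datetime(2021, 4, 6)
kept = compact_with_ttl(records, ttl=timedelta(days=30), now=now)
# With a 30-day TTL as of 2021-04-06, only record "b" survives.
```

A real implementation would need to hook into compaction so the cleaner can later remove the older file slices, rather than running as a separate pass.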
aditiwari01 commented on issue #2743:
URL: https://github.com/apache/hudi/issues/2743#issuecomment-810782992
I was thinking along similar lines, but we do have continuous jobs (not
DeltaStreamer, but Spark Streaming jobs with 5/10-minute mini batches). We can't
have a separate job for deletion.