[GitHub] [hudi] todd5167 commented on issue #5395: [SUPPORT] Failed to archive commits, throw error 'Directory is not empty'
todd5167 commented on issue #5395: URL: https://github.com/apache/hudi/issues/5395#issuecomment-1127500228

@nsivabalan @danny0405 I deleted the ./hoodie folder and re-executed the job because I needed to restore the task. Therefore, I can't see the action information.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
todd5167 commented on issue #5395: URL: https://github.com/apache/hudi/issues/5395#issuecomment-1112815298

> So you do not write into the same table from two separate Flink jobs.

Yes.
todd5167 commented on issue #5395: URL: https://github.com/apache/hudi/issues/5395#issuecomment-689002

> > deleteAnyLeftOverMarkers
>
> I'm not quite sure; we actually clean the MARKER folder each time we commit an instant, so do we do it again when the instant is archived? (Curious about the background here.) And as @codope said, the call in [#L682-L684](https://github.com/apache/hudi/blob/762623a15cfeba6f3fe936c238d660685ae62b50/hudi-common/src/main/java/org/apache/hudi/common/fs/FSUtils.java#L682-L684) already deletes recursively first.
>
> The instant commit and archiving both happen in the JobManager in a single thread, so there should not be a parallelism problem.
>
> @todd5167 Did you write to the same table from two separate Flink jobs?

@danny0405 A Flink job corresponds to one Hudi table. Checkpoint config:

```
Interval                           10m 0s
Timeout                            3m 0s
Minimum Pause Between Checkpoints  0ms
Maximum Concurrent Checkpoints     2
```
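For reference, the checkpoint settings quoted above can be expressed programmatically in a Flink job. This is only a sketch of an equivalent configuration using Flink's `StreamExecutionEnvironment`/`CheckpointConfig` API, not the original job code; in particular, allowing two concurrent checkpoints is the setting under discussion:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.environment.CheckpointConfig;

public class CheckpointSetupSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Interval: 10m 0s
        env.enableCheckpointing(10 * 60 * 1000L);

        CheckpointConfig cfg = env.getCheckpointConfig();
        // Timeout: 3m 0s
        cfg.setCheckpointTimeout(3 * 60 * 1000L);
        // Minimum pause between checkpoints: 0ms
        cfg.setMinPauseBetweenCheckpoints(0L);
        // Maximum concurrent checkpoints: 2 (two checkpoints may be in flight at once)
        cfg.setMaxConcurrentCheckpoints(2);
    }
}
```

This requires the `flink-streaming-java` dependency on the classpath; it is shown here only to make the quoted settings concrete.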
todd5167 commented on issue #5395: URL: https://github.com/apache/hudi/issues/5395#issuecomment-1110621179

@yihua Because this problem causes the Flink job to restart frequently, I use fs.delete(dirPath, true) to delete the folder. After I changed the code, I cleaned up the ./hoodie folder and re-consumed the historical and incremental data again.
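The failure mode behind this thread, and why the `fs.delete(dirPath, true)` workaround helps, can be illustrated with plain `java.nio` (an illustrative sketch only, not Hudi's Hadoop `FileSystem` code; the class and method names here are hypothetical): a non-recursive delete of a directory that still has children fails with "Directory is not empty", while a recursive delete removes the children first.

```java
import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RecursiveDeleteDemo {

    // Returns true if a plain (non-recursive) delete fails because the
    // directory still contains entries -- the error seen in this issue.
    static boolean nonRecursiveDeleteFails(Path dir) {
        try {
            Files.delete(dir); // refuses to delete a non-empty directory
            return false;
        } catch (DirectoryNotEmptyException e) {
            return true;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Rough analogue of fs.delete(dirPath, true): delete the deepest
    // entries first, then the directory itself.
    static void deleteRecursively(Path dir) throws IOException {
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("marker-dir");
        Files.createFile(dir.resolve("leftover.marker")); // simulate a leftover file

        System.out.println(nonRecursiveDeleteFails(dir)); // true: directory is not empty
        deleteRecursively(dir);
        System.out.println(Files.exists(dir));            // false: gone after recursive delete
    }
}
```

Note that blindly deleting recursively can hide the real question raised in the thread, namely why the marker directory still has subfolders at archival time.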
todd5167 commented on issue #5395: URL: https://github.com/apache/hudi/issues/5395#issuecomment-1110469761

@yihua @codope The Flink-to-Hudi write configuration is as follows. I have restored from a Flink savepoint multiple times and keep getting this error. When the exception occurs, .temp/20220421063410412 has subfolders. My temporary workaround is `boolean result = fs.delete(dirPath, true);` but I am not sure what is actually going wrong.

```
'connector' = 'hudi',
'path' = 's3a://xxx/hudi/ac_withdraw_record',
'table.type' = 'MERGE_ON_READ',
'write.bucket_assign.tasks' = '2',
'write.tasks' = '2',
'changelog.enabled' = 'false',
'hoodie.cleaner.policy' = 'KEEP_LATEST_FILE_VERSIONS',
'hoodie.cleaner.fileversions.retained' = '2',
'write.task.max.size' = '2048',
'write.index_bootstrap.tasks' = '4',
'index.bootstrap.enabled' = 'true',
'index.global.enabled' = 'false',
'compaction.tasks' = '4',
'compaction.max_memory' = '1024',
'hoodie.compact.inline.trigger.strategy' = 'NUM_COMMITS',
'hive_sync.enable' = 'true',
'hive_sync.mode' = 'glue',
'hive_sync.table' = 'ac_withdraw_record',
'hive_sync.skip_ro_suffix' = 'true',
'hive_sync.db' = 'hudi_meta',
'hoodie.compact.inline.max.delta.commits' = '6'
```