vinayakphegde commented on code in PR #7151:
URL: https://github.com/apache/hbase/pull/7151#discussion_r2206594759
##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalTableBackupClient.java:
##########
@@ -370,13 +376,56 @@ protected void deleteBulkLoadDirectory() throws IOException {
}
}
- protected void convertWALsToHFiles() throws IOException {
- // get incremental backup file list and prepare parameters for DistCp
- List<String> incrBackupFileList = backupInfo.getIncrBackupFileList();
+ protected Set<String> convertWALsToHFiles() throws IOException {
+ Set<String> backupFiles = new HashSet<>(backupInfo.getIncrBackupFileList());
+ // filter missing files out (they have been copied by previous backups)
+ backupFiles = filterMissingFiles(backupFiles);
+ int attempt = 1;
+ int maxAttempts =
+ conf.getInt(CONVERT_TO_WAL_TO_HFILES_ATTEMPTS_KEY, CONVERT_TO_WAL_TO_HFILES_ATTEMPTS_DEFAULT);
+
+ while (attempt <= maxAttempts) {
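The bounded-retry pattern introduced in this hunk can be sketched in isolation as follows. This is a minimal illustrative sketch, not the actual HBase implementation: the `Converter` interface and all names here are hypothetical stand-ins for the real conversion step.

```java
import java.util.HashSet;
import java.util.Set;

public class RetrySketch {

  // Hypothetical stand-in for the real WAL-to-HFile conversion step:
  // returns the subset of files that could not be processed this attempt
  // and should be retried on the next one.
  interface Converter {
    Set<String> convert(Set<String> files) throws Exception;
  }

  static Set<String> convertWithRetries(Set<String> files, int maxAttempts,
      Converter converter) throws Exception {
    Set<String> remaining = new HashSet<>(files);
    // Retry until everything converted or the attempt budget is spent.
    for (int attempt = 1; attempt <= maxAttempts && !remaining.isEmpty(); attempt++) {
      remaining = converter.convert(remaining);
    }
    if (!remaining.isEmpty()) {
      throw new Exception("Conversion failed after " + maxAttempts
          + " attempts: " + remaining);
    }
    return files; // on success, report the full set that was handled
  }

  public static void main(String[] args) throws Exception {
    Set<String> wals = new HashSet<>(Set.of("wal-1", "wal-2"));
    // A converter that fails everything on the first attempt and
    // succeeds on the second, to exercise the retry path.
    int[] calls = { 0 };
    Set<String> done = convertWithRetries(wals, 3, fs -> {
      calls[0]++;
      return calls[0] < 2 ? new HashSet<>(fs) : new HashSet<>();
    });
    System.out.println(done.size() + " files converted in " + calls[0] + " attempts");
  }
}
```

Returning the set of successfully handled files (rather than `void`, as before the change) lets the caller record exactly which WALs this backup covered, which matters when some files were already copied by previous backups.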
Review Comment:
Also, even if the file exists initially, it may get archived while WALPlayer
is still processing it. We need to confirm how reads behave in HDFS and
Hadoop-S3 when a file is moved during a read: does the in-flight read
complete, or does it fail?
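The question can be probed on a local POSIX filesystem with plain `java.nio`: an open stream keeps reading after the file is renamed, because the descriptor tracks the inode rather than the path. This is only a local-filesystem demonstration and an assumption about the distributed case: HDFS rename similarly does not invalidate block locations held by an open stream, while object stores like S3 emulate rename as copy+delete, so a mid-read reopen there can fail. The behavior on HDFS and S3A would still need to be verified directly.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveDuringRead {
  public static void main(String[] args) throws Exception {
    Path src = Files.createTempFile("wal", ".log");
    Path dst = src.resolveSibling("archived-" + src.getFileName());
    Files.write(src, "payload".getBytes());
    try (InputStream in = Files.newInputStream(src)) {
      int first = in.read();                  // start reading, then move the file
      Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
      byte[] rest = in.readAllBytes();        // read completes despite the rename
      System.out.println((char) first + new String(rest)); // prints "payload"
    } finally {
      Files.deleteIfExists(dst);
    }
  }
}
```

Note this probe covers rename only; if archiving ever deletes (rather than moves) the WAL, the outcome can differ, and on Windows the `Files.move` itself may fail while the file is open.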
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]