Apache9 commented on code in PR #7151:
URL: https://github.com/apache/hbase/pull/7151#discussion_r2202754870


##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalTableBackupClient.java:
##########
@@ -370,13 +376,56 @@ protected void deleteBulkLoadDirectory() throws IOException {
     }
   }
 
-  protected void convertWALsToHFiles() throws IOException {
-    // get incremental backup file list and prepare parameters for DistCp
-    List<String> incrBackupFileList = backupInfo.getIncrBackupFileList();
+  protected Set<String> convertWALsToHFiles() throws IOException {
+    Set<String> backupFiles = new HashSet<>(backupInfo.getIncrBackupFileList());
+    // filter missing files out (they have been copied by previous backups)
+    backupFiles = filterMissingFiles(backupFiles);
+    int attempt = 1;
+    int maxAttempts =
+      conf.getInt(CONVERT_TO_WAL_TO_HFILES_ATTEMPTS_KEY, CONVERT_TO_WAL_TO_HFILES_ATTEMPTS_DEFAULT);
+
+    while (attempt <= maxAttempts) {

Review Comment:
   For replication, the solution is to only record the file name of the WAL file. When opening it, we first try the normal WAL path; if we get a FNFE, we then try to locate it under the oldWALs path. So maybe a possible solution is to change the implementation of WALInputFormat? Or maybe add a flag to indicate that it should try to locate the WAL file in both the WALs directory and the oldWALs directory?
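
   The fallback lookup described above could be sketched roughly as below, using plain `java.nio.file` rather than the Hadoop `FileSystem` API; the class and method names here are hypothetical, not part of WALInputFormat:

   ```java
   import java.io.FileNotFoundException;
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;

   public class WalLocator {
     // Hypothetical helper: resolve a WAL by file name only, trying the
     // active WALs directory first and falling back to the oldWALs archive
     // on a miss, mirroring how replication handles archived WALs.
     public static Path locateWal(Path walsDir, Path oldWalsDir, String walName)
         throws FileNotFoundException {
       Path active = walsDir.resolve(walName);
       if (Files.exists(active)) {
         return active;
       }
       Path archived = oldWalsDir.resolve(walName);
       if (Files.exists(archived)) {
         return archived;
       }
       throw new FileNotFoundException(walName + " not found in WALs or oldWALs");
     }

     public static void main(String[] args) throws IOException {
       Path wals = Files.createTempDirectory("WALs");
       Path oldWals = Files.createTempDirectory("oldWALs");
       // Simulate the log cleaner having already moved the WAL to the archive.
       Files.createFile(oldWals.resolve("wal.1"));
       System.out.println(locateWal(wals, oldWals, "wal.1"));
     }
   }
   ```

   In the real code this would presumably use `FileSystem.exists()` against the paths returned by the WAL/archive directory helpers, so the backup only records bare WAL names instead of full paths that can go stale.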



-- 
This is an automated message from the Apache Git Service.