jojochuang commented on code in PR #9585:
URL: https://github.com/apache/ozone/pull/9585#discussion_r2790822017
##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMDBCheckpointServletInodeBasedXfer.java:
##########
@@ -349,29 +356,30 @@ private Collection<Path> getSnapshotLocalDataPaths(OmSnapshotLocalDataManager lo
}
  /**
-   * Transfers the snapshot data from the specified snapshot directories into the archive output stream,
-   * handling deduplication and managing resource locking.
+   * Collects the snapshots to be transferred from the specified snapshot directories
+   * into the archive output stream.
    *
    * @param sstFilesToExclude Set of SST file identifiers to exclude from the archive.
-   * @param tmpdir Temporary directory for intermediate processing.
    * @param snapshotPaths Set of paths to snapshot directories to be processed.
Review Comment:
```suggestion
@VisibleForTesting
  void collectSnapshotData(Set<String> sstFilesToExclude, Collection<Path> snapshotPaths,
```
##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMDBCheckpointServletInodeBasedXfer.java:
##########
@@ -233,33 +243,34 @@ public void writeDbDataToStream(HttpServletRequest request, OutputStream destina
}
    boolean shouldContinue = true;
-    try (ArchiveOutputStream<TarArchiveEntry> archiveOutputStream = tar(destination)) {
+    try {
      if (includeSnapshotData) {
        // Process each snapshot db path and write it to archive
        for (Path snapshotDbPath : snapshotPaths) {
          if (!shouldContinue) {
            break;
          }
-          shouldContinue = writeDBToArchive(sstFilesToExclude, snapshotDbPath,
-              maxTotalSstSize, archiveOutputStream, tmpdir, null, true);
+          shouldContinue = collectFilesFromDir(sstFilesToExclude, snapshotDbPath,
+              maxTotalSstSize, true, omdbArchiver);
        }
        if (shouldContinue) {
-          shouldContinue = writeDBToArchive(sstFilesToExclude, getSstBackupDir(),
-              maxTotalSstSize, archiveOutputStream, tmpdir, null, true);
+          shouldContinue = collectFilesFromDir(sstFilesToExclude, getSstBackupDir(),
+              maxTotalSstSize, true, omdbArchiver);
        }
        if (shouldContinue) {
-          shouldContinue = writeDBToArchive(sstFilesToExclude, getCompactionLogDir(),
-              maxTotalSstSize, archiveOutputStream, tmpdir, null, true);
+          shouldContinue = collectFilesFromDir(sstFilesToExclude, getCompactionLogDir(),
+              maxTotalSstSize, true, omdbArchiver);
        }
      }
      if (shouldContinue) {
        // we finished transferring files from snapshot DB's by now and
        // this is the last step where we transfer the active om.db contents
        Map<String, String> hardLinkFileMap = new HashMap<>();
+        omdbArchiver.setHardLinkFileMap(hardLinkFileMap);
Review Comment:
This line doesn't make much sense to me: it sets an internal reference to an empty hash map.
In fact, we can rewrite this to get rid of the setHardLinkFileMap() method entirely.
It's fine for now, but it's going to need some cleanup to make the logic clearer.
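One way to realize the cleanup suggested above is to hand the map to the archiver at construction time instead of mutating it through a setter afterward, so the archiver never exists in a half-initialized state. A minimal sketch of that pattern (the `Archiver` class and its methods here are hypothetical stand-ins, not the actual Ozone code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical archiver that receives its hard-link map via the
// constructor, eliminating the need for a setHardLinkFileMap() setter.
final class Archiver {
    private final Map<String, String> hardLinkFileMap;

    Archiver(Map<String, String> hardLinkFileMap) {
        this.hardLinkFileMap = hardLinkFileMap;
    }

    // Records a deduplicated file as a hard link to an already-archived copy.
    void recordHardLink(String path, String target) {
        hardLinkFileMap.put(path, target);
    }

    Map<String, String> getHardLinkFileMap() {
        return hardLinkFileMap;
    }
}

public class Demo {
    public static void main(String[] args) {
        // The map is supplied once, up front; no later setter call is needed.
        Archiver archiver = new Archiver(new HashMap<>());
        archiver.recordHardLink("om.db/000123.sst", "snap1/000123.sst");
        System.out.println(archiver.getHardLinkFileMap().size()); // prints 1
    }
}
```

With constructor injection, the `final` field also documents that the map reference never changes over the archiver's lifetime, which makes the ownership of the hard-link bookkeeping easier to follow.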
##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMDBCheckpointServletInodeBasedXfer.java:
##########
@@ -349,29 +356,30 @@ private Collection<Path> getSnapshotLocalDataPaths(OmSnapshotLocalDataManager lo
}
  /**
-   * Transfers the snapshot data from the specified snapshot directories into the archive output stream,
-   * handling deduplication and managing resource locking.
+   * Collects the snapshots to be transferred from the specified snapshot directories
+   * into the archive output stream.
    *
Review Comment:
```suggestion
* @param omdbArchiver helper to archive the OM DB.
* @throws IOException if an I/O error occurs during processing.
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]