RussellSpitzer commented on code in PR #9546:
URL: https://github.com/apache/iceberg/pull/9546#discussion_r1505022747
##########
core/src/main/java/org/apache/iceberg/hadoop/HadoopTableOperations.java:
##########
@@ -289,64 +309,105 @@ Path versionHintFile() {
    return metadataPath(Util.VERSION_HINT_FILENAME);
  }

-  private void writeVersionHint(int versionToWrite) {
+  @VisibleForTesting
+  void writeVersionHint(FileSystem fs, Integer versionToWrite) throws Exception {
     Path versionHintFile = versionHintFile();
-    FileSystem fs = getFileSystem(versionHintFile, conf);
-
+    Path tempVersionHintFile = metadataPath(UUID.randomUUID() + "-version-hint.temp");
     try {
-      Path tempVersionHintFile = metadataPath(UUID.randomUUID().toString() + "-version-hint.temp");
       writeVersionToPath(fs, tempVersionHintFile, versionToWrite);
-      fs.delete(versionHintFile, false /* recursive delete */);
       fs.rename(tempVersionHintFile, versionHintFile);
-    } catch (IOException e) {
-      LOG.warn("Failed to update version hint", e);
+    } catch (Exception e) {
+      // Cleaning up temporary files.
+      if (fs.exists(tempVersionHintFile)) {
+        io().deleteFile(tempVersionHintFile.toString());
+      }
+      throw e;
+    }
+  }
+
+  @VisibleForTesting
+  void deleteOldVersionHint(FileSystem fs, Path versionHintFile, Integer nextVersion)
Review Comment:
Can we skip this for this PR? I think it's just making things more complicated here.
We could add it in a follow-up. Remember, the goal is
[Core: HadoopTable needs to skip file cleanup after task failure under some
boundary conditions.](https://github.com/apache/iceberg/pull/9546/files#top)
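
For context, a minimal sketch of the write-temp-then-rename pattern used in the
diff above, assuming only the standard Hadoop FileSystem API. This is not the
Iceberg implementation; the class and method names are hypothetical, and whether
rename() replaces an existing destination depends on the FileSystem in use.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical illustration class, not part of HadoopTableOperations.
class VersionHintWriteSketch {

  // Write the version to a temporary file first, then rename it over the hint
  // file so readers never observe a partially written hint.
  static void writeHintViaTemp(FileSystem fs, Path hintFile, Path tempFile, int version)
      throws IOException {
    try {
      try (BufferedWriter out =
          new BufferedWriter(
              new OutputStreamWriter(
                  fs.create(tempFile, true /* overwrite */), StandardCharsets.UTF_8))) {
        out.write(String.valueOf(version));
      }
      // Move the temp file into place; rename semantics when the destination
      // already exists vary across FileSystem implementations.
      if (!fs.rename(tempFile, hintFile)) {
        throw new IOException("Failed to rename " + tempFile + " to " + hintFile);
      }
    } catch (IOException e) {
      // Mirror the catch block in the diff: clean up the temp file, then rethrow
      // so the caller can decide how to handle the failed hint update.
      if (fs.exists(tempFile)) {
        fs.delete(tempFile, false /* non-recursive */);
      }
      throw e;
    }
  }
}
```

The temp-file indirection is what lets the error path delete only the temporary
file and leave the previous version hint intact; the review above suggests
deferring the separate deleteOldVersionHint helper to a follow-up so this PR
stays focused on that behavior.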
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]