[ https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17158709#comment-17158709 ]
Stephen O'Donnell edited comment on HDFS-15313 at 7/15/20, 9:09 PM:
--------------------------------------------------------------------

Looks to me like the only difference between the 3.1 and trunk patch is imports. Yetus gives a green run so I am +1 on the 3.1 patch.

The 2.10 patch fails to compile for me locally. [~shashikant] could you give it another check please?

{code}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hadoop-hdfs: Compilation failure: Compilation failure:
[ERROR] /Users/sodonnell/source/upstream_hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/Diff.java:[167,27] incompatible types
[ERROR] required: java.util.List<E>
[ERROR] found: java.util.List<capture#1 of ? extends java.lang.Object>
[ERROR] /Users/sodonnell/source/upstream_hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/Diff.java:[198,27] incompatible types
[ERROR] required: java.util.List<E>
[ERROR] found: java.util.List<capture#2 of ? extends java.lang.Object>
[ERROR] -> [Help 1]
{code}

was (Author: sodonnell):
Looks to me like the only difference between the 3.1 and trunk path is imports. Yetus gives a green run so I am +1 on the 3.1 patch.

The 2.10 patch fails to compile for me locally. [~shashikant] could you give it another check please?

{code}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hadoop-hdfs: Compilation failure: Compilation failure:
[ERROR] /Users/sodonnell/source/upstream_hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/Diff.java:[167,27] incompatible types
[ERROR] required: java.util.List<E>
[ERROR] found: java.util.List<capture#1 of ? extends java.lang.Object>
[ERROR] /Users/sodonnell/source/upstream_hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/Diff.java:[198,27] incompatible types
[ERROR] required: java.util.List<E>
[ERROR] found: java.util.List<capture#2 of ? extends java.lang.Object>
[ERROR] -> [Help 1]
{code}

> Ensure inodes in active filesystem are not deleted during snapshot delete
> --------------------------------------------------------------------------
>
> Key: HDFS-15313
> URL: https://issues.apache.org/jira/browse/HDFS-15313
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: snapshots
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HDFS-15313-branch-3.1.001.patch, HDFS-15313.000.patch,
> HDFS-15313.001.patch, HDFS-15313.branch-2.10.patch,
> HDFS-15313.branch-2.8.patch
>
>
> After HDFS-13101, it was observed in one of our customer deployments that
> deleting a snapshot ends up cleaning up inodes from the active fs that are
> referred to from only one snapshot, because the isLastReference() check for
> the parent dir introduced in HDFS-13101 may return true in certain cases.
> The aim of this Jira is to add a check to ensure that inodes referred to in
> the active fs do not get deleted while snapshot deletion happens.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
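For context on the branch-2.10 compile failure quoted above: the message is the standard generics wildcard-capture mismatch that javac reports when a wildcard-typed list is returned where List<E> is expected. The sketch below is a hypothetical reconstruction for illustration only, not the actual org.apache.hadoop.hdfs.util.Diff code at lines 167/198; the class and field names (WildcardCaptureSketch, elements) are invented.

{code:java}
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch only -- not the actual Diff.java code.
 * It shows the kind of mismatch behind an error of the form:
 *   incompatible types
 *   required: java.util.List<E>
 *   found:    java.util.List<capture#1 of ? extends java.lang.Object>
 */
class WildcardCaptureSketch<E> {

  // Imagine the list is held behind a wildcard-typed reference,
  // e.g. because it came from a raw or differently parameterised API.
  private final List<? extends Object> elements = new ArrayList<Object>();

  // Returning the wildcard list directly does NOT compile; the wildcard is
  // captured as "capture#1 of ? extends java.lang.Object", which javac will
  // not convert to List<E>:
  //
  //   List<E> getElements() {
  //     return elements;   // incompatible types
  //   }

  // One common way to make such code compile: an explicit, documented
  // unchecked cast, relying on the class's own invariant that only E
  // instances are ever stored in the list.
  @SuppressWarnings("unchecked")
  List<E> getElements() {
    return (List<E>) elements;
  }
}
{code}

Whether the right fix for the 2.10 patch is a cast, a fully parameterised field, or adjusted generic bounds depends on the actual Diff.java change; the sketch only illustrates why javac rejects the assignment.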