[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287285#comment-16287285 ]
genericqa commented on HDFS-12862:
----------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 10m 44s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-2.7.1 Compile Tests ||
| -1 | mvninstall | 2m 2s | root in branch-2.7.1 failed. |
| -1 | compile | 0m 15s | hadoop-hdfs in branch-2.7.1 failed with JDK v1.8.0_151. |
| -1 | compile | 0m 11s | hadoop-hdfs in branch-2.7.1 failed with JDK v9-internal. |
| +1 | checkstyle | 0m 32s | branch-2.7.1 passed |
| -1 | mvnsite | 0m 17s | hadoop-hdfs in branch-2.7.1 failed. |
| -1 | findbugs | 0m 9s | hadoop-hdfs in branch-2.7.1 failed. |
| -1 | javadoc | 0m 10s | hadoop-hdfs in branch-2.7.1 failed with JDK v1.8.0_151. |
| -1 | javadoc | 0m 10s | hadoop-hdfs in branch-2.7.1 failed with JDK v9-internal. |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 9s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 8s | hadoop-hdfs in the patch failed with JDK v1.8.0_151. |
| -1 | javac | 0m 8s | hadoop-hdfs in the patch failed with JDK v1.8.0_151. |
| -1 | compile | 0m 9s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| -1 | javac | 0m 9s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| -0 | checkstyle | 0m 21s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| -1 | mvnsite | 0m 10s | hadoop-hdfs in the patch failed. |
| -1 | whitespace | 0m 0s | The patch has 60 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| -1 | findbugs | 0m 8s | hadoop-hdfs in the patch failed. |
| -1 | javadoc | 0m 8s | hadoop-hdfs in the patch failed with JDK v1.8.0_151. |
| -1 | javadoc | 0m 9s | hadoop-hdfs in the patch failed with JDK v9-internal. |
|| || || || Other Tests ||
| -1 | unit | 0m 10s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 23m 3s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:date2017-12-12 |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901618/HDFS-12862-branch-2.7.1.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c587dfea9a13 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.7.1 / a4c8829 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 9-internal |
| Multi-JDK versions | /usr/lib/jvm/java-8-openjdk-amd64:1.8.0_151 /usr/lib/jvm/java-9-openjdk-amd64:9-internal |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/branch-mvninstall-root.txt |
| compile | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_151.txt |
| compile | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdk9-internal.txt |
| mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt |
| javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_151.txt |
| javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdk9-internal.txt |
| mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
| compile | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_151.txt |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_151.txt |
| compile | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdk9-internal.txt |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdk9-internal.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/whitespace-eol.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt |
| javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_151.txt |
| javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdk9-internal.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk9-internal.txt |
| JDK v9-internal Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/testReport/ |
| Max. process+thread count | 123 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22364/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> CacheDirective may be invalidated when the NN restarts or transitions to Active.
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-12862
>                 URL: https://issues.apache.org/jira/browse/HDFS-12862
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: caching, hdfs
>    Affects Versions: 2.7.1
>        Environment: 
>            Reporter: Wang XL
>              Labels: patch
>      Attachments: HDFS-12862-branch-2.7.1.001.patch
>
> The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When a cache
> directive is modified, the expiration in the directive may be a relative
> expiry time, and the edit log serializes that relative expiry time as-is:
> {code:java}
> static void modifyCacheDirective(
>     FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo directive,
>     EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
>   final FSPermissionChecker pc = getFsPermissionChecker(fsn);
>   cacheManager.modifyDirective(directive, pc, flags);
>   fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
> }
> {code}
> But when the SBN replays the log, FSImageSerialization#readCacheDirectiveInfo
> reads the value back as an absolute expiry time, which results in an
> inconsistency.
> {code:java}
> public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
>     throws IOException {
>   CacheDirectiveInfo.Builder builder =
>       new CacheDirectiveInfo.Builder();
>   builder.setId(readLong(in));
>   int flags = in.readInt();
>   if ((flags & 0x1) != 0) {
>     builder.setPath(new Path(readString(in)));
>   }
>   if ((flags & 0x2) != 0) {
>     builder.setReplication(readShort(in));
>   }
>   if ((flags & 0x4) != 0) {
>     builder.setPool(readString(in));
>   }
>   if ((flags & 0x8) != 0) {
>     builder.setExpiration(
>         CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
>   }
>   if ((flags & ~0xF) != 0) {
>     throw new IOException("unknown flags set in " +
>         "ModifyCacheDirectiveInfoOp: " + flags);
>   }
>   return builder.build();
> }
> {code}
> In other words, fsn.getEditLog().logModifyCacheDirectiveInfo(directive,
> logRetryCache) may serialize a relative expiry time, but
> builder.setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)))
> reads it back as an absolute expiry time.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
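[Editor's note] The mismatch the report describes can be sketched with a minimal, hypothetical model. The class and method names below are illustrative only, not the actual HDFS API: the active NN logs a relative expiry (e.g. "5 minutes from now") as a raw long, and on replay the standby interprets that same long as absolute epoch milliseconds, so the directive appears to have expired decades ago.

```java
// Hypothetical minimal model of the relative-vs-absolute expiry mismatch.
// Not the real HDFS classes; the real flow goes through the edit log and
// FSImageSerialization#readCacheDirectiveInfo.
public class ExpiryMismatchDemo {

    // What the active NN serializes: the relative offset, verbatim.
    static long logRelativeExpiry(long relativeMs) {
        return relativeMs;
    }

    // What the standby does on replay: treats the raw long as an
    // absolute timestamp (Expiration.newAbsolute(readLong(in))).
    static long replayAsAbsolute(long loggedMs) {
        return loggedMs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long relativeMs = 5 * 60 * 1000L;        // "expire 5 minutes from now"
        long intendedAbsolute = now + relativeMs; // what should be stored

        long replayed = replayAsAbsolute(logRelativeExpiry(relativeMs));

        // replayed == 300000, i.e. an "absolute" time in January 1970,
        // so the standby treats the directive as long expired.
        System.out.println("intended absolute expiry: " + intendedAbsolute);
        System.out.println("replayed as absolute:     " + replayed);
        System.out.println("already expired on SBN:   " + (replayed < now));
    }
}
```

This is why the attached patch targets the serialization path: the value written to the edit log and the value read back must agree on whether the long is relative or absolute.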