[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877652#comment-16877652 ] He Xiaoqiao commented on HDFS-12862: cc [~jojochuang], [~elgoiri], [~daryn], would you mind taking another review?

> CacheDirective may be invalidated when NN restarts or makes a transition to Active.
> -
>
> Key: HDFS-12862
> URL: https://issues.apache.org/jira/browse/HDFS-12862
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: caching, hdfs
> Affects Versions: 2.7.1
> Environment:
> Reporter: Wang XL
> Assignee: Wang XL
> Priority: Major
> Labels: patch
> Attachments: HDFS-12862-branch-2.7.1.001.patch, HDFS-12862-trunk.002.patch,
> HDFS-12862-trunk.003.patch, HDFS-12862-trunk.004.patch, HDFS-12862.005.patch,
> HDFS-12862.006.patch, HDFS-12862.007.patch
>
> The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When modifying a
> cache directive, the expiration in the directive may be a relative expiry time,
> and the EditLog will serialize that relative expiry time.
> {code:java}
> static void modifyCacheDirective(
>     FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo directive,
>     EnumSet flags, boolean logRetryCache) throws IOException {
>   final FSPermissionChecker pc = getFsPermissionChecker(fsn);
>   cacheManager.modifyDirective(directive, pc, flags);
>   fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
> }
> {code}
> But when the SBN replays the log, it will invoke
> FSImageSerialization#readCacheDirectiveInfo, which treats the stored value as
> an absolute expiry time. This results in an inconsistency.
> {code:java}
> public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
>     throws IOException {
>   CacheDirectiveInfo.Builder builder =
>       new CacheDirectiveInfo.Builder();
>   builder.setId(readLong(in));
>   int flags = in.readInt();
>   if ((flags & 0x1) != 0) {
>     builder.setPath(new Path(readString(in)));
>   }
>   if ((flags & 0x2) != 0) {
>     builder.setReplication(readShort(in));
>   }
>   if ((flags & 0x4) != 0) {
>     builder.setPool(readString(in));
>   }
>   if ((flags & 0x8) != 0) {
>     builder.setExpiration(
>         CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
>   }
>   if ((flags & ~0xF) != 0) {
>     throw new IOException("unknown flags set in " +
>         "ModifyCacheDirectiveInfoOp: " + flags);
>   }
>   return builder.build();
> }
> {code}
> In other words, fsn.getEditLog().logModifyCacheDirectiveInfo(directive,
> logRetryCache) may serialize a relative expiry time, but
> builder.setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)))
> reads it back as an absolute expiry time.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
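The write-relative/read-absolute mismatch described above can be reproduced in miniature. This is a self-contained sketch with hypothetical class and method names (not the real Hadoop types): the writer serializes whatever millisecond value it is handed, while the reader unconditionally interprets it as an absolute epoch timestamp, mirroring the Expiration.newAbsolute(readLong(in)) call quoted above.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class ExpiryReplaySketch {

    // Writer side: logs the millis value as-is, whether the client meant it
    // as relative ("expire in 5 minutes") or absolute (epoch millis).
    static void writeExpiry(DataOutput out, long millis) throws IOException {
        out.writeLong(millis);
    }

    // Reader side: always treats the stored value as an absolute epoch time.
    static long readAsAbsolute(DataInput in) throws IOException {
        return in.readLong();
    }

    public static void main(String[] args) throws IOException {
        long relativeExpiry = 5 * 60 * 1000L; // client meant "expire in 5 minutes"

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeExpiry(new DataOutputStream(buf), relativeExpiry);

        long replayed = readAsAbsolute(
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));

        // Replay sees "300000 ms after the epoch", i.e. a moment in January
        // 1970, so the directive looks long expired already.
        System.out.println(replayed < System.currentTimeMillis()); // prints "true"
    }
}
```

This is why a cache directive that was healthy on the active NN can be treated as already expired after a restart or failover, once the edit log is replayed.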
[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16873175#comment-16873175 ] He Xiaoqiao commented on HDFS-12862: Thanks [~Wang XL] for reporting and working on this issue. The failed unit test seems unrelated to this patch. [~jojochuang], would you mind taking another review?
[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16873097#comment-16873097 ] Hadoop QA commented on HDFS-12862:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 23s | trunk passed |
| +1 | compile | 0m 57s | trunk passed |
| +1 | checkstyle | 0m 38s | trunk passed |
| +1 | mvnsite | 1m 0s | trunk passed |
| +1 | shadedclient | 11m 31s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 55s | trunk passed |
| +1 | javadoc | 0m 49s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 55s | the patch passed |
| +1 | compile | 0m 52s | the patch passed |
| +1 | javac | 0m 52s | the patch passed |
| +1 | checkstyle | 0m 34s | the patch passed |
| +1 | mvnsite | 0m 55s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 55s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 1s | the patch passed |
| +1 | javadoc | 0m 44s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 80m 12s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
| | | 132m 5s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972929/HDFS-12862.007.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d727ea2b9d93 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a79bdf7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/27085/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/27085/testReport/ |
| Max. process+thread count | 4883 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output |
[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871506#comment-16871506 ] Hadoop QA commented on HDFS-12862:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 6s | trunk passed |
| +1 | compile | 1m 2s | trunk passed |
| +1 | checkstyle | 0m 45s | trunk passed |
| +1 | mvnsite | 1m 7s | trunk passed |
| +1 | shadedclient | 12m 56s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 3s | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 52s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 0s | the patch passed |
| +1 | compile | 0m 55s | the patch passed |
| +1 | javac | 0m 55s | the patch passed |
| -0 | checkstyle | 0m 37s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 65 unchanged - 0 fixed = 67 total (was 65) |
| +1 | mvnsite | 0m 59s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 51s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 1s | the patch passed |
| +1 | javadoc | 0m 44s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 79m 43s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 36s | The patch does not generate ASF License warnings. |
| | | 135m 21s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972744/HDFS-12862.006.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux cbab2fc8de87 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b28ddb2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/27056/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| checkstyle |
[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871090#comment-16871090 ] He Xiaoqiao commented on HDFS-12862: [~Wang XL], thanks for your report and patch.
a. Please fix the checkstyle issues; refer to https://builds.apache.org/job/PreCommit-HDFS-Build/27050/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
b. Please check the failed unit tests, especially {{TestCacheDirectives}}.
[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871089#comment-16871089 ] Hadoop QA commented on HDFS-12862:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 16m 43s | trunk passed |
| +1 | compile | 0m 56s | trunk passed |
| +1 | checkstyle | 0m 39s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | shadedclient | 12m 26s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 59s | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 46s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 58s | the patch passed |
| +1 | compile | 0m 50s | the patch passed |
| +1 | javac | 0m 50s | the patch passed |
| -0 | checkstyle | 0m 38s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 31 new + 65 unchanged - 0 fixed = 96 total (was 65) |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 10s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 4s | the patch passed |
| +1 | javadoc | 0m 51s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 81m 40s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 134m 33s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingMultipleRacks |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
| | hadoop.hdfs.server.namenode.TestCacheDirectives |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.server.datanode.TestBPOfferService |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972699/HDFS-12862.005.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b7e49028aac5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b28ddb2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| findbugs |
[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871084#comment-16871084 ] Hadoop QA commented on HDFS-12862:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 18m 20s | trunk passed |
| +1 | compile | 1m 4s | trunk passed |
| +1 | checkstyle | 0m 47s | trunk passed |
| +1 | mvnsite | 1m 6s | trunk passed |
| +1 | shadedclient | 13m 25s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 2s | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 51s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 1s | the patch passed |
| +1 | compile | 0m 54s | the patch passed |
| +1 | javac | 0m 54s | the patch passed |
| -0 | checkstyle | 0m 37s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 65 unchanged - 0 fixed = 72 total (was 65) |
| +1 | mvnsite | 1m 0s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 35s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 12s | the patch passed |
| +1 | javadoc | 0m 47s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 108m 44s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 167m 14s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
| | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
| | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| | hadoop.hdfs.server.namenode.TestCacheDirectives |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972696/HDFS-12862.005.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 36ff5e9db75d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b28ddb2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| findbugs |
[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16870960#comment-16870960 ] Wang XL commented on HDFS-12862: Thanks [~hexiaoqiao] and [~jojochuang] for your comments. I have checked the failed UTs and run the tests locally; all UTs pass. Submitting patch v005 to trigger Jenkins.
[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867911#comment-16867911 ] He Xiaoqiao commented on HDFS-12862: Thanks [~jojochuang] for your comments. I have synced the info to [~Wang XL] offline, and he will update the patch later. Thanks again.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867217#comment-16867217 ]

Wei-Chiu Chuang commented on HDFS-12862:

The patch doesn't pass the test. [~Wang XL] would you please double-check?
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867003#comment-16867003 ]

Hadoop QA commented on HDFS-12862:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 19m 14s | trunk passed |
| +1 | compile | 1m 9s | trunk passed |
| +1 | checkstyle | 0m 44s | trunk passed |
| +1 | mvnsite | 1m 12s | trunk passed |
| +1 | shadedclient | 13m 28s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 6s | trunk passed |
| +1 | javadoc | 0m 49s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 0s | the patch passed |
| +1 | compile | 0m 55s | the patch passed |
| +1 | javac | 0m 55s | the patch passed |
| -0 | checkstyle | 0m 38s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 65 unchanged - 0 fixed = 72 total (was 65) |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 30s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 12s | the patch passed |
| +1 | javadoc | 0m 47s | the patch passed |
|| Other Tests ||
| -1 | unit | 79m 30s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 137m 44s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMultipleNNPortQOP |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | hadoop.hdfs.server.namenode.TestCacheDirectives |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12972125/HDFS-12862-trunk.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 19429d373588 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3c1a1ce |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/26996/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit |
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866882#comment-16866882 ]

Wei-Chiu Chuang commented on HDFS-12862:

This is a good fix. The patch got stalled, but other than a trivial conflict it still applies. Attached a patch after resolving the conflict; let's see what precommit says. I'll give another review afterwards.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691613#comment-16691613 ]

Hadoop QA commented on HDFS-12862:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 19m 3s | trunk passed |
| +1 | compile | 0m 58s | trunk passed |
| +1 | checkstyle | 0m 53s | trunk passed |
| +1 | mvnsite | 1m 0s | trunk passed |
| +1 | shadedclient | 12m 48s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 2s | trunk passed |
| +1 | javadoc | 0m 53s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 58s | the patch passed |
| +1 | compile | 0m 52s | the patch passed |
| +1 | javac | 0m 52s | the patch passed |
| -0 | checkstyle | 0m 44s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 65 unchanged - 0 fixed = 72 total (was 65) |
| +1 | mvnsite | 0m 57s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 8s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 0s | the patch passed |
| +1 | javadoc | 0m 48s | the patch passed |
|| Other Tests ||
| -1 | unit | 74m 6s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 130m 56s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestCacheDirectives |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12935653/HDFS-12862-trunk.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f8413022f719 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a7ca6a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/2/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/2/testReport/ |
| Max.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691458#comment-16691458 ]

He Xiaoqiao commented on HDFS-12862:

LGTM for [^HDFS-12862-trunk.003.patch]. Ping [~daryn], [~jojochuang], do you mind another review?
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16581859#comment-16581859 ]

Wang XL commented on HDFS-12862:

Checked the failed UTs (TestNameNodeMetadataConsistency, TestCacheDirectives) and tested locally; they seem to work fine. Ping [~daryn], [~hexiaoqiao], [~jojochuang], any further suggestions?
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580794#comment-16580794 ]

genericqa commented on HDFS-12862:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 31m 7s | trunk passed |
| +1 | compile | 0m 56s | trunk passed |
| +1 | checkstyle | 0m 15s | trunk passed |
| +1 | mvnsite | 1m 2s | trunk passed |
| +1 | shadedclient | 12m 28s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 56s | trunk passed |
| +1 | javadoc | 0m 49s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 1s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| +1 | checkstyle | 0m 10s | the patch passed |
| +1 | mvnsite | 0m 56s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 5s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 57s | the patch passed |
| +1 | javadoc | 0m 46s | the patch passed |
|| Other Tests ||
| -1 | unit | 96m 23s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 164m 3s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | hadoop.hdfs.server.namenode.TestCacheDirectives |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12935653/HDFS-12862-trunk.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 30072ea31f4f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bdd0e01 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24780/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24780/testReport/ |
| Max. process+thread count | 3741 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24780/console |
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580686#comment-16580686 ]

Wang XL commented on HDFS-12862:

Thanks [~daryn] for your suggestions; submitted v003 following your advice and triggered Jenkins.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580079#comment-16580079 ]

genericqa commented on HDFS-12862:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 30m 19s | trunk passed |
| +1 | compile | 1m 6s | trunk passed |
| +1 | checkstyle | 0m 14s | trunk passed |
| +1 | mvnsite | 1m 14s | trunk passed |
| +1 | shadedclient | 13m 40s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 9s | trunk passed |
| +1 | javadoc | 0m 59s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 17s | the patch passed |
| +1 | compile | 1m 2s | the patch passed |
| +1 | javac | 1m 2s | the patch passed |
| +1 | checkstyle | 0m 13s | the patch passed |
| +1 | mvnsite | 1m 11s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 13m 7s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 15s | the patch passed |
| +1 | javadoc | 0m 52s | the patch passed |
|| Other Tests ||
| -1 | unit | 104m 40s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 175m 38s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.client.impl.TestBlockReaderLocal |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12935552/HDFS-12862-trunk.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 15dcf100971f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d1830d8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/24775/artifact/out/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24775/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24775/testReport/ |
| Max. process+thread count | 3023 (vs. ulimit of 1) |
| modules | C:
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579903#comment-16579903 ] Daryn Sharp commented on HDFS-12862:
Rather than rebuild the object, why not make the serialization routines symmetrical? Since {{readCacheDirectiveInfo}} interprets the value as an absolute time, shouldn't {{writeCacheDirectiveInfo}} (which is just a few lines above it) do the same? Ex.
{code:java}
@@ -538,7 +538,7 @@ public static void writeCacheDirectiveInfo(DataOutputStream out,
       writeString(directive.getPool(), out);
     }
     if (directive.getExpiration() != null) {
-      writeLong(directive.getExpiration().getMillis(), out);
+      writeLong(directive.getExpiration().getAbsoluteMillis(), out);
     }
   }
{code}
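The asymmetry Daryn describes can be reproduced with a minimal, self-contained sketch. The {{Expiration}} class below is a hypothetical stand-in for Hadoop's {{CacheDirectiveInfo.Expiration}} (not the real class, though {{getMillis}}/{{getAbsoluteMillis}} mirror its accessors): a relative expiry written raw and then read back with absolute semantics resolves to the wrong instant, while resolving to absolute millis before writing round-trips correctly.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ExpirySymmetryDemo {

    /**
     * Hypothetical stand-in for CacheDirectiveInfo.Expiration: the value is
     * either relative (an offset from "now") or absolute (an epoch instant),
     * both in milliseconds.
     */
    public static final class Expiration {
        public final long millis;
        public final boolean relative;

        public Expiration(long millis, boolean relative) {
            this.millis = millis;
            this.relative = relative;
        }

        /** Raw value; relative or absolute depending on how it was built. */
        public long getMillis() {
            return millis;
        }

        /** Always an absolute instant, resolved against the given clock. */
        public long getAbsoluteMillis(long now) {
            return relative ? now + millis : millis;
        }
    }

    /** Serialize one long the way the edit log would. */
    public static byte[] writeLong(long v) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeLong(v);
        return bos.toByteArray();
    }

    /** Deserialize one long; the replaying reader treats it as absolute. */
    public static long readLong(byte[] bytes) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(bytes)).readLong();
    }

    public static void main(String[] args) throws IOException {
        long now = 1_700_000_000_000L;                 // clock on the active NN
        Expiration rel = new Expiration(60_000, true); // "expire 60s from now"

        // Buggy write path: the raw relative offset goes into the edit log,
        // but the standby interprets the long as an absolute instant.
        long buggy = readLong(writeLong(rel.getMillis()));

        // Symmetric write path: resolve to an absolute instant before writing,
        // matching the reader's newAbsolute() interpretation.
        long fixed = readLong(writeLong(rel.getAbsoluteMillis(now)));

        System.out.println(buggy == rel.getAbsoluteMillis(now)); // false
        System.out.println(fixed == rel.getAbsoluteMillis(now)); // true
    }
}
```

Converting to absolute time on the write side (rather than changing the reader) also keeps already-written edit logs and fsimages readable, which is why fixing {{writeCacheDirectiveInfo}} is the less invasive direction.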
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579315#comment-16579315 ] genericqa commented on HDFS-12862:
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 8m 10s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-2.7.1 Compile Tests ||
| -1 | mvninstall | 1m 46s | root in branch-2.7.1 failed. |
| -1 | compile | 0m 19s | hadoop-hdfs in branch-2.7.1 failed with JDK v1.8.0_181. |
| -1 | compile | 0m 10s | hadoop-hdfs in branch-2.7.1 failed with JDK v9-internal. |
| +1 | checkstyle | 0m 27s | branch-2.7.1 passed |
| -1 | mvnsite | 0m 20s | hadoop-hdfs in branch-2.7.1 failed. |
| -1 | findbugs | 0m 9s | hadoop-hdfs in branch-2.7.1 failed. |
| -1 | javadoc | 0m 10s | hadoop-hdfs in branch-2.7.1 failed with JDK v1.8.0_181. |
| -1 | javadoc | 0m 10s | hadoop-hdfs in branch-2.7.1 failed with JDK v9-internal. |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 9s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 10s | hadoop-hdfs in the patch failed with JDK v1.8.0_181. |
| -1 | javac | 0m 10s | hadoop-hdfs in the patch failed with JDK v1.8.0_181. |
| -1 | compile | 0m 10s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| -1 | javac | 0m 10s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| -0 | checkstyle | 0m 21s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| -1 | mvnsite | 0m 11s | hadoop-hdfs in the patch failed. |
| -1 | whitespace | 0m 0s | The patch has 60 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | findbugs | 0m 10s | hadoop-hdfs in the patch failed. |
| -1 | javadoc | 0m 9s | hadoop-hdfs in the patch failed with JDK v1.8.0_181. |
| -1 | javadoc | 0m 10s | hadoop-hdfs in the patch failed with JDK v9-internal. |
|| || || || Other Tests ||
| -1 | unit | 0m 9s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 15m 10s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:date2018-08-14 |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901618/HDFS-12862-branch-2.7.1.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 3e0ab284a2ec 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579299#comment-16579299 ] He Xiaoqiao commented on HDFS-12862:
Thanks [~Wang XL] for reporting and working on this. Some minor comments:
1. The Jenkins compile failed; the patch may not be rebased on the correct branch.
2. Some whitespace issues were reported; please fix them.
ping [~drankye] [~jojochuang], would you mind taking a look? Thanks.
[ https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287285#comment-16287285 ] genericqa commented on HDFS-12862:
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 10m 44s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-2.7.1 Compile Tests ||
| -1 | mvninstall | 2m 2s | root in branch-2.7.1 failed. |
| -1 | compile | 0m 15s | hadoop-hdfs in branch-2.7.1 failed with JDK v1.8.0_151. |
| -1 | compile | 0m 11s | hadoop-hdfs in branch-2.7.1 failed with JDK v9-internal. |
| +1 | checkstyle | 0m 32s | branch-2.7.1 passed |
| -1 | mvnsite | 0m 17s | hadoop-hdfs in branch-2.7.1 failed. |
| -1 | findbugs | 0m 9s | hadoop-hdfs in branch-2.7.1 failed. |
| -1 | javadoc | 0m 10s | hadoop-hdfs in branch-2.7.1 failed with JDK v1.8.0_151. |
| -1 | javadoc | 0m 10s | hadoop-hdfs in branch-2.7.1 failed with JDK v9-internal. |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 9s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 8s | hadoop-hdfs in the patch failed with JDK v1.8.0_151. |
| -1 | javac | 0m 8s | hadoop-hdfs in the patch failed with JDK v1.8.0_151. |
| -1 | compile | 0m 9s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| -1 | javac | 0m 9s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| -0 | checkstyle | 0m 21s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| -1 | mvnsite | 0m 10s | hadoop-hdfs in the patch failed. |
| -1 | whitespace | 0m 0s | The patch has 60 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | findbugs | 0m 8s | hadoop-hdfs in the patch failed. |
| -1 | javadoc | 0m 8s | hadoop-hdfs in the patch failed with JDK v1.8.0_151. |
| -1 | javadoc | 0m 9s | hadoop-hdfs in the patch failed with JDK v9-internal. |
|| || || || Other Tests ||
| -1 | unit | 0m 10s | hadoop-hdfs in the patch failed with JDK v9-internal. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 23m 3s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:date2017-12-12 |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901618/HDFS-12862-branch-2.7.1.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c587dfea9a13 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64