[jira] [Commented] (HDFS-12840) Creating a file with non-default EC policy in a EC zone is not correctly serialized in the editlog
[ https://issues.apache.org/jira/browse/HDFS-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278150#comment-16278150 ] SammiChen commented on HDFS-12840: -- Thanks [~eddyxu] ! The latest patch looks good overall. 1. {{addFileForEditLog}} in {{FsDirWriteFileOp}} bq. ErasureCodingPolicy ecPolicy = null; The variable declaration can be moved into the scope of {{isStriped}}. 2. TestOfflineEditsViewer fails locally with editsStored.03. The current solution appends an "ERASURE_CODING_POLICY_ID" with value "63" to each "OP_ADD" operation. Do you think a "0" value for the "replication policy ID" would be more appropriate in this case? > Creating a file with non-default EC policy in a EC zone is not correctly > serialized in the editlog > -- > > Key: HDFS-12840 > URL: https://issues.apache.org/jira/browse/HDFS-12840 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12840.00.patch, HDFS-12840.01.patch, > HDFS-12840.02.patch, HDFS-12840.03.patch, HDFS-12840.04.patch, > HDFS-12840.reprod.patch, editsStored, editsStored, editsStored.03 > > > When creating a replicated file in an existing EC zone, the edit log does not > differentiate it from an EC file. When {{FSEditLogLoader}} replays the edits, > this file is treated as an EC file; as a result, it crashes the NN because the > blocks of this file are replicated, which does not match the {{INode}}. 
> {noformat} > ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered > exception on operation AddBlockOp [path=/system/balancer.id, > penultimateBlock=NULL, lastBlock=blk_1073743259_2455, RpcClientId=, > RpcCallId=-2] > java.lang.IllegalArgumentException: reportedBlock is not striped > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.addStorage(BlockInfoStriped.java:118) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.addBlock(DatanodeStorageInfo.java:256) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlock(BlockManager.java:3141) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlockUnderConstruction(BlockManager.java:3068) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processAndHandleReportedBlock(BlockManager.java:3864) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessages(BlockManager.java:2916) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessagesForBlock(BlockManager.java:2903) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.addNewBlock(FSEditLogLoader.java:1069) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:532) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:293) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:427) > at > 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
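SammiChen's first review point above is about variable scope in {{addFileForEditLog}}: rather than declaring {{ecPolicy}} at method scope initialized to null, the declaration can live entirely inside the striped branch so the null never escapes into the replicated-file path. A minimal sketch of that scoping idea, with invented names (this is not the real {{FsDirWriteFileOp}} code):

```java
// Illustrative only: mimics the shape of SammiChen's suggestion, not Hadoop code.
// Before: ErasureCodingPolicy ecPolicy = null; if (isStriped) { ecPolicy = ...; }
// After: declare the policy only inside the branch that needs it.
class ScopingSketch {
  static String describe(boolean isStriped) {
    if (isStriped) {
      String ecPolicy = "RS-6-3-1024k"; // declared only where it is used
      return "striped:" + ecPolicy;
    }
    return "replicated"; // no null policy variable in scope on this path
  }
}
```

The narrower scope makes it impossible for the replicated-file branch to accidentally read the uninitialized policy.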
[jira] [Commented] (HDFS-12889) Router UI is missing robots.txt file
[ https://issues.apache.org/jira/browse/HDFS-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278071#comment-16278071 ] genericqa commented on HDFS-12889: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 25m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12889 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12900610/HDFS-12889.01.patch | | Optional Tests | asflicense shadedclient | | uname | Linux cb93da2c29a8 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9f1bdaf | | maven | version: Apache Maven 3.3.9 | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/22285/artifact/out/patch-asflicense-problems.txt | | Max. process+thread count | 410 (vs. ulimit of 5000) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22285/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Router UI is missing robots.txt file > > > Key: HDFS-12889 > URL: https://issues.apache.org/jira/browse/HDFS-12889 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12889.01.patch > > > similar to HDFS-9651 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
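For context, HDFS-9651 closed the same gap for the NameNode and DataNode web UIs. The conventional disallow-all robots.txt for an internal admin UI (assumed here as the likely content; the actual patch may differ) is simply:

```
User-agent: *
Disallow: /
```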
[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics
[ https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278061#comment-16278061 ] genericqa commented on HDFS-12883: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 7s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 37s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}206m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12883 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12900588/HDFS-12883.002.patch | | Optional Tests | asflicense mvnsite compile javac javadoc mvninstall unit shadedclient findbugs checkstyle | |
[jira] [Updated] (HDFS-12889) Router UI is missing robots.txt file
[ https://issues.apache.org/jira/browse/HDFS-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12889: -- Status: Patch Available (was: In Progress)
[jira] [Updated] (HDFS-12889) Router UI is missing robots.txt file
[ https://issues.apache.org/jira/browse/HDFS-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12889: -- Attachment: HDFS-12889.01.patch
[jira] [Updated] (HDFS-12889) Router UI is missing robots.txt file
[ https://issues.apache.org/jira/browse/HDFS-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12889: -- Description: similar to HDFS-9651
[jira] [Updated] (HDFS-12889) Router UI is missing robots.txt file
[ https://issues.apache.org/jira/browse/HDFS-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12889: -- Environment: (was: Similar to HDFS-9651)
[jira] [Work started] (HDFS-12889) Router UI is missing robots.txt file
[ https://issues.apache.org/jira/browse/HDFS-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12889 started by Bharat Viswanadham.
[jira] [Created] (HDFS-12889) Router UI is missing robots.txt file
Bharat Viswanadham created HDFS-12889: - Summary: Router UI is missing robots.txt file Key: HDFS-12889 URL: https://issues.apache.org/jira/browse/HDFS-12889 Project: Hadoop HDFS Issue Type: Bug Environment: Similar to HDFS-9651 Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham
[jira] [Updated] (HDFS-12872) EC Checksum broken when BlockAccessToken is enabled
[ https://issues.apache.org/jira/browse/HDFS-12872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12872: - Attachment: HDFS-12872.02.patch Thanks Uma for the review. Good catch; removed the else condition in patch 2. Also cleaned up the test by just setting the config on the existing test; I think this gives us the same coverage but saves minutes of test run time. The failed tests in the last run look unrelated and environmental. > EC Checksum broken when BlockAccessToken is enabled > --- > > Key: HDFS-12872 > URL: https://issues.apache.org/jira/browse/HDFS-12872 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12872.01.patch, HDFS-12872.02.patch, > HDFS-12872.repro.patch > > > It appears {{hdfs ec -checksum}} doesn't work when block access token is > enabled.
[jira] [Commented] (HDFS-11733) TestGetBlocks.getBlocksWithException() ignores datanode and size parameters.
[ https://issues.apache.org/jira/browse/HDFS-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277979#comment-16277979 ] genericqa commented on HDFS-11733: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}145m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestErasureCodingPolicies | | | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.TestErasureCodingMultipleRacks | | | hadoop.hdfs.server.diskbalancer.TestConnectors | | | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestBlockStoragePolicy | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestDFSStripedInputStream | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC | | | hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-11733 | | JIRA Patch URL |
[jira] [Commented] (HDFS-12840) Creating a file with non-default EC policy in a EC zone is not correctly serialized in the editlog
[ https://issues.apache.org/jira/browse/HDFS-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277956#comment-16277956 ] genericqa commented on HDFS-12840: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 729 unchanged - 2 fixed = 735 total (was 731) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 12s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 4s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}157m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Possible null pointer dereference of replication in org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType, Short, Byte) Dereferenced at INodeFile.java:replication in org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType, Short, Byte) Dereferenced at INodeFile.java:[line 210] | | Failed junit tests | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.TestFileConcurrentReader | | | hadoop.hdfs.server.namenode.TestQuotaByStorageType | | | hadoop.hdfs.TestFileAppend2 | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | |
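The FindBugs warning above ("Possible null pointer dereference of replication") is the classic auto-unboxing hazard: unboxing a boxed {{Short}} that can be null throws {{NullPointerException}}. A minimal illustration of the hazard and the guard, with simplified invented names and an illustrative bit encoding (this is not the real {{INodeFile.HeaderFormat}} code):

```java
// Illustrative only: same shape as getBlockLayoutRedundancy(BlockType, Short, Byte),
// where the boxed arguments are nullable depending on the block layout.
class RedundancySketch {
  static long layoutRedundancy(boolean striped, Short replication, Byte ecPolicyId) {
    if (striped) {
      // replication is legitimately null here and must not be unboxed
      if (ecPolicyId == null) {
        throw new IllegalArgumentException("EC policy id required for striped blocks");
      }
      return (1L << 11) | ecPolicyId; // layout bit + policy id (made-up encoding)
    }
    if (replication == null) { // guard before unboxing, the fix FindBugs wants
      throw new IllegalArgumentException("replication required for contiguous blocks");
    }
    return replication; // safe unboxing: proven non-null
  }
}
```

Without the explicit null check, the implicit `replication.shortValue()` on the contiguous path is exactly the dereference FindBugs flags.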
[jira] [Commented] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277942#comment-16277942 ] Íñigo Goiri commented on HDFS-12886: Let's see what [^HDFS-12886.002.patch] looks like, but that should be fine. Then there is a more philosophical question regarding minReplication: should the situation we are fixing even be allowed? This is the correct fix, but I am not sure we should even be getting into this situation. > Ignore minReplication for block recovery > > > Key: HDFS-12886 > URL: https://issues.apache.org/jira/browse/HDFS-12886 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12886.001.patch, HDFS-12886.002.patch > > > Ignore minReplication for blocks that went through recovery, and allow the NN to > complete them and replicate.
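A hypothetical sketch of the idea the HDFS-12886 summary describes, with invented names (this is not the real BlockManager code): a block that went through lease recovery is allowed to complete with at least one live replica even below minReplication, so the NameNode can finish the file and then re-replicate, instead of leaving it stuck.

```java
// Illustrative only: the completion decision the JIRA proposes to relax.
class CompletionSketch {
  static boolean canComplete(int liveReplicas, int minReplication, boolean wentThroughRecovery) {
    if (wentThroughRecovery) {
      return liveReplicas >= 1; // recovery case: one good replica is enough to complete
    }
    return liveReplicas >= minReplication; // normal case: enforce minReplication
  }
}
```

Íñigo's "philosophical question" above is whether the recovery branch should ever be reachable at all, i.e. whether the pipeline should have been allowed to shrink below minReplication in the first place.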
[jira] [Commented] (HDFS-12885) Add visibility/stability annotations
[ https://issues.apache.org/jira/browse/HDFS-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277932#comment-16277932 ] genericqa commented on HDFS-12885: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 58s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 48s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 16s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 33s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 42s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-9806 has 1 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 34s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 7s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 14s{color} | {color:green} hadoop-fs2img in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}228m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestErasureCodingPolicies | | | hadoop.hdfs.TestFileChecksum | | | hadoop.fs.TestUnbuffer | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDFSStripedOutputStream | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.TestPread | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | |
[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277910#comment-16277910 ] genericqa commented on HDFS-12882: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 41 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 7s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 16s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 26s{color} | {color:orange} root: The patch generated 36 new + 2069 unchanged - 12 fixed = 2105 total (was 2081) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 37s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 17s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 48s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}231m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | Class org.apache.hadoop.hdfs.protocol.HdfsPathHandle defines non-transient non-serializable
[jira] [Updated] (HDFS-12883) RBF: Document Router and State Store metrics
[ https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12883: - Attachment: HDFS-12883.002.patch > RBF: Document Router and State Store metrics > > > Key: HDFS-12883 > URL: https://issues.apache.org/jira/browse/HDFS-12883 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.0.0-alpha3 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: RBF > Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, > metric-screen-shot.jpg > > > Document Router and State Store metrics in doc. This will be helpful for > users to monitor RBF. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics
[ https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277905#comment-16277905 ] Yiqun Lin commented on HDFS-12883: -- bq. Not sure router context should be a section; JournalNode and datanode are subsections of.. [~elgoiri], I think the Router and State Store metrics would be better as a subsection under the dfs context, like {{JournalNode}} and {{datanode}}. We don't need to define a new context for the router. These metrics should be one part of dfs. Attaching the updated patch. > RBF: Document Router and State Store metrics > > > Key: HDFS-12883 > URL: https://issues.apache.org/jira/browse/HDFS-12883 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.0.0-alpha3 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: RBF > Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, > metric-screen-shot.jpg > > > Document Router and State Store metrics in doc. This will be helpful for > users to monitor RBF.
[jira] [Commented] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
[ https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277897#comment-16277897 ] Virajith Jalaparti commented on HDFS-12887: --- The failed tests are unrelated. [~chris.douglas], can you take a look? > [READ] Allow Datanodes with Provided volumes to start when blocks with the > same id exist locally > > > Key: HDFS-12887 > URL: https://issues.apache.org/jira/browse/HDFS-12887 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12887-HDFS-9806.001.patch > > > Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when > an existing block in the volumemap has the same id.
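The tolerant direction described above ("not throw an exception even when an existing block in the volumemap has the same id") can be sketched as follows. This is an illustrative stand-in, not the actual {{ProvidedVolumeImpl.getVolumeMap}} code; the class, method, and replica names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: when building a volume map, tolerate a later replica
// with a duplicate block id instead of aborting the scan with an exception.
public class VolumeMapSketch {

  // Keeps the first replica seen for each block id; later duplicates are
  // skipped rather than causing the Datanode volume scan to fail.
  public static Map<Long, String> buildVolumeMap(long[] blockIds, String[] replicas) {
    Map<Long, String> map = new HashMap<>();
    for (int i = 0; i < blockIds.length; i++) {
      map.putIfAbsent(blockIds[i], replicas[i]);  // skip, don't throw, on a duplicate id
    }
    return map;
  }

  public static void main(String[] args) {
    Map<Long, String> map = buildVolumeMap(
        new long[] {1L, 2L, 1L},
        new String[] {"local-1", "local-2", "provided-1"});
    // The duplicate id 1 did not abort the scan; the first replica wins.
    System.out.println(map.size());   // 2
    System.out.println(map.get(1L));  // local-1
  }
}
```

The design question left open by such a sketch is which replica should win on a collision (local or provided); the code above simply keeps the first one seen.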
[jira] [Commented] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
[ https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277889#comment-16277889 ] genericqa commented on HDFS-12887: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 26s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 39s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}149m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12887 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12900577/HDFS-12887-HDFS-9806.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e1498b826d3e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-9806 / ac98231 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22278/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22278/testReport/ | | Max. process+thread count | 3805 (vs. ulimit of
[jira] [Updated] (HDFS-12840) Creating a file with non-default EC policy in a EC zone is not correctly serialized in the editlog
[ https://issues.apache.org/jira/browse/HDFS-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12840: --- Summary: Creating a file with non-default EC policy in a EC zone is not correctly serialized in the editlog (was: Creating a file with non-default EC policy in a EC zone does not correctly serialized in EditLogs) > Creating a file with non-default EC policy in a EC zone is not correctly > serialized in the editlog > -- > > Key: HDFS-12840 > URL: https://issues.apache.org/jira/browse/HDFS-12840 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12840.00.patch, HDFS-12840.01.patch, > HDFS-12840.02.patch, HDFS-12840.03.patch, HDFS-12840.04.patch, > HDFS-12840.reprod.patch, editsStored, editsStored, editsStored.03 > > > When creating a replicated file in an existing EC zone, the edit log does not > differentiate it from an EC file. When {{FSEditLogLoader}} replays the edits, > this file is treated as an EC file; as a result, it crashes the NN because the > blocks of this file are replicated, which does not match the {{INode}}. 
> {noformat} > ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered > exception on operation AddBlockOp [path=/system/balancer.id, > penultimateBlock=NULL, lastBlock=blk_1073743259_2455, RpcClientId=, > RpcCallId=-2] > java.lang.IllegalArgumentException: reportedBlock is not striped > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.addStorage(BlockInfoStriped.java:118) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.addBlock(DatanodeStorageInfo.java:256) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlock(BlockManager.java:3141) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlockUnderConstruction(BlockManager.java:3068) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processAndHandleReportedBlock(BlockManager.java:3864) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessages(BlockManager.java:2916) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessagesForBlock(BlockManager.java:2903) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.addNewBlock(FSEditLogLoader.java:1069) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:532) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:293) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:427) > at > 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397) > {noformat}
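The ambiguity behind this bug can be shown with a small, self-contained sketch. This is not the actual {{FSEditLogOp}}/{{FSEditLogLoader}} code: the point is only that if the AddOp itself carries the file's erasure coding policy id, replay no longer has to guess the block type from the enclosing EC zone. Treating policy id 0 as "replicated" is an assumption taken from the review discussion, not a confirmed HDFS constant:

```java
// Illustrative sketch of why the serialized AddOp must record the erasure
// coding policy id: without it, a replicated file created inside an EC zone
// is indistinguishable from a striped one at replay time.
public class AddOpSketch {

  // Hypothetical sentinel marking a replicated (non-striped) file.
  static final byte REPLICATION_POLICY_ID = 0;

  // The policy id travels with the op instead of being re-derived from the
  // enclosing EC zone when the edit log is replayed.
  static byte[] serializeAddOp(byte ecPolicyId) {
    return new byte[] { ecPolicyId };
  }

  static String replayBlockType(byte[] op) {
    return op[0] == REPLICATION_POLICY_ID ? "contiguous" : "striped";
  }

  public static void main(String[] args) {
    // A replicated file in an EC zone replays with contiguous blocks...
    System.out.println(replayBlockType(serializeAddOp(REPLICATION_POLICY_ID)));  // contiguous
    // ...while a file created with an actual EC policy id replays as striped.
    System.out.println(replayBlockType(serializeAddOp((byte) 63)));              // striped
  }
}
```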
[jira] [Updated] (HDFS-12840) Creating a file with non-default EC policy in a EC zone does not correctly serialized in EditLogs
[ https://issues.apache.org/jira/browse/HDFS-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12840: - Summary: Creating a file with non-default EC policy in a EC zone does not correctly serialized in EditLogs (was: Creating a replicated file in a EC zone does not correctly serialized in EditLogs) > Creating a file with non-default EC policy in a EC zone does not correctly > serialized in EditLogs > - > > Key: HDFS-12840 > URL: https://issues.apache.org/jira/browse/HDFS-12840 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12840.00.patch, HDFS-12840.01.patch, > HDFS-12840.02.patch, HDFS-12840.03.patch, HDFS-12840.04.patch, > HDFS-12840.reprod.patch, editsStored, editsStored, editsStored.03 > > > When creating a replicated file in an existing EC zone, the edit log does not > differentiate it from an EC file. When {{FSEditLogLoader}} replays the edits, > this file is treated as an EC file; as a result, it crashes the NN because the > blocks of this file are replicated, which does not match the {{INode}}. 
> {noformat} > ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered > exception on operation AddBlockOp [path=/system/balancer.id, > penultimateBlock=NULL, lastBlock=blk_1073743259_2455, RpcClientId=, > RpcCallId=-2] > java.lang.IllegalArgumentException: reportedBlock is not striped > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.addStorage(BlockInfoStriped.java:118) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.addBlock(DatanodeStorageInfo.java:256) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlock(BlockManager.java:3141) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlockUnderConstruction(BlockManager.java:3068) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processAndHandleReportedBlock(BlockManager.java:3864) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessages(BlockManager.java:2916) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessagesForBlock(BlockManager.java:2903) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.addNewBlock(FSEditLogLoader.java:1069) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:532) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:293) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:427) > at > 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397) > {noformat}
[jira] [Commented] (HDFS-11733) TestGetBlocks.getBlocksWithException() ignores datanode and size parameters.
[ https://issues.apache.org/jira/browse/HDFS-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277870#comment-16277870 ] genericqa commented on HDFS-11733: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 18s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestDistributedFileSystemWithECFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | | hadoop.fs.TestUnbuffer | | |
[jira] [Updated] (HDFS-12832) INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to NameNode exit
[ https://issues.apache.org/jira/browse/HDFS-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HDFS-12832: -- Fix Version/s: (was: 2.8.4) 2.8.3 > INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to > NameNode exit > > > Key: HDFS-12832 > URL: https://issues.apache.org/jira/browse/HDFS-12832 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.7.4, 3.0.0-beta1 >Reporter: DENG FEI >Assignee: Konstantin Shvachko >Priority: Critical > Fix For: 2.8.3, 2.7.5, 3.1.0, 2.10.0, 2.9.1, 3.0.1 > > Attachments: HDFS-12832-branch-2.002.patch, > HDFS-12832-branch-2.7.002.patch, HDFS-12832-trunk-001.patch, > HDFS-12832.002.patch, exception.log > > > {code:title=INode.java|borderStyle=solid} > public String getFullPathName() { > // Get the full path name of this inode. > if (isRoot()) { > return Path.SEPARATOR; > } > // compute size of needed bytes for the path > int idx = 0; > for (INode inode = this; inode != null; inode = inode.getParent()) { > // add component + delimiter (if not tail component) > idx += inode.getLocalNameBytes().length + (inode != this ? 1 : 0); > } > byte[] path = new byte[idx]; > for (INode inode = this; inode != null; inode = inode.getParent()) { > if (inode != this) { > path[--idx] = Path.SEPARATOR_CHAR; > } > byte[] name = inode.getLocalNameBytes(); > idx -= name.length; > System.arraycopy(name, 0, path, idx, name.length); > } > return DFSUtil.bytes2String(path); > } > {code} > We found an ArrayIndexOutOfBoundsException at > _{color:#707070}System.arraycopy(name, 0, path, idx, name.length){color}_ > when the ReplicaMonitor runs, and the NameNode will quit. > It seems the two loops are not synchronized, so the path's length changed between them.
[jira] [Commented] (HDFS-12832) INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to NameNode exit
[ https://issues.apache.org/jira/browse/HDFS-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277839#comment-16277839 ] Junping Du commented on HDFS-12832: --- Merged to branch-2.8.3 as well. > INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to > NameNode exit > > > Key: HDFS-12832 > URL: https://issues.apache.org/jira/browse/HDFS-12832 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.7.4, 3.0.0-beta1 >Reporter: DENG FEI >Assignee: Konstantin Shvachko >Priority: Critical > Fix For: 2.8.3, 2.7.5, 3.1.0, 2.10.0, 2.9.1, 3.0.1 > > Attachments: HDFS-12832-branch-2.002.patch, > HDFS-12832-branch-2.7.002.patch, HDFS-12832-trunk-001.patch, > HDFS-12832.002.patch, exception.log > > > {code:title=INode.java|borderStyle=solid} > public String getFullPathName() { > // Get the full path name of this inode. > if (isRoot()) { > return Path.SEPARATOR; > } > // compute size of needed bytes for the path > int idx = 0; > for (INode inode = this; inode != null; inode = inode.getParent()) { > // add component + delimiter (if not tail component) > idx += inode.getLocalNameBytes().length + (inode != this ? 1 : 0); > } > byte[] path = new byte[idx]; > for (INode inode = this; inode != null; inode = inode.getParent()) { > if (inode != this) { > path[--idx] = Path.SEPARATOR_CHAR; > } > byte[] name = inode.getLocalNameBytes(); > idx -= name.length; > System.arraycopy(name, 0, path, idx, name.length); > } > return DFSUtil.bytes2String(path); > } > {code} > We found an ArrayIndexOutOfBoundsException at > _{color:#707070}System.arraycopy(name, 0, path, idx, name.length){color}_ > when the ReplicaMonitor runs, and the NameNode will quit. > It seems the two loops are not synchronized, so the path's length changed between them.
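The unsynchronized two-pass structure quoted above can be reproduced in isolation. The sketch below is not the HDFS code itself; it mirrors the pattern of {{getFullPathName()}} (pass one sizes the buffer, pass two fills it from the tail) but takes the two snapshots of the path components as explicit parameters, so a "rename between the passes" can be simulated deterministically:

```java
// Minimal simulation of the race in INode.getFullPathName(): if the inode's
// ancestry changes between the sizing loop and the copy loop, the copy
// indexes outside the buffer, which is the ArrayIndexOutOfBoundsException
// from System.arraycopy seen in this issue.
public class TwoPassRaceSketch {

  public static String joinUnsafe(String[] sizedView, String[] copiedView) {
    // Pass one: size the buffer from the first snapshot.
    int idx = 0;
    for (int i = 0; i < sizedView.length; i++) {
      idx += sizedView[i].length() + (i != sizedView.length - 1 ? 1 : 0);
    }
    byte[] path = new byte[idx];
    // Pass two: fill the buffer right-to-left from the second snapshot.
    for (int i = copiedView.length - 1; i >= 0; i--) {
      if (i != copiedView.length - 1) {
        path[--idx] = '/';
      }
      byte[] name = copiedView[i].getBytes();
      idx -= name.length;
      // Throws when the second snapshot is longer than the first.
      System.arraycopy(name, 0, path, idx, name.length);
    }
    return new String(path);
  }

  public static void main(String[] args) {
    // Consistent snapshots behave as expected.
    System.out.println(joinUnsafe(new String[]{"a", "dir"}, new String[]{"a", "dir"}));  // a/dir
    // A rename between the two passes reproduces the crash.
    try {
      joinUnsafe(new String[]{"a", "dir"}, new String[]{"a", "renamed-dir"});
    } catch (IndexOutOfBoundsException e) {
      System.out.println("race reproduced: " + e.getClass().getSimpleName());
    }
  }
}
```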
[jira] [Commented] (HDFS-12638) Delete copy-on-truncate block along with the original block, when deleting a file being truncated
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277838#comment-16277838 ] Junping Du commented on HDFS-12638: --- Merged to branch-2.8.3 as well. > Delete copy-on-truncate block along with the original block, when deleting a > file being truncated > - > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang >Assignee: Konstantin Shvachko >Priority: Blocker > Fix For: 2.8.3, 2.7.5, 3.1.0, 2.10.0, 2.9.1, 3.0.1 > > Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, > HDFS-12638.003.patch, HDFS-12638.004.patch, OphanBlocksAfterTruncateDelete.jpg > > > The active NameNode exited due to an NPE. I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null. Looking through the history, I found that > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check of > whether BlockCollection is null. > The NN logs are as follows: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. 
> java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code}
[jira] [Updated] (HDFS-12638) Delete copy-on-truncate block along with the original block, when deleting a file being truncated
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HDFS-12638: -- Fix Version/s: (was: 2.8.4) 2.8.3 > Delete copy-on-truncate block along with the original block, when deleting a > file being truncated > - > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang >Assignee: Konstantin Shvachko >Priority: Blocker > Fix For: 2.8.3, 2.7.5, 3.1.0, 2.10.0, 2.9.1, 3.0.1 > > Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, > HDFS-12638.003.patch, HDFS-12638.004.patch, OphanBlocksAfterTruncateDelete.jpg > > > Active NamNode exit due to NPE, I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null, By view history I found > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] remove judging > whether BlockCollection is null. > NN logs are as following: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. 
> java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code}
[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12713: -- Status: Patch Available (was: Open) > [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata > and PROVIDED storage metadata > > > Key: HDFS-12713 > URL: https://issues.apache.org/jira/browse/HDFS-12713 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Ewan Higgs > Attachments: HDFS-12713-HDFS-9806.001.patch, > HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, > HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, > HDFS-12713-HDFS-9806.006.patch, HDFS-12713-HDFS-9806.007.patch > >
[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12713: -- Status: Open (was: Patch Available) > [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata > and PROVIDED storage metadata > > > Key: HDFS-12713 > URL: https://issues.apache.org/jira/browse/HDFS-12713 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Ewan Higgs > Attachments: HDFS-12713-HDFS-9806.001.patch, > HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, > HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, > HDFS-12713-HDFS-9806.006.patch, HDFS-12713-HDFS-9806.007.patch > >
[jira] [Commented] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277824#comment-16277824 ] genericqa commented on HDFS-12713: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 57s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 6m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 9s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 31s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 15s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 37s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 25m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 50s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 11s{color} | {color:orange} root: The patch generated 12 new + 581 unchanged - 9 fixed = 593 total (was 590) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 84 unchanged - 1 fixed = 84 total (was 85) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} hadoop-fs2img in the patch passed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}136m 0s{color} | {color:black} {color} | \\ \\
[jira] [Updated] (HDFS-12840) Creating a replicated file in a EC zone does not correctly serialized in EditLogs
[ https://issues.apache.org/jira/browse/HDFS-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12840: - Attachment: HDFS-12840.04.patch Fixed findbugs warnings in the {{04}} patch; it can run against {{editsStored.03}}. > Creating a replicated file in a EC zone does not correctly serialized in > EditLogs > - > > Key: HDFS-12840 > URL: https://issues.apache.org/jira/browse/HDFS-12840 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12840.00.patch, HDFS-12840.01.patch, > HDFS-12840.02.patch, HDFS-12840.03.patch, HDFS-12840.04.patch, > HDFS-12840.reprod.patch, editsStored, editsStored, editsStored.03 > > > When create a replicated file in an existing EC zone, the edit logs does not > differentiate it from an EC file. When {{FSEditLogLoader}} to replay edits, > this file is treated as EC file, as a results, it crashes the NN because the > blocks of this file are replicated, which does not match with {{INode}}. 
> {noformat} > ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered > exception on operation AddBlockOp [path=/system/balancer.id, > penultimateBlock=NULL, lastBlock=blk_1073743259_2455, RpcClientId=, > RpcCallId=-2] > java.lang.IllegalArgumentException: reportedBlock is not striped > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.addStorage(BlockInfoStriped.java:118) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.addBlock(DatanodeStorageInfo.java:256) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlock(BlockManager.java:3141) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlockUnderConstruction(BlockManager.java:3068) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processAndHandleReportedBlock(BlockManager.java:3864) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessages(BlockManager.java:2916) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessagesForBlock(BlockManager.java:2903) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.addNewBlock(FSEditLogLoader.java:1069) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:532) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:293) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:427) > at > 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397) > {noformat}
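The fix under discussion records the erasure-coding policy id explicitly in the {{OP_ADD}} edit-log entry, with the replication policy id marking a replicated file, so that replay can pick contiguous rather than striped blocks even inside an EC zone. A rough sketch of the idea with hypothetical names (not the actual {{FSEditLogOp}} wire format):

```java
// Hypothetical sketch of the idea behind HDFS-12840: the ADD op carries
// an explicit EC policy id, where 0 marks a replicated file, so replay can
// distinguish replicated files in an EC zone from striped ones.
// Field names and encoding are illustrative, not the real HDFS format.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class AddOpSketch {
    static final byte REPLICATION_POLICY_ID = 0;  // replicated file
    static final byte RS_6_3_POLICY_ID = 1;       // some striped policy

    static void writeAddOp(DataOutput out, String path, byte ecPolicyId)
            throws IOException {
        out.writeUTF(path);
        out.writeByte(ecPolicyId);  // serialized explicitly: the gist of the fix
    }

    static String replayAddOp(DataInput in) throws IOException {
        String path = in.readUTF();
        byte ecPolicyId = in.readByte();
        // Replay now chooses the block type from the op itself instead of
        // inheriting the zone's EC policy unconditionally.
        return path + (ecPolicyId == REPLICATION_POLICY_ID
                ? " -> contiguous blocks" : " -> striped blocks");
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeAddOp(out, "/eczone/replicated.file", REPLICATION_POLICY_ID);
        writeAddOp(out, "/eczone/striped.file", RS_6_3_POLICY_ID);
        DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(replayAddOp(in));  // -> contiguous blocks
        System.out.println(replayAddOp(in));  // -> striped blocks
    }
}
```

Without the explicit id, the loader treats every file in the zone as striped, which is exactly the {{reportedBlock is not striped}} crash quoted above.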
[jira] [Comment Edited] (HDFS-12840) Creating a replicated file in a EC zone does not correctly serialized in EditLogs
[ https://issues.apache.org/jira/browse/HDFS-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277817#comment-16277817 ] Lei (Eddy) Xu edited comment on HDFS-12840 at 12/5/17 12:44 AM: Thanks for the suggestions and reviews [~Sammi] and [~rakesh_r] bq. TestOfflineEditsViewer.testStored is failing, is this related to the patch?. Please download {{editsStored.03}} with the patch, and place it to {{./hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored}}. bq. REPLICATION_POLICY_ID is defined in ErasureCodeConstants Done bq. TestRetryCacheWithHA, 40 instead of 41. It was due to creating {{RS-6-3}} files sometimes failed with not enough DNs in the tests. It seems flaky, and not relevant. Will file a new JIRA for it. I removed creating file with default policy in the tests, as it is not relevant. bq. Refactor: ecPolicyID => erasureCodingPolicyId ... Done Could you give another review, [~Sammi], [~rakesh_r] and [~xiaochen] was (Author: eddyxu): Thanks for the suggestions and reviews [~Sammi] and [~rakesh_r] bq. TestOfflineEditsViewer.testStored is failing, is this related to the patch?. Please download {{editsStored.03}} with the patch, and place it to {{./hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored}}. bq. REPLICATION_POLICY_ID is defined in ErasureCodeConstants Done bq. TestRetryCacheWithHA, 40 instead of 41. It was due to creating {{RS-6-3}} files sometimes failed with not enough DNs in the tests. It seems flaky, and not relevant. Will file a new JIRA for it. I removed creating file with default policy in the tests, as it is not relevant. bq. Refactor: ecPolicyID => erasureCodingPolicyId ... 
Done Could you give another review, [~Sammi], [~rakesh_r] and [~xiaochen] > Creating a replicated file in a EC zone does not correctly serialized in > EditLogs > - > > Key: HDFS-12840 > URL: https://issues.apache.org/jira/browse/HDFS-12840 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12840.00.patch, HDFS-12840.01.patch, > HDFS-12840.02.patch, HDFS-12840.03.patch, HDFS-12840.reprod.patch, > editsStored, editsStored, editsStored.03 > > > When create a replicated file in an existing EC zone, the edit logs does not > differentiate it from an EC file. When {{FSEditLogLoader}} to replay edits, > this file is treated as EC file, as a results, it crashes the NN because the > blocks of this file are replicated, which does not match with {{INode}}. > {noformat} > ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered > exception on operation AddBlockOp [path=/system/balancer.id, > penultimateBlock=NULL, lastBlock=blk_1073743259_2455, RpcClientId=, > RpcCallId=-2] > java.lang.IllegalArgumentException: reportedBlock is not striped > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.addStorage(BlockInfoStriped.java:118) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.addBlock(DatanodeStorageInfo.java:256) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlock(BlockManager.java:3141) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlockUnderConstruction(BlockManager.java:3068) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processAndHandleReportedBlock(BlockManager.java:3864) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessages(BlockManager.java:2916) > at > 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessagesForBlock(BlockManager.java:2903) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.addNewBlock(FSEditLogLoader.java:1069) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:532) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:293) > at >
[jira] [Updated] (HDFS-12840) Creating a replicated file in a EC zone does not correctly serialized in EditLogs
[ https://issues.apache.org/jira/browse/HDFS-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12840: - Attachment: editsStored.03 HDFS-12840.03.patch Thanks for the suggestions and reviews [~Sammi] and [~rakesh_r] bq. TestOfflineEditsViewer.testStored is failing, is this related to the patch?. Please download {{editsStored.03}} with the patch, and place it to {{./hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored}}. bq. REPLICATION_POLICY_ID is defined in ErasureCodeConstants Done bq. TestRetryCacheWithHA, 40 instead of 41. It was due to creating {{RS-6-3}} files sometimes failed with not enough DNs in the tests. It seems flaky, and not relevant. Will file a new JIRA for it. I removed creating file with default policy in the tests, as it is not relevant. bq. Refactor: ecPolicyID => erasureCodingPolicyId ... Done Could you give another review, [~Sammi], [~rakesh_r] and [~xiaochen] > Creating a replicated file in a EC zone does not correctly serialized in > EditLogs > - > > Key: HDFS-12840 > URL: https://issues.apache.org/jira/browse/HDFS-12840 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12840.00.patch, HDFS-12840.01.patch, > HDFS-12840.02.patch, HDFS-12840.03.patch, HDFS-12840.reprod.patch, > editsStored, editsStored, editsStored.03 > > > When create a replicated file in an existing EC zone, the edit logs does not > differentiate it from an EC file. When {{FSEditLogLoader}} to replay edits, > this file is treated as EC file, as a results, it crashes the NN because the > blocks of this file are replicated, which does not match with {{INode}}. 
> {noformat} > ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered > exception on operation AddBlockOp [path=/system/balancer.id, > penultimateBlock=NULL, lastBlock=blk_1073743259_2455, RpcClientId=, > RpcCallId=-2] > java.lang.IllegalArgumentException: reportedBlock is not striped > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.addStorage(BlockInfoStriped.java:118) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.addBlock(DatanodeStorageInfo.java:256) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlock(BlockManager.java:3141) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.addStoredBlockUnderConstruction(BlockManager.java:3068) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processAndHandleReportedBlock(BlockManager.java:3864) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessages(BlockManager.java:2916) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processQueuedMessagesForBlock(BlockManager.java:2903) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.addNewBlock(FSEditLogLoader.java:1069) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:532) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:293) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:427) > at > 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380) > at > org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397) > {noformat}
[jira] [Updated] (HDFS-12888) NameNode web UI shows stale config values after cli refresh
[ https://issues.apache.org/jira/browse/HDFS-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-12888: - Description: To reproduce: # Run webui /conf # Use {{hdfs --refreshSuperUserGroupsConfiguration}} to update a configuration value # Run webui /conf again, it will still show the old configuration value was: To reproduce: # Run webui /conf # Use {{hdfs -refresh}} to update a configuration value # Run webui /conf again, it will still show the old configuration value > NameNode web UI shows stale config values after cli refresh > --- > > Key: HDFS-12888 > URL: https://issues.apache.org/jira/browse/HDFS-12888 > Project: Hadoop HDFS > Issue Type: Bug > Components: ui >Affects Versions: 2.7.4 >Reporter: Zhe Zhang > > To reproduce: > # Run webui /conf > # Use {{hdfs --refreshSuperUserGroupsConfiguration}} to update a > configuration value > # Run webui /conf again, it will still show the old configuration value
[jira] [Created] (HDFS-12888) NameNode web UI shows stale config values after cli refresh
Zhe Zhang created HDFS-12888: Summary: NameNode web UI shows stale config values after cli refresh Key: HDFS-12888 URL: https://issues.apache.org/jira/browse/HDFS-12888 Project: Hadoop HDFS Issue Type: Bug Components: ui Affects Versions: 2.7.4 Reporter: Zhe Zhang To reproduce: # Run webui /conf # Use {{hdfs -refresh}} to update a configuration value # Run webui /conf again, it will still show the old configuration value
[jira] [Updated] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-12886: -- Attachment: HDFS-12886.002.patch [~goiri], the test failed with MULTIPLIER being 30. But I've changed the heartbeat interval from 1second to the 3second default in the test which solved the issue. Uploaded patch002 with the change + fixing checkstyle. > Ignore minReplication for block recovery > > > Key: HDFS-12886 > URL: https://issues.apache.org/jira/browse/HDFS-12886 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12886.001.patch, HDFS-12886.002.patch > > > Ignore minReplication for blocks that went through recovery, and allow NN to > complete them and replicate.
[jira] [Comment Edited] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277805#comment-16277805 ] Lukas Majercak edited comment on HDFS-12886 at 12/5/17 12:29 AM: - [~goiri], the test failed with MULTIPLIER being 30. But I've changed the heartbeat interval from 1second to the 3second default in the test which solves the issue. Uploaded patch002 with the change + fixing checkstyle. was (Author: lukmajercak): [~goiri], the test failed with MULTIPLIER being 30. But I've changed the heartbeat interval from 1second to the 3second default in the test which solved the issue. Uploaded patch002 with the change + fixing checkstyle. > Ignore minReplication for block recovery > > > Key: HDFS-12886 > URL: https://issues.apache.org/jira/browse/HDFS-12886 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12886.001.patch, HDFS-12886.002.patch > > > Ignore minReplication for blocks that went through recovery, and allow NN to > complete them and replicate.
[jira] [Updated] (HDFS-11313) Segmented Block Reports
[ https://issues.apache.org/jira/browse/HDFS-11313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-11313: --- Target Version/s: (was: 2.7.5) > Segmented Block Reports > --- > > Key: HDFS-11313 > URL: https://issues.apache.org/jira/browse/HDFS-11313 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, namenode >Affects Versions: 2.6.2 >Reporter: Konstantin Shvachko >Assignee: Vinitha Reddy Gankidi > Attachments: SegmentedBlockReports.pdf > > > Block reports from a single DataNode can be currently split into multiple > RPCs each reporting a single DataNode storage (disk). The reports are still > large since disks are getting bigger. Splitting blockReport RPCs into > multiple smaller calls would improve NameNode performance and overall HDFS > stability. > This was discussed in multiple jiras. Here the approach is to let NameNode > divide blockID space into segments and then ask DataNodes to report replicas > in a particular range of IDs.
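The approach described above, where the NameNode divides the blockID space into segments and asks each DataNode for replicas in one range at a time, can be illustrated with a small sketch (hypothetical types and method names; the actual protocol is in the attached design doc):

```java
// Illustrative sketch of the range-partition idea from HDFS-11313:
// the NameNode hands out [start, end) blockID ranges, and the DataNode
// reports only replicas whose IDs fall in the requested range.
// Names are hypothetical, not the real HDFS report protocol.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SegmentedReportSketch {
    // NameNode side: split [min, max) into n contiguous segments.
    static long[][] segments(long min, long max, int n) {
        long[][] out = new long[n][2];
        long span = (max - min) / n;
        for (int i = 0; i < n; i++) {
            out[i][0] = min + i * span;
            out[i][1] = (i == n - 1) ? max : min + (i + 1) * span;
        }
        return out;
    }

    // DataNode side: report only replicas inside the requested segment.
    static List<Long> reportSegment(List<Long> replicas, long start, long end) {
        List<Long> report = new ArrayList<>();
        for (long id : replicas) {
            if (id >= start && id < end) report.add(id);
        }
        return report;
    }

    public static void main(String[] args) {
        List<Long> replicas = Arrays.asList(3L, 10L, 17L, 25L);
        for (long[] seg : segments(0, 30, 3)) {
            System.out.println(Arrays.toString(seg) + " -> "
                    + reportSegment(replicas, seg[0], seg[1]));
        }
    }
}
```

Each RPC then carries only one segment's worth of replicas, which is the stated goal of keeping individual blockReport calls small as disks grow.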
[jira] [Commented] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277792#comment-16277792 ] genericqa commented on HDFS-12886: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 168 unchanged - 0 fixed = 170 total (was 168) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 38s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}165m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-12886 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12900552/HDFS-12886.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a349e7f3cf72 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d8863fc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/22273/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit |
[jira] [Commented] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277764#comment-16277764 ] Íñigo Goiri commented on HDFS-12886: Thanks [~lukmajercak] for working on this. Can we avoid changing {{BLOCK_RECOVERY_TIMEOUT_MULTIPLIER}}? > Ignore minReplication for block recovery > > > Key: HDFS-12886 > URL: https://issues.apache.org/jira/browse/HDFS-12886 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12886.001.patch > > > Ignore minReplication for blocks that went through recovery, and allow NN to > complete them and replicate. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
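The proposed behavior can be sketched roughly as follows. This is an illustrative sketch only, with hypothetical names ({{canComplete}}, {{MIN_REPLICATION}} as a constant); it is not the actual HDFS-12886 patch:

```java
// Hypothetical sketch of the NameNode-side completion check proposed above.
// Class and method names are illustrative, not taken from the patch.
public class BlockCompletionSketch {
    static final int MIN_REPLICATION = 2;

    /**
     * A block may normally be completed only once it has minReplication
     * live replicas. The proposal: if the block went through recovery,
     * allow completion anyway and let the replication monitor catch it up.
     */
    static boolean canComplete(int liveReplicas, boolean wentThroughRecovery) {
        if (wentThroughRecovery) {
            return liveReplicas >= 1; // at least one valid replica survived recovery
        }
        return liveReplicas >= MIN_REPLICATION;
    }

    public static void main(String[] args) {
        System.out.println(canComplete(1, false)); // below minReplication: not completable
        System.out.println(canComplete(1, true));  // recovered block: completable
    }
}
```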
[jira] [Commented] (HDFS-10967) Add configuration for BlockPlacementPolicy to avoid near-full DataNodes
[ https://issues.apache.org/jira/browse/HDFS-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277759#comment-16277759 ] genericqa commented on HDFS-10967: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HDFS-10967 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-10967 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12832548/HDFS-10967.03.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22279/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add configuration for BlockPlacementPolicy to avoid near-full DataNodes > --- > > Key: HDFS-10967 > URL: https://issues.apache.org/jira/browse/HDFS-10967 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Labels: balancer > Attachments: HDFS-10967.00.patch, HDFS-10967.01.patch, > HDFS-10967.02.patch, HDFS-10967.03.patch > > > Large production clusters are likely to have heterogeneous nodes in terms of > storage capacity, memory, and CPU cores. It is not always possible to > proportionally ingest data into DataNodes based on their remaining storage > capacity. Therefore it's possible for a subset of DataNodes to be much closer > to full capacity than the rest. > This heterogeneity is most likely rack-by-rack -- i.e. _m_ whole racks of > low-storage nodes and _n_ whole racks of high-storage nodes. 
So it'd be very > useful if we can lower the chance for those near-full DataNodes to become > destinations for the 2nd and 3rd replicas.
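The kind of check HDFS-10967 proposes adding to a {{BlockPlacementPolicy}} can be sketched as below. The threshold constant and method name are hypothetical, not the configuration key or code from the attached patches:

```java
// Illustrative sketch of a near-full exclusion check for replica placement.
// The threshold and method name are hypothetical, not the actual patch.
public class NearFullTargetSketch {
    // Hypothetical config value: skip nodes with less than this
    // fraction of their capacity remaining.
    static final double MIN_REMAINING_FRACTION = 0.10;

    static boolean isGoodTarget(long remainingBytes, long capacityBytes) {
        return (double) remainingBytes / capacityBytes >= MIN_REMAINING_FRACTION;
    }

    public static void main(String[] args) {
        System.out.println(isGoodTarget(500L, 1000L)); // 50% free: acceptable target
        System.out.println(isGoodTarget(50L, 1000L));  // 5% free: near-full, skipped
    }
}
```

Such a check would typically apply only to the 2nd and 3rd replicas, as the comment suggests, so that writes to a cluster of mostly-full racks still succeed.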
[jira] [Updated] (HDFS-10967) Add configuration for BlockPlacementPolicy to avoid near-full DataNodes
[ https://issues.apache.org/jira/browse/HDFS-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10967: - Target Version/s: (was: 2.7.5) Removing the Target version since 2.7.5 is releasing soon. > Add configuration for BlockPlacementPolicy to avoid near-full DataNodes > --- > > Key: HDFS-10967 > URL: https://issues.apache.org/jira/browse/HDFS-10967 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Labels: balancer > Attachments: HDFS-10967.00.patch, HDFS-10967.01.patch, > HDFS-10967.02.patch, HDFS-10967.03.patch > > > Large production clusters are likely to have heterogeneous nodes in terms of > storage capacity, memory, and CPU cores. It is not always possible to > proportionally ingest data into DataNodes based on their remaining storage > capacity. Therefore it's possible for a subset of DataNodes to be much closer > to full capacity than the rest. > This heterogeneity is most likely rack-by-rack -- i.e. _m_ whole racks of > low-storage nodes and _n_ whole racks of high-storage nodes. So It'd be very > useful if we can lower the chance for those near-full DataNodes to become > destinations for the 2nd and 3rd replicas. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
[ https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12887: -- Status: Patch Available (was: Open) > [READ] Allow Datanodes with Provided volumes to start when blocks with the > same id exist locally > > > Key: HDFS-12887 > URL: https://issues.apache.org/jira/browse/HDFS-12887 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12887-HDFS-9806.001.patch > > > Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when > an existing block in the volumemap has the same id. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
[ https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277739#comment-16277739 ] Virajith Jalaparti edited comment on HDFS-12887 at 12/4/17 11:27 PM: - This fix ensures that Datanodes can start successfully in the following scenario: # Datanode DN1 is configured with {{PROVIDED}} and {{DISK}} volumes. # Increase in replication on a (provided) file containing block {{i}} leads to DN1 moving replica with block id {{i}} from {{PROVIDED}} volume to the {{DISK}} volume. # DN1 goes down. # DN1 starts up, and finds replica with block id {{i}} on both the {{DISK}} and {{PROVIDED}} volumes and *fails to start* ({{IOException}} is thrown in {{ProvidedBlockPoolSlice.fetchVolumeMap}}). This scenario can be avoided once HDFS-9810 is completed. was (Author: virajith): This fix ensures that Datanodes can start successfully in the following scenario: # Datanode DN1 is configured with {{PROVIDED}} and {{DISK}} volumes. # Increase in replication on a (provided) file containing block {{i}} leads to DN1 moving block {{i}} from {{PROVIDED}} volume to the {{DISK}} volume. # DN1 goes down. # DN1 starts up, and finds block with id {{i}} on both the {{DISK}} and {{PROVIDED}} volumes and *fails to start* ({{IOException}} is thrown in {{ProvidedBlockPoolSlice.fetchVolumeMap}}). This scenario can be avoided once HDFS-9810 is completed. > [READ] Allow Datanodes with Provided volumes to start when blocks with the > same id exist locally > > > Key: HDFS-12887 > URL: https://issues.apache.org/jira/browse/HDFS-12887 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12887-HDFS-9806.001.patch > > > Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when > an existing block in the volumemap has the same id. 
[jira] [Comment Edited] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
[ https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277739#comment-16277739 ] Virajith Jalaparti edited comment on HDFS-12887 at 12/4/17 11:27 PM: - This fix ensures that Datanodes can start successfully in the following scenario: # Datanode DN1 is configured with {{PROVIDED}} and {{DISK}} volumes. # Increase in replication on a (provided) file containing block with id {{i}} leads to DN1 moving replica with block id {{i}} from {{PROVIDED}} volume to the {{DISK}} volume. # DN1 goes down. # DN1 starts up, and finds replica with block id {{i}} on both the {{DISK}} and {{PROVIDED}} volumes and *fails to start* ({{IOException}} is thrown in {{ProvidedBlockPoolSlice.fetchVolumeMap}}). This scenario can be avoided once HDFS-9810 is completed. was (Author: virajith): This fix ensures that Datanodes can start successfully in the following scenario: # Datanode DN1 is configured with {{PROVIDED}} and {{DISK}} volumes. # Increase in replication on a (provided) file containing block {{i}} leads to DN1 moving replica with block id {{i}} from {{PROVIDED}} volume to the {{DISK}} volume. # DN1 goes down. # DN1 starts up, and finds replica with block id {{i}} on both the {{DISK}} and {{PROVIDED}} volumes and *fails to start* ({{IOException}} is thrown in {{ProvidedBlockPoolSlice.fetchVolumeMap}}). This scenario can be avoided once HDFS-9810 is completed. > [READ] Allow Datanodes with Provided volumes to start when blocks with the > same id exist locally > > > Key: HDFS-12887 > URL: https://issues.apache.org/jira/browse/HDFS-12887 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12887-HDFS-9806.001.patch > > > Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when > an existing block in the volumemap has the same id. 
[jira] [Updated] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
[ https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12887: -- Attachment: HDFS-12887-HDFS-9806.001.patch Patch logs a warning instead of throwing an {{IOException}} when a replica with the same block id is detected. > [READ] Allow Datanodes with Provided volumes to start when blocks with the > same id exist locally > > > Key: HDFS-12887 > URL: https://issues.apache.org/jira/browse/HDFS-12887 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-12887-HDFS-9806.001.patch > > > Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when > an existing block in the volumemap has the same id.
[jira] [Commented] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
[ https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277739#comment-16277739 ] Virajith Jalaparti commented on HDFS-12887: --- This fix ensures that Datanodes can start successfully in the following scenario: # Datanode DN1 is configured with {{PROVIDED}} and {{DISK}} volumes. # Increase in replication on a (provided) file containing block {{i}} leads to DN1 moving block {{i}} from {{PROVIDED}} volume to the {{DISK}} volume. # DN1 goes down. # DN1 starts up, and finds block with id {{i}} on both the {{DISK}} and {{PROVIDED}} volumes and *fails to start* ({{IOException}} is thrown in {{ProvidedBlockPoolSlice.fetchVolumeMap}}). This scenario can be avoided once HDFS-9810 is completed. > [READ] Allow Datanodes with Provided volumes to start when blocks with the > same id exist locally > > > Key: HDFS-12887 > URL: https://issues.apache.org/jira/browse/HDFS-12887 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > > Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when > an existing block in the volumemap has the same id. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
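The volume-map scan described in the scenario can be sketched as follows. Names and structure here are illustrative (a simplified stand-in for {{ProvidedVolumeImpl.getVolumeMap}}), not the attached patch; the key change is warning on a duplicate block id instead of throwing:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the startup scan: when a block id is found on both a DISK and a
// PROVIDED volume, keep the first replica seen and log a warning instead of
// throwing IOException, so the Datanode can still start.
public class VolumeMapSketch {
    enum StorageType { DISK, PROVIDED }

    static Map<Long, StorageType> buildVolumeMap(long[][] blocksPerVolume,
                                                 StorageType[] volumeTypes) {
        Map<Long, StorageType> volumeMap = new HashMap<>();
        for (int v = 0; v < blocksPerVolume.length; v++) {
            for (long blockId : blocksPerVolume[v]) {
                StorageType prev = volumeMap.putIfAbsent(blockId, volumeTypes[v]);
                if (prev != null) {
                    // Previously: throw new IOException(...), aborting startup.
                    System.err.println("WARN: duplicate replica for block " + blockId
                        + " on " + volumeTypes[v] + "; keeping replica on " + prev);
                }
            }
        }
        return volumeMap;
    }

    public static void main(String[] args) {
        // Block 2 exists on both volumes, as after the move in step 2 above.
        Map<Long, StorageType> map = buildVolumeMap(
            new long[][] { {1L, 2L}, {2L, 3L} },
            new StorageType[] { StorageType.DISK, StorageType.PROVIDED });
        System.out.println(map.size()); // 3: block 2 recorded once, startup proceeds
    }
}
```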
[jira] [Updated] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
[ https://issues.apache.org/jira/browse/HDFS-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12887: -- Description: Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when an existing block in the volumemap has the same id. > [READ] Allow Datanodes with Provided volumes to start when blocks with the > same id exist locally > > > Key: HDFS-12887 > URL: https://issues.apache.org/jira/browse/HDFS-12887 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > > Fix {{ProvidedVolumeImpl.getVolumeMap}} to not throw an exception even when > an existing block in the volumemap has the same id. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12887) [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally
Virajith Jalaparti created HDFS-12887: - Summary: [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally Key: HDFS-12887 URL: https://issues.apache.org/jira/browse/HDFS-12887 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Virajith Jalaparti Assignee: Virajith Jalaparti
[jira] [Updated] (HDFS-12885) Add visibility/stability annotations
[ https://issues.apache.org/jira/browse/HDFS-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12885: -- Status: Patch Available (was: Open) > Add visibility/stability annotations > > > Key: HDFS-12885 > URL: https://issues.apache.org/jira/browse/HDFS-12885 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chris Douglas >Priority: Trivial > Attachments: HDFS-12885-HDFS-9806.00.patch, > HDFS-12885-HDFS-9806.001.patch > > > Classes added in HDFS-9806 should include stability/visibility annotations > (HADOOP-5073) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12885) Add visibility/stability annotations
[ https://issues.apache.org/jira/browse/HDFS-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277723#comment-16277723 ] Virajith Jalaparti commented on HDFS-12885: --- The patch looks good. Added annotations to the classes that were missed in v0 ({{AliasMapProtocolPB}}, {{FinalizedProvidedReplica}}, {{ProvidedReplica}}, {{ProvidedVolumeImpl}}) in v1. > Add visibility/stability annotations > > > Key: HDFS-12885 > URL: https://issues.apache.org/jira/browse/HDFS-12885 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chris Douglas >Priority: Trivial > Attachments: HDFS-12885-HDFS-9806.00.patch, > HDFS-12885-HDFS-9806.001.patch > > > Classes added in HDFS-9806 should include stability/visibility annotations > (HADOOP-5073)
[jira] [Updated] (HDFS-12832) INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to NameNode exit
[ https://issues.apache.org/jira/browse/HDFS-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-12832: --- Labels: (was: release-blocker) > INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to > NameNode exit > > > Key: HDFS-12832 > URL: https://issues.apache.org/jira/browse/HDFS-12832 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.7.4, 3.0.0-beta1 >Reporter: DENG FEI >Assignee: Konstantin Shvachko >Priority: Critical > Fix For: 2.7.5, 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12832-branch-2.002.patch, > HDFS-12832-branch-2.7.002.patch, HDFS-12832-trunk-001.patch, > HDFS-12832.002.patch, exception.log > > > {code:title=INode.java|borderStyle=solid} > public String getFullPathName() { > // Get the full path name of this inode. > if (isRoot()) { > return Path.SEPARATOR; > } > // compute size of needed bytes for the path > int idx = 0; > for (INode inode = this; inode != null; inode = inode.getParent()) { > // add component + delimiter (if not tail component) > idx += inode.getLocalNameBytes().length + (inode != this ? 1 : 0); > } > byte[] path = new byte[idx]; > for (INode inode = this; inode != null; inode = inode.getParent()) { > if (inode != this) { > path[--idx] = Path.SEPARATOR_CHAR; > } > byte[] name = inode.getLocalNameBytes(); > idx -= name.length; > System.arraycopy(name, 0, path, idx, name.length); > } > return DFSUtil.bytes2String(path); > } > {code} > We found ArrayIndexOutOfBoundsException at > _{color:#707070}System.arraycopy(name, 0, path, idx, name.length){color}_ > when ReplicaMonitor work ,and the NameNode will quit. > It seems the two loop is not synchronized, the path's length is changed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
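The two-pass structure quoted in the description is what races: the inode chain can change between the sizing loop and the copy loop, so the precomputed {{byte[] path}} can be too short when {{System.arraycopy}} runs. A single-pass variant that captures each component exactly once cannot undersize its buffer. This is an illustrative alternative under that assumption, not the committed fix:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Single-pass sketch of getFullPathName: each name is captured as it is
// visited, so there is no separately precomputed length to go stale.
// The INode model here is a minimal stand-in for illustration.
public class FullPathSketch {
    static class INode {
        final String name; final INode parent;
        INode(String name, INode parent) { this.name = name; this.parent = parent; }
    }

    static String getFullPathName(INode inode) {
        Deque<String> components = new ArrayDeque<>();
        for (INode i = inode; i != null; i = i.parent) {
            if (!i.name.isEmpty()) {
                components.addFirst(i.name); // root has an empty local name
            }
        }
        return "/" + String.join("/", components);
    }

    public static void main(String[] args) {
        INode root = new INode("", null);
        INode dir = new INode("user", root);
        INode file = new INode("data.txt", dir);
        System.out.println(getFullPathName(file)); // /user/data.txt
    }
}
```

A concurrent rename can still yield a stale path with this variant, but it fails soft (an outdated string) rather than hard (an {{ArrayIndexOutOfBoundsException}} that exits the NameNode).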
[jira] [Updated] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-12886: -- Description: Ignore minReplication for blocks that went through recovery, and allow NN to complete them and replicate. > Ignore minReplication for block recovery > > > Key: HDFS-12886 > URL: https://issues.apache.org/jira/browse/HDFS-12886 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12886.001.patch > > > Ignore minReplication for blocks that went through recovery, and allow NN to > complete them and replicate. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277693#comment-16277693 ] Lukas Majercak commented on HDFS-11576: --- LGTM > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Fix For: 3.0.0 > > Attachments: HDFS-11576-branch-2.00.patch, > HDFS-11576-branch-2.01.patch, HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, > HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, > HDFS-11576.009.patch, HDFS-11576.010.patch, HDFS-11576.011.patch, > HDFS-11576.012.patch, HDFS-11576.013.patch, HDFS-11576.014.patch, > HDFS-11576.015.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSyncronization after succeeding with first recovery to > NN, which fails because X < X+1 > ... -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
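The failure mode in the numbered scenario is that every heartbeat mints a fresh recovery id, so a recovery that outlives one heartbeat interval always commits with a stale id. One guard (hypothetical; the actual HDFS-11576 patches may take a different approach) is to stop re-issuing recovery while one is already in flight:

```java
// Sketch of the heartbeat/recovery-id race and a suppression guard.
// All names are illustrative; this is not the committed fix.
public class RecoveryIdSketch {
    private long nextRecoveryId = 0;
    private Long inFlightRecoveryId = null;

    /** NameNode side: only hand out a new recovery id if none is pending. */
    Long maybeIssueRecovery() {
        if (inFlightRecoveryId != null) {
            return null; // recovery already in progress; don't invalidate it
        }
        inFlightRecoveryId = ++nextRecoveryId;
        return inFlightRecoveryId;
    }

    /** commitBlockSynchronization: succeeds only with the current id. */
    boolean commit(long recoveryId) {
        if (inFlightRecoveryId != null && inFlightRecoveryId == recoveryId) {
            inFlightRecoveryId = null;
            return true;
        }
        return false; // stale recovery id, as in step 6 of the scenario
    }

    public static void main(String[] args) {
        RecoveryIdSketch nn = new RecoveryIdSketch();
        Long id1 = nn.maybeIssueRecovery();  // heartbeat 1: recovery X issued
        Long id2 = nn.maybeIssueRecovery();  // heartbeat 2: suppressed, no X+1
        System.out.println(id2 == null);     // true: second command not issued
        System.out.println(nn.commit(id1));  // true: the slow recovery can commit
    }
}
```

Without the suppression, the second call would return X+1 and the first recovery's commit would fail forever, which is exactly steps 5 and 6 above. A real guard would also need a timeout so a dead DataNode cannot pin the in-flight id indefinitely.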
[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277675#comment-16277675 ] genericqa commented on HDFS-12882: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 41 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 1s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 14s{color} | {color:orange} root: The patch generated 36 new + 2069 unchanged - 12 fixed = 2105 total (was 2081) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 8s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 44s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}226m 53s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | Class org.apache.hadoop.hdfs.protocol.HdfsPathHandle defines non-transient non-serializable
[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277674#comment-16277674 ] Chris Douglas commented on HDFS-12882: -- v02 won't generate block tokens, so that needs to be fixed. I'm unsure if this will, or should, work with symlinks. > Support full open(PathHandle) contract in HDFS > -- > > Key: HDFS-12882 > URL: https://issues.apache.org/jira/browse/HDFS-12882 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Chris Douglas >Assignee: Chris Douglas > Attachments: HDFS-12882.00.patch, HDFS-12882.00.salient.txt, > HDFS-12882.01.patch, HDFS-12882.02.patch > > > HDFS-7878 added support for {{open(PathHandle)}}, but it only partially > implemented the semantics specified in the contract (i.e., open-by-inodeID). > HDFS should implement all permutations of the default options for > {{PathHandle}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12713: -- Attachment: HDFS-12713-HDFS-9806.007.patch > [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata > and PROVIDED storage metadata > > > Key: HDFS-12713 > URL: https://issues.apache.org/jira/browse/HDFS-12713 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Ewan Higgs > Attachments: HDFS-12713-HDFS-9806.001.patch, > HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, > HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, > HDFS-12713-HDFS-9806.006.patch, HDFS-12713-HDFS-9806.007.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277602#comment-16277602 ] Chris Douglas commented on HDFS-11576: -- ASF license warnings are unrelated, as are the unit test failures. [~lukmajercak], does the branch-2 version look good to commit? > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Fix For: 3.0.0 > > Attachments: HDFS-11576-branch-2.00.patch, > HDFS-11576-branch-2.01.patch, HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, > HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, > HDFS-11576.009.patch, HDFS-11576.010.patch, HDFS-11576.011.patch, > HDFS-11576.012.patch, HDFS-11576.013.patch, HDFS-11576.014.patch, > HDFS-11576.015.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSyncronization after succeeding with first recovery to > NN, which fails because X < X+1 > ... -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277589#comment-16277589 ] genericqa commented on HDFS-12713:
--
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 9m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. |
|| || || || HDFS-9806 Compile Tests ||
| 0 | mvndep | 1m 34s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 31s | HDFS-9806 passed |
| +1 | compile | 12m 39s | HDFS-9806 passed |
| +1 | checkstyle | 2m 11s | HDFS-9806 passed |
| +1 | mvnsite | 1m 32s | HDFS-9806 passed |
| +1 | shadedclient | 14m 9s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 0m 31s | hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. |
| +1 | javadoc | 1m 18s | HDFS-9806 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 17s | the patch passed |
| +1 | compile | 11m 42s | the patch passed |
| +1 | cc | 11m 42s | the patch passed |
| +1 | javac | 11m 42s | the patch passed |
| -0 | checkstyle | 2m 11s | root: The patch generated 12 new + 582 unchanged - 8 fixed = 594 total (was 590) |
| +1 | mvnsite | 1m 34s | the patch passed |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 54s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 43s | the patch passed |
| +1 | javadoc | 1m 19s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 87m 6s | hadoop-hdfs in the patch failed. |
| +1 | unit | 2m 39s | hadoop-fs2img in the patch passed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 182m 17s | |
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestUnbuffer |
| | hadoop.tools.TestHdfsConfigFields |
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12713 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12900528/HDFS-12713-HDFS-9806.006.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 66cb6388e13f 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool |
[jira] [Updated] (HDFS-12886) Ignore minReplication for block recovery
[ https://issues.apache.org/jira/browse/HDFS-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-12886: -- Status: Patch Available (was: In Progress) > Ignore minReplication for block recovery > > > Key: HDFS-12886 > URL: https://issues.apache.org/jira/browse/HDFS-12886 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12886.001.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12741) ADD support for KSM --createObjectStore command
[ https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277496#comment-16277496 ] genericqa commented on HDFS-12741:
--
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 25m 46s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || HDFS-7240 Compile Tests ||
| +1 | mvninstall | 19m 22s | HDFS-7240 passed |
| +1 | compile | 1m 8s | HDFS-7240 passed |
| +1 | checkstyle | 0m 42s | HDFS-7240 passed |
| +1 | mvnsite | 1m 9s | HDFS-7240 passed |
| +1 | shadedclient | 13m 21s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 13s | HDFS-7240 passed |
| +1 | javadoc | 1m 7s | HDFS-7240 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 12s | the patch passed |
| +1 | compile | 1m 3s | the patch passed |
| +1 | javac | 1m 3s | the patch passed |
| -0 | checkstyle | 0m 40s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 3 unchanged - 0 fixed = 9 total (was 3) |
| +1 | mvnsite | 1m 10s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 12m 30s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 24s | the patch passed |
| +1 | javadoc | 1m 2s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 154m 43s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 239m 37s | |
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.ozone.web.client.TestKeysRatis |
| | hadoop.hdfs.TestDFSStorageStateRecovery |
| | hadoop.hdfs.TestDFSStripedInputStream |
| | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
| | hadoop.hdfs.TestFileAppend |
| | hadoop.hdfs.TestReadStripedFileWithDecoding |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.hdfs.qjournal.client.TestEpochsAreUnique |
| | hadoop.hdfs.TestDFSStripedOutputStream |
| | hadoop.hdfs.TestHdfsAdmin |
| | hadoop.hdfs.TestFileCreation |
| | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
| | hadoop.hdfs.TestClientReportBadBlock |
| | hadoop.cblock.TestCBlockReadWrite |
| | hadoop.hdfs.TestReplication |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
| | hadoop.hdfs.TestFileAppend3 |
| | hadoop.fs.TestUnbuffer |
| | hadoop.hdfs.TestParallelUnixDomainRead |
| | hadoop.ozone.scm.TestSCMCli |
| |
[jira] [Created] (HDFS-12886) Ignore minReplication for block recovery
Lukas Majercak created HDFS-12886: - Summary: Ignore minReplication for block recovery Key: HDFS-12886 URL: https://issues.apache.org/jira/browse/HDFS-12886 Project: Hadoop HDFS Issue Type: Bug Components: hdfs, namenode Reporter: Lukas Majercak Assignee: Lukas Majercak
[jira] [Commented] (HDFS-12872) EC Checksum broken when BlockAccessToken is enabled
[ https://issues.apache.org/jira/browse/HDFS-12872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277463#comment-16277463 ] Uma Maheswara Rao G commented on HDFS-12872: Hi [~xiaochen], thanks for working on it. I think it makes sense to set the block token at the group level for the checksum calculation on the DN. {code} sb.setBlockToken(blockTokenSecretManager.generateToken( +NameNode.getRemoteUser().getShortUserName(), +internalBlock, EnumSet.of(mode), b.getStorageTypes(), +b.getStorageIDs())); {code} Isn't this code common to the else part now? Should we remove the else-part code and set the token for the block irrespective of {{isStriped}}? > EC Checksum broken when BlockAccessToken is enabled > --- > > Key: HDFS-12872 > URL: https://issues.apache.org/jira/browse/HDFS-12872 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12872.01.patch, HDFS-12872.repro.patch > > > It appears {{hdfs ec -checksum}} doesn't work when block access token is > enabled.
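The review point about the duplicated branch can be shown with a minimal sketch. The names below are hypothetical simplifications, not the patch's actual signatures; the point is only that when both branches of a conditional issue the same call, the call can be hoisted out:

```java
// Illustrative only: a stand-in for the token-generation call in the patch.
class TokenSketch {
  static String generateToken(String user, long blockId) {
    return user + "@" + blockId;
  }

  // Before: the same call duplicated in both branches of isStriped.
  static String setTokenBefore(boolean isStriped, long blockId) {
    if (isStriped) {
      return generateToken("hdfs", blockId);
    } else {
      return generateToken("hdfs", blockId);
    }
  }

  // After: one unconditional call, as the review suggests.
  static String setTokenAfter(boolean isStriped, long blockId) {
    return generateToken("hdfs", blockId);
  }
}
```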
[jira] [Updated] (HDFS-12885) Add visibility/stability annotations
[ https://issues.apache.org/jira/browse/HDFS-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-12885: - Attachment: HDFS-12885-HDFS-9806.00.patch Did a pass over the branch. [~virajith], [~ehiggs] could you take a look? > Add visibility/stability annotations > > > Key: HDFS-12885 > URL: https://issues.apache.org/jira/browse/HDFS-12885 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chris Douglas >Priority: Trivial > Attachments: HDFS-12885-HDFS-9806.00.patch > > > Classes added in HDFS-9806 should include stability/visibility annotations > (HADOOP-5073)
[jira] [Created] (HDFS-12885) Add visibility/stability annotations
Chris Douglas created HDFS-12885: Summary: Add visibility/stability annotations Key: HDFS-12885 URL: https://issues.apache.org/jira/browse/HDFS-12885 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Chris Douglas Priority: Trivial Classes added in HDFS-9806 should include stability/visibility annotations (HADOOP-5073)
[jira] [Created] (HDFS-12884) BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo
Konstantin Shvachko created HDFS-12884: -- Summary: BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo Key: HDFS-12884 URL: https://issues.apache.org/jira/browse/HDFS-12884 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Affects Versions: 2.7.4 Reporter: Konstantin Shvachko {{BlockUnderConstructionFeature.truncateBlock}} type should be changed to {{BlockInfo}} from {{Block}}. {{truncateBlock}} is always assigned as {{BlockInfo}}, so this will avoid unnecessary casts.
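The cast this issue wants to remove can be illustrated with a minimal sketch. The {{Block}}/{{BlockInfo}} classes below are simplified stand-ins for the real HDFS types, not the actual implementation:

```java
// Simplified stand-ins for the HDFS Block/BlockInfo hierarchy; fields and
// bodies here are illustrative only.
class Block {
  final long blockId;
  Block(long blockId) { this.blockId = blockId; }
}

class BlockInfo extends Block {
  final short replication;
  BlockInfo(long blockId, short replication) {
    super(blockId);
    this.replication = replication;
  }
}

class TruncateBlockSketch {
  // Before: the field is declared with the base type, so every use that needs
  // BlockInfo state must downcast.
  static Block truncateBlockAsBase = new BlockInfo(1L, (short) 3);
  // After: declaring the field as BlockInfo removes the casts, since it is
  // only ever assigned a BlockInfo anyway.
  static BlockInfo truncateBlockTyped = new BlockInfo(1L, (short) 3);

  static short replicationWithCast() {
    return ((BlockInfo) truncateBlockAsBase).replication; // cast required
  }

  static short replicationWithoutCast() {
    return truncateBlockTyped.replication; // no cast
  }
}
```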
[jira] [Commented] (HDFS-12855) Fsck violates namesystem locking
[ https://issues.apache.org/jira/browse/HDFS-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277323#comment-16277323 ] Konstantin Shvachko commented on HDFS-12855: Yes, I believe it affects all versions. Delete is clearly one case, but any change in the file path (move or rename) will cause problems as well. See HDFS-12832 as an example. > Fsck violates namesystem locking > - > > Key: HDFS-12855 > URL: https://issues.apache.org/jira/browse/HDFS-12855 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.4 >Reporter: Konstantin Shvachko > > {{NamenodeFsck}} accesses {{FSNamesystem}} structures, such as INodes and > BlockInfo, without holding a lock. See e.g. {{NamenodeFsck.blockIdCK()}}.
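The locking discipline at issue can be sketched with a plain {{ReentrantReadWriteLock}} standing in for the namesystem lock; the class and method names below are hypothetical, not the fsck code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical holder standing in for FSNamesystem; fsck-style readers should
// take the read lock before touching shared structures such as INodes.
class LockedNamesystem {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  private String inodePath = "/system/balancer.id"; // stands in for INode state

  // The pattern the report flags: reading shared state with no lock held,
  // racing against concurrent deletes and renames.
  String readUnsafely() {
    return inodePath;
  }

  // The corrected pattern: hold the read lock for the duration of the lookup,
  // so writers (deletes, renames) are excluded while the structure is read.
  String readUnderLock() {
    fsLock.readLock().lock();
    try {
      return inodePath;
    } finally {
      fsLock.readLock().unlock();
    }
  }
}
```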
[jira] [Updated] (HDFS-10686) libhdfs++: implement delegation token authorization
[ https://issues.apache.org/jira/browse/HDFS-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-10686: --- Summary: libhdfs++: implement delegation token authorization (was: libhdfs++: implement token authorization) > libhdfs++: implement delegation token authorization > --- > > Key: HDFS-10686 > URL: https://issues.apache.org/jira/browse/HDFS-10686 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen > > The current libhdfs++ SASL implementation does a Kerberos handshake for each > connection. HDFS includes support for issuing and using time-limited tokens > to reduce the load on the Kerberos server.
[jira] [Updated] (HDFS-12640) libhdfs++: automatic CI tests are getting stuck in test_libhdfs_mini_stress_hdfspp_test_shim_static
[ https://issues.apache.org/jira/browse/HDFS-12640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-12640: --- Status: Open (was: Patch Available) > libhdfs++: automatic CI tests are getting stuck in > test_libhdfs_mini_stress_hdfspp_test_shim_static > --- > > Key: HDFS-12640 > URL: https://issues.apache.org/jira/browse/HDFS-12640 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: James Clampffer >Assignee: James Clampffer > Attachments: HDFS-12640.HDFS-8707.000.patch > > > All of the automated tests seem to get stuck, or at least stop generating > useful output, in test_libhdfs_mini_stress_hdfspp_test_shim_static. Not able > to reproduce the issue locally in docker. > Right now this is blocking a few patches, and not having those patches > committed is slowing down work on other parts of the library.
[jira] [Assigned] (HDFS-10686) libhdfs++: implement delegation token authorization
[ https://issues.apache.org/jira/browse/HDFS-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer reassigned HDFS-10686: -- Assignee: James Clampffer > libhdfs++: implement delegation token authorization > --- > > Key: HDFS-10686 > URL: https://issues.apache.org/jira/browse/HDFS-10686 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: James Clampffer > > The current libhdfs++ SASL implementation does a kerberos handshake for each > connection. HDFS includes support for issuing and using time-limited tokens > to reduce the load on the kerberos server.
[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277268#comment-16277268 ] Hudson commented on HDFS-12396: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13317 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13317/]) HDFS-12396. Webhdfs file system should get delegation token from kms (xiao: rev 404eab4dc0582e0384b93664ea6ee77ccd5eeebc) * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZonesWithKMS.java * (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/package-info.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java * (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderTokenIssuer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java > Webhdfs file system should get delegation token from kms provider. 
> -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12396-branch-2.001.patch, > HDFS-12396-branch-2.002.patch, HDFS-12396-branch-2.8.001.patch, > HDFS-12396-branch-2.8.002.patch, HDFS-12396-branch-2.8.patch, > HDFS-12396-branch-2.patch, HDFS-12396.001.patch, HDFS-12396.002.patch, > HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, > HDFS-12396.006.patch, HDFS-12396.007.patch, HDFS-12396.008.patch > >
[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277254#comment-16277254 ] Rushabh S Shah commented on HDFS-12396: --- bq. Should we check-in to 3.0 also ? Ignore this comment. Didn't read the last update. > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12396-branch-2.001.patch, > HDFS-12396-branch-2.002.patch, HDFS-12396-branch-2.8.001.patch, > HDFS-12396-branch-2.8.002.patch, HDFS-12396-branch-2.8.patch, > HDFS-12396-branch-2.patch, HDFS-12396.001.patch, HDFS-12396.002.patch, > HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, > HDFS-12396.006.patch, HDFS-12396.007.patch, HDFS-12396.008.patch > >
[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277249#comment-16277249 ] Rushabh S Shah commented on HDFS-12396: --- Thanks [~daryn] for the reviews. Thanks [~xiaochen] for the review and commit. Should we check-in to 3.0 also ? > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12396-branch-2.001.patch, > HDFS-12396-branch-2.002.patch, HDFS-12396-branch-2.8.001.patch, > HDFS-12396-branch-2.8.002.patch, HDFS-12396-branch-2.8.patch, > HDFS-12396-branch-2.patch, HDFS-12396.001.patch, HDFS-12396.002.patch, > HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, > HDFS-12396.006.patch, HDFS-12396.007.patch, HDFS-12396.008.patch > >
[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12396: - Fix Version/s: 3.0.1 > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12396-branch-2.001.patch, > HDFS-12396-branch-2.002.patch, HDFS-12396-branch-2.8.001.patch, > HDFS-12396-branch-2.8.002.patch, HDFS-12396-branch-2.8.patch, > HDFS-12396-branch-2.patch, HDFS-12396.001.patch, HDFS-12396.002.patch, > HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, > HDFS-12396.006.patch, HDFS-12396.007.patch, HDFS-12396.008.patch > >
[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277248#comment-16277248 ] Xiao Chen commented on HDFS-12396: -- ... looking at the branches, cherry-picked the trunk commit to branch-3.0 as well. Compiled before pushing. > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4 > > Attachments: HDFS-12396-branch-2.001.patch, > HDFS-12396-branch-2.002.patch, HDFS-12396-branch-2.8.001.patch, > HDFS-12396-branch-2.8.002.patch, HDFS-12396-branch-2.8.patch, > HDFS-12396-branch-2.patch, HDFS-12396.001.patch, HDFS-12396.002.patch, > HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, > HDFS-12396.006.patch, HDFS-12396.007.patch, HDFS-12396.008.patch > >
[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277238#comment-16277238 ] Chris Douglas commented on HDFS-12882: -- Sorry, should clarify: bq. New clients talking to old servers will get errors because the location field was ignored. New clients invoking new APIs on old servers will get errors (and not undefined behavior). All existing APIs are unchanged. > Support full open(PathHandle) contract in HDFS > -- > > Key: HDFS-12882 > URL: https://issues.apache.org/jira/browse/HDFS-12882 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Chris Douglas >Assignee: Chris Douglas > Attachments: HDFS-12882.00.patch, HDFS-12882.00.salient.txt, > HDFS-12882.01.patch > > > HDFS-7878 added support for {{open(PathHandle)}}, but it only partially > implemented the semantics specified in the contract (i.e., open-by-inodeID). > HDFS should implement all permutations of the default options for > {{PathHandle}}.
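The option permutations behind the contract can be modeled in a few lines. The enum names below mirror the composites of Hadoop's {{Options.HandleOpt}}, but the validity logic is a simplified illustration of the contract (does a handle stay usable after the file's content changes or after the file is moved), not the HDFS implementation:

```java
// Simplified model of the four PathHandle permutations from the FileSystem
// contract. Names echo Options.HandleOpt composites; logic is illustrative.
class PathHandleModel {
  enum Kind { EXACT, CONTENT, PATH, REFERENCE }

  // Would a handle of this kind still resolve, given what happened to the file?
  static boolean stillValid(Kind kind, boolean contentChanged, boolean moved) {
    switch (kind) {
      case EXACT:     return !contentChanged && !moved; // same content, same path
      case CONTENT:   return !contentChanged;           // survives renames (open by inode id)
      case PATH:      return !moved;                    // same path, content may change
      case REFERENCE: return true;                      // survives both
      default:        return false;
    }
  }
}
```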
[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12396: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.4 2.9.1 2.10.0 3.1.0 Status: Resolved (was: Patch Available) Committed 008 patch to trunk, branch-2.002 patch to branch-2, branch-2.8.002 patch to branch-2.8. Cherry picked the branch-2 commit to branch-2.9. Compiled on branch-2.9 before pushing. Thanks Rushabh for the fix and Daryn for reviews! > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4 > > Attachments: HDFS-12396-branch-2.001.patch, > HDFS-12396-branch-2.002.patch, HDFS-12396-branch-2.8.001.patch, > HDFS-12396-branch-2.8.002.patch, HDFS-12396-branch-2.8.patch, > HDFS-12396-branch-2.patch, HDFS-12396.001.patch, HDFS-12396.002.patch, > HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, > HDFS-12396.006.patch, HDFS-12396.007.patch, HDFS-12396.008.patch > >
[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12713: -- Status: Patch Available (was: Open) > [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata > and PROVIDED storage metadata > > > Key: HDFS-12713 > URL: https://issues.apache.org/jira/browse/HDFS-12713 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Ewan Higgs > Attachments: HDFS-12713-HDFS-9806.001.patch, > HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, > HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, > HDFS-12713-HDFS-9806.006.patch > >
[jira] [Commented] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277228#comment-16277228 ] Virajith Jalaparti commented on HDFS-12713: --- Thanks for taking a look [~chris.douglas]. Posting a rebased patch (adds {{blockPoolID}} to the {{LevelDBFileRegionAliasMap}}). > [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata > and PROVIDED storage metadata > > > Key: HDFS-12713 > URL: https://issues.apache.org/jira/browse/HDFS-12713 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Ewan Higgs > Attachments: HDFS-12713-HDFS-9806.001.patch, > HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, > HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, > HDFS-12713-HDFS-9806.006.patch > >
[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12713: -- Status: Open (was: Patch Available) > [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata > and PROVIDED storage metadata > > > Key: HDFS-12713 > URL: https://issues.apache.org/jira/browse/HDFS-12713 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Ewan Higgs > Attachments: HDFS-12713-HDFS-9806.001.patch, > HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, > HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, > HDFS-12713-HDFS-9806.006.patch > >
[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata
[ https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12713: -- Attachment: HDFS-12713-HDFS-9806.006.patch > [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata > and PROVIDED storage metadata > > > Key: HDFS-12713 > URL: https://issues.apache.org/jira/browse/HDFS-12713 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Ewan Higgs > Attachments: HDFS-12713-HDFS-9806.001.patch, > HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch, > HDFS-12713-HDFS-9806.004.patch, HDFS-12713-HDFS-9806.005.patch, > HDFS-12713-HDFS-9806.006.patch > >
[jira] [Updated] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-12882: - Attachment: HDFS-12882.01.patch Reattaching v00 to run through Jenkins > Support full open(PathHandle) contract in HDFS > -- > > Key: HDFS-12882 > URL: https://issues.apache.org/jira/browse/HDFS-12882 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Chris Douglas >Assignee: Chris Douglas > Attachments: HDFS-12882.00.patch, HDFS-12882.00.salient.txt, > HDFS-12882.01.patch > > > HDFS-7878 added support for {{open(PathHandle)}}, but it only partially > implemented the semantics specified in the contract (i.e., open-by-inodeID). > HDFS should implement all permutations of the default options for > {{PathHandle}}.
[jira] [Commented] (HDFS-12866) Recursive delete of a large directory or snapshot makes namenode unresponsive
[ https://issues.apache.org/jira/browse/HDFS-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277219#comment-16277219 ] Daryn Sharp commented on HDFS-12866:

bq. Indeed I was thinking traversing to the root to check, like done in FSNamesystem#isFileDeleted, it cost some time, but we can find if an INode is disconnected, right?

I thought the parent was nulled for an inodeRef.WithName when deleted explicitly or implicitly as source of a move. The {{FSN#isFileDeleted}} implementation shows otherwise and is shockingly bad: looking up every ancestor child inode in its parent for an equality check.

bq. So the main issue of this approach is the cost of traversing to the root to check if any ancestor is disconnected? I wonder how bad it is.

Actually, the main issue is what a profile reveals. Let's not make premature optimizations w/o solid analysis. As for the traverse, making that a pervasive check throughout operations is penalizing the common case for what should be a relatively rare case (deletion of a super-large directory). Perhaps every 1-2y a massive directory is removed and stalls the NN for mins. I want that danger removed but not at the expense of general performance.

bq. In IBR and FBR, can we assume the file exists if the INode is there?

It will be if only an ancestor is unlinked. Don't have time to look, but I have concerns about what happens if a block slated for removal is updated and possibly added to other data structures (corrupt, excess, etc.) or, worse, generates an edit which cannot be replayed.
> Recursive delete of a large directory or snapshot makes namenode unresponsive > - > > Key: HDFS-12866 > URL: https://issues.apache.org/jira/browse/HDFS-12866 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Yongjun Zhang > > Currently file/directory deletion happens in two steps (see > {{FSNamesystem#delete(String src, boolean recursive, boolean logRetryCache)}}: > # Do the following under fsn write lock and release the lock afterwards > ** 1.1 recursively traverse the target, collect INodes and all blocks to be > deleted > ** 1.2 delete all INodes > # Delete the blocks to be deleted incrementally, chunk by chunk. That is, in > a loop, do: > ** acquire fsn write lock, > ** delete chunk of blocks > ** release fsn write lock > Breaking the deletion to two steps is to not hold the fsn write lock for too > long thus making NN not responsive. However, even with this, for deleting > large directory, or deleting snapshot that has a lot of contents, step 1 > itself would takes long time thus still hold the fsn write lock for too long > and make NN not responsive. > A possible solution would be to add one more sub step in step 1, and only > hold fsn write lock in sub step 1.1: > * 1.1. hold the fsn write lock, disconnect the target to be deleted from its > parent dir, release the lock > * 1.2 recursively traverse the target, collect INodes and all blocks to be > deleted > * 1.3 delete all INodes > Then do step 2. > This means, any operations on any file/dir need to check if its ancestor is > deleted (ancestor is disconnected), similar to what's done in > FSNamesystem#isFileDeleted method. > I'm throwing the thought here for further discussion. Welcome comments and > inputs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
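The chunked step-2 deletion described above (acquire the fsn write lock, delete a chunk of blocks, release the lock, repeat) can be illustrated with a standalone sketch. This is plain Java, not NameNode code; the class name {{ChunkedDelete}}, the lock field, and the chunk size are hypothetical stand-ins for {{FSNamesystem}} internals.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative stand-in for the FSNamesystem write lock and the
// collected-block list produced by step 1 of the deletion.
public class ChunkedDelete {
    static final ReentrantLock fsnWriteLock = new ReentrantLock();

    /**
     * Deletes blocks in chunks, re-acquiring the lock per chunk so other
     * namesystem operations can interleave between chunks.
     * Returns the number of lock acquisitions (i.e. chunks processed).
     */
    static int deleteIncrementally(List<Long> collectedBlocks, int chunkSize) {
        int chunks = 0;
        while (!collectedBlocks.isEmpty()) {
            fsnWriteLock.lock();
            try {
                int n = Math.min(chunkSize, collectedBlocks.size());
                // Drop one chunk from the tail of the collected-block list.
                collectedBlocks.subList(collectedBlocks.size() - n,
                                        collectedBlocks.size()).clear();
                chunks++;
            } finally {
                fsnWriteLock.unlock();
            }
            // Lock released here: readers and other writers may run now.
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Long> blocks = new ArrayList<>();
        for (long i = 0; i < 10; i++) blocks.add(i);
        // 10 blocks deleted 3 at a time -> 4 lock acquisitions.
        System.out.println("chunks=" + deleteIncrementally(blocks, 3));
    }
}
```

The point of the loop structure is exactly the trade-off debated in the comments: shorter individual lock hold times at the cost of other operations observing a partially deleted tree between chunks.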
[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277218#comment-16277218 ] Xiao Chen commented on HDFS-12396: -- +1 from me too. Thanks for thoroughly verifying the pre-commits [~shahrs87]. Committing this... > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Attachments: HDFS-12396-branch-2.001.patch, > HDFS-12396-branch-2.002.patch, HDFS-12396-branch-2.8.001.patch, > HDFS-12396-branch-2.8.002.patch, HDFS-12396-branch-2.8.patch, > HDFS-12396-branch-2.patch, HDFS-12396.001.patch, HDFS-12396.002.patch, > HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, > HDFS-12396.006.patch, HDFS-12396.007.patch, HDFS-12396.008.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277217#comment-16277217 ] genericqa commented on HDFS-12882:
--

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 18s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s{color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 25s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 27s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 27s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 27s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 34s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 99 unchanged - 1 fixed = 103 total (was 100) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 15s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 46s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 29s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| |
[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277157#comment-16277157 ] Chris Douglas commented on HDFS-12882:
--

bq. My only concern is changing getFileInfo(String src) into getFileInfo(String src, boolean needLocation) for ClientProtocol. How incompatible is this?

{{ClientProtocol}} is private/evolving, so this should be safe. Clients writing against {{FileSystem}}/{{DistributedFileSystem}} shouldn't notice. Because it's private/evolving, I thought changing all the occurrences was preferable to an overload. Old clients implicitly get the correct semantics from new servers because {{needLocation}} defaults to false. New clients talking to old servers will get errors because the location field was ignored. We might be able to improve the error message here.

As an aside, this should make it possible to implement {{DistributedFileSystem::getLocatedFileStatus}}. This would make it easier for globbing to consistently query with locations, avoiding some RPCs (MAPREDUCE-7016).

bq. What about other projects like MapReduce, any breaks there?

There shouldn't be, since all the existing calls should pass through the existing code. I'll run the unit tests.
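The compatibility claim above — old clients implicitly get the old behavior because {{needLocation}} defaults to false when absent — can be modeled with a toy server handler. This is an illustrative sketch of the wire-level defaulting pattern, not Hadoop code; the names {{FileInfoRequest}} and {{handleGetFileInfo}} are hypothetical.

```java
import java.util.Optional;

// Toy model of protocol evolution: an old client omits the new flag,
// and the server falls back to the old behavior (status without locations).
// FileInfoRequest / handleGetFileInfo are invented names, not Hadoop's API.
public class NeedLocationCompat {
    static final class FileInfoRequest {
        final String src;
        final Optional<Boolean> needLocation; // absent when sent by an old client

        FileInfoRequest(String src, Optional<Boolean> needLocation) {
            this.src = src;
            this.needLocation = needLocation;
        }
    }

    static String handleGetFileInfo(FileInfoRequest req) {
        // Absent flag defaults to false, preserving the pre-change semantics.
        boolean withLocations = req.needLocation.orElse(false);
        return withLocations ? "status+locations:" + req.src : "status:" + req.src;
    }

    public static void main(String[] args) {
        // Old client: no flag on the wire -> old behavior.
        System.out.println(handleGetFileInfo(
            new FileInfoRequest("/a", Optional.empty())));
        // New client explicitly requesting locations.
        System.out.println(handleGetFileInfo(
            new FileInfoRequest("/a", Optional.of(true))));
    }
}
```

The reverse direction (new client, old server) has no such safety net, which is why the comment notes those clients get errors rather than silently degraded results.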
[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics
[ https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277151#comment-16277151 ] Íñigo Goiri commented on HDFS-12883: Thanks [~linyiqun] for the patch. Not sure {{router context}} should be a section; {{JournalNode}} and {{datanode}} are subsections of {{dfs context}} right now. I would also make it a subsection and the rest a subsubsection. > RBF: Document Router and State Store metrics > > > Key: HDFS-12883 > URL: https://issues.apache.org/jira/browse/HDFS-12883 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.0.0-alpha3 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: RBF > Attachments: HDFS-12883.001.patch, metric-screen-shot.jpg > > > Document Router and State Store metrics in doc. This will be helpful for > users to monitor RBF. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277139#comment-16277139 ] Íñigo Goiri commented on HDFS-12882:

[~chris.douglas], thanks for [^HDFS-12882.00.salient.txt]. It makes the review much easier (we should do this more often :)).

My only concern is changing {{getFileInfo(String src)}} into {{getFileInfo(String src, boolean needLocation)}} for {{ClientProtocol}}. How incompatible is this? I guess targeting 3.1 makes this change OK. Another option would be to keep the old method and have two paths for the same; I don't like that path though, I would prefer to fully do the change.

What about other projects like MapReduce, any breaks there?
[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277115#comment-16277115 ] Daryn Sharp commented on HDFS-12396: +1 on final patches. [~xiaochen], you ok with them? > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Attachments: HDFS-12396-branch-2.001.patch, > HDFS-12396-branch-2.002.patch, HDFS-12396-branch-2.8.001.patch, > HDFS-12396-branch-2.8.002.patch, HDFS-12396-branch-2.8.patch, > HDFS-12396-branch-2.patch, HDFS-12396.001.patch, HDFS-12396.002.patch, > HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, > HDFS-12396.006.patch, HDFS-12396.007.patch, HDFS-12396.008.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-12882:
- Attachment: HDFS-12882.00.salient.txt

bq. Goes too much into the depths of HDFS for me to be a safe reviewer, I'm afraid.

Trust me: it's better that way

Patch size aside, this is a minor change. Attaching [^HDFS-12882.00.salient.txt] that omits all the automated refactoring changes i.e., propagating the flag through the protocol and tests. The logic for the handle is similarly unsurprising.

|| HandleOpt || allow mod || allow move || open by || check ||
| exact | 0 | 0 | path | mtime, inode |
| content | 0 | 1 | inode | mtime |
| path | 1 | 0 | path | inode |
| ref | 1 | 1 | inode | -- |
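The four permutations in the table reduce to two booleans, from which the "open by" and "check" columns follow mechanically. The sketch below models that derivation as a standalone enum; it mirrors the table's semantics but is NOT the real {{org.apache.hadoop.fs.Options.HandleOpt}} API.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone model of the four PathHandle permutations from the table above.
// EXACT/CONTENT/PATH/REFERENCE correspond to the table rows; the real
// Hadoop HandleOpt API is not used here.
public enum HandleKind {
    EXACT(false, false),   // reject if modified or moved
    CONTENT(false, true),  // reject if modified; follow across moves
    PATH(true, false),     // allow modification; reject if moved
    REFERENCE(true, true); // resolve regardless of changes

    final boolean allowMod;
    final boolean allowMove;

    HandleKind(boolean allowMod, boolean allowMove) {
        this.allowMod = allowMod;
        this.allowMove = allowMove;
    }

    /** "open by" column: a handle that survives moves must resolve by inode ID. */
    public String openBy() {
        return allowMove ? "inode" : "path";
    }

    /** "check" column: which attributes must still match at open time. */
    public List<String> check() {
        List<String> checks = new ArrayList<>();
        if (!allowMod) checks.add("mtime");   // detect content modification
        if (!allowMove) checks.add("inode");  // detect rename/move
        return checks;
    }
}
```

Reading the table this way makes the REFERENCE row's empty "check" column obvious: when both modification and movement are allowed, nothing remains to verify beyond resolving the inode.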
[jira] [Updated] (HDFS-12741) ADD support for KSM --createObjectStore command
[ https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-12741: --- Attachment: HDFS-12741-HDFS-7240.003.patch Thanks [~linyiqun] [~nandakumar131] for the review comments. patch v3 addresses the same. Please have a look. > ADD support for KSM --createObjectStore command > --- > > Key: HDFS-12741 > URL: https://issues.apache.org/jira/browse/HDFS-12741 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee > Fix For: HDFS-7240 > > Attachments: HDFS-12741-HDFS-7240.001.patch, > HDFS-12741-HDFS-7240.002.patch, HDFS-12741-HDFS-7240.003.patch > > > KSM --createObjectStore command reads the ozone configuration information and > creates the KSM version file and reads the SCM version file from the SCM > specified. > > The SCM version file is stored in the KSM metadata directory and before > communicating with an SCM KSM verifies that it is communicating with an SCM > where the relationship has been established via createObjectStore command. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276947#comment-16276947 ] Rushabh S Shah commented on HDFS-12396:
---

The javac warning in the branch-2 and branch-2.8 patches is unrelated to my patch. Many of the failed tests in branch-2 and branch-2.8 are due to {{java.lang.OutOfMemoryError: unable to create new native thread}}.

Re-run of the tests that failed in branch-2.8:
{noformat}
--- T E S T S ---
--- T E S T S ---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.688 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.964 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.205 sec - in org.apache.hadoop.hdfs.TestDFSMkdirs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.214 sec - in org.apache.hadoop.hdfs.TestDFSOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 47, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.771 sec - in org.apache.hadoop.hdfs.TestDFSShell
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.417 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.681 sec - in org.apache.hadoop.hdfs.TestFileAppendRestart
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.039 sec - in org.apache.hadoop.hdfs.TestLeaseRecovery2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.845 sec - in org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.016 sec - in org.apache.hadoop.hdfs.web.TestWebHDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.36 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.438 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.385 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.906 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr

Results :

Tests run: 177, Failures: 0, Errors: 0, Skipped: 0
{noformat}

Re-run of the tests that failed in branch-2:
{noformat}
[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS
[ https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276806#comment-16276806 ] Steve Loughran commented on HDFS-12882: --- Goes too much into the depths of HDFS for me to be a safe reviewer, I'm afraid. Trust me: it's better that way > Support full open(PathHandle) contract in HDFS > -- > > Key: HDFS-12882 > URL: https://issues.apache.org/jira/browse/HDFS-12882 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Chris Douglas >Assignee: Chris Douglas > Attachments: HDFS-12882.00.patch > > > HDFS-7878 added support for {{open(PathHandle)}}, but it only partially > implemented the semantics specified in the contract (i.e., open-by-inodeID). > HDFS should implement all permutations of the default options for > {{PathHandle}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12883) RBF: Document Router and State Store metrics
[ https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12883: - Attachment: metric-screen-shot.jpg Screenshot attached. > RBF: Document Router and State Store metrics > > > Key: HDFS-12883 > URL: https://issues.apache.org/jira/browse/HDFS-12883 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.0.0-alpha3 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: RBF > Attachments: HDFS-12883.001.patch, metric-screen-shot.jpg > > > Document Router and State Store metrics in doc. This will be helpful for > users to monitor RBF. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12883) RBF: Document Router and State Store metrics
[ https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12883: - Attachment: HDFS-12883.001.patch Initial patch attached. > RBF: Document Router and State Store metrics > > > Key: HDFS-12883 > URL: https://issues.apache.org/jira/browse/HDFS-12883 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.0.0-alpha3 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: RBF > Attachments: HDFS-12883.001.patch > > > Document Router and State Store metrics in doc. This will be helpful for > users to monitor RBF. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12883) RBF: Document Router and State Store metrics
[ https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12883: - Status: Patch Available (was: Open) > RBF: Document Router and State Store metrics > > > Key: HDFS-12883 > URL: https://issues.apache.org/jira/browse/HDFS-12883 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.0.0-alpha3 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: RBF > > Document Router and State Store metrics in doc. This will be helpful for > users to monitor RBF. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12883) RBF: Document Router and State Store metrics
Yiqun Lin created HDFS-12883: Summary: RBF: Document Router and State Store metrics Key: HDFS-12883 URL: https://issues.apache.org/jira/browse/HDFS-12883 Project: Hadoop HDFS Issue Type: Sub-task Components: documentation Affects Versions: 3.0.0-alpha3 Reporter: Yiqun Lin Assignee: Yiqun Lin Document Router and State Store metrics in doc. This will be helpful for users to monitor RBF. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org