[jira] [Commented] (HDFS-9928) Make HDFS commands guide up to date
[ https://issues.apache.org/jira/browse/HDFS-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188841#comment-15188841 ] Hadoop QA commented on HDFS-9928: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 54s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12792462/HDFS-9928.001.patch | | JIRA Issue | HDFS-9928 | | Optional Tests | asflicense mvnsite | | uname | Linux 1b26be0d607e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2e040d3 | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/14775/artifact/patchprocess/whitespace-eol.txt | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14775/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > Make HDFS commands guide up to date > --- > > Key: HDFS-9928 > URL: https://issues.apache.org/jira/browse/HDFS-9928 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Labels: documentation, supportability > Attachments: HDFS-9928.001.patch > > > A few HDFS subcommands and options are missing in the documentation. > # envvars: display computed Hadoop environment variables > I also noticed (in HDFS-9927) that a few OIV options are missing, and I'll be > looking for other missing options as well. > Filling this JIRA to fix them all. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9928) Make HDFS commands guide up to date
[ https://issues.apache.org/jira/browse/HDFS-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9928: -- Status: Patch Available (was: Open) > Make HDFS commands guide up to date > --- > > Key: HDFS-9928 > URL: https://issues.apache.org/jira/browse/HDFS-9928 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Labels: documentation, supportability > Attachments: HDFS-9928.001.patch > > > A few HDFS subcommands and options are missing in the documentation. > # envvars: display computed Hadoop environment variables > I also noticed (in HDFS-9927) that a few OIV options are missing, and I'll be > looking for other missing options as well. > Filing this JIRA to fix them all. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9928) Make HDFS commands guide up to date
[ https://issues.apache.org/jira/browse/HDFS-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9928: -- Attachment: HDFS-9928.001.patch Updated all missing commands, subcommands, and options. > Make HDFS commands guide up to date > --- > > Key: HDFS-9928 > URL: https://issues.apache.org/jira/browse/HDFS-9928 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Labels: documentation, supportability > Attachments: HDFS-9928.001.patch > > > A few HDFS subcommands and options are missing in the documentation. > # envvars: display computed Hadoop environment variables > I also noticed (in HDFS-9927) that a few OIV options are missing, and I'll be > looking for other missing options as well. > Filing this JIRA to fix them all. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9937) Update dfsadmin command line help
[ https://issues.apache.org/jira/browse/HDFS-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9937: -- Description: The dfsadmin command line top-level help menu is not consistent with the detailed help menu. * -safemode is missing the -wait and -forceExit options * -restoreFailedStorage options are not described consistently (true/false/check, or Set/Unset/Check?) * -setSpaceQuota optionally takes a -storageType parameter, but it's not clear what the available options are. (Seems to be (SSD, DISK, ARCHIVE), from HdfsQuotaAdminGuide.html) * -reconfig seems to also take namenode as a parameter was: The help for a few dfsadmin subcommands is out of date and inconsistent. -safemode is missing the -wait and -forceExit options -restoreFailedStorage options are not described consistently (true/false/check, or Set/Unset/Check?) -setSpaceQuota optionally takes a -storageType parameter, but it's not clear what the available options are. (Seems to be (SSD, DISK, ARCHIVE), from HdfsQuotaAdminGuide.html) -reconfig seems to also take namenode as a parameter > Update dfsadmin command line help > - > > Key: HDFS-9937 > URL: https://issues.apache.org/jira/browse/HDFS-9937 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Priority: Minor > Labels: commandline, supportability > > The dfsadmin command line top-level help menu is not consistent with the detailed > help menu. > * -safemode is missing the -wait and -forceExit options > * -restoreFailedStorage options are not described consistently > (true/false/check, or Set/Unset/Check?) > * -setSpaceQuota optionally takes a -storageType parameter, but it's not > clear what the available options are. (Seems to be (SSD, DISK, ARCHIVE), from > HdfsQuotaAdminGuide.html) > * -reconfig seems to also take namenode as a parameter -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9935) Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants
[ https://issues.apache.org/jira/browse/HDFS-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188779#comment-15188779 ] Hadoop QA commented on HDFS-9935: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 1s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 37s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 167m 1s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.TestHFlush | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestHFlush | | | hadoop.hdfs.TestCrcCorruption | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12792428/HDFS-9935.001.patch | | JIRA Issue | HDFS-9935 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7335013adccb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed
[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks
[ https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188771#comment-15188771 ] Kai Zheng commented on HDFS-9694: - Thanks [~cnauroth] for the nice hint! > Make existing DFSClient#getFileChecksum() work for striped blocks > - > > Key: HDFS-9694 > URL: https://issues.apache.org/jira/browse/HDFS-9694 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, > HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch > > > This is a sub-task of HDFS-8430 and will get the existing API > {{FileSystem#getFileChecksum(path)}} work for striped files. It will also > refactor existing codes and layout basic work for subsequent tasks like > support of the new API proposed there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9937) Update dfsadmin command line help
Wei-Chiu Chuang created HDFS-9937: - Summary: Update dfsadmin command line help Key: HDFS-9937 URL: https://issues.apache.org/jira/browse/HDFS-9937 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Wei-Chiu Chuang Priority: Minor The help for a few dfsadmin subcommands is out of date and inconsistent. -safemode is missing the -wait and -forceExit options -restoreFailedStorage options are not described consistently (true/false/check, or Set/Unset/Check?) -setSpaceQuota optionally takes a -storageType parameter, but it's not clear what the available options are. (Seems to be (SSD, DISK, ARCHIVE), from HdfsQuotaAdminGuide.html) -reconfig seems to also take namenode as a parameter -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-8475) Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available
[ https://issues.apache.org/jira/browse/HDFS-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J resolved HDFS-8475. --- Resolution: Not A Bug I don't see a bug reported here - the report says the write was done with a single replica and that the single replica was manually corrupted. Please post to u...@hadoop.apache.org for problems observed in usage. If you plan to reopen this, please post precise steps of how the bug may be reproduced. I'd recommend looking at your NN and DN logs to trace further on what's happening. > Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no > length prefix available > > > Key: HDFS-8475 > URL: https://issues.apache.org/jira/browse/HDFS-8475 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Vinod Valecha >Priority: Blocker > > Scenraio: > = > write a file > corrupt block manually > Exception stack trace- > 2015-05-24 02:31:55.291 INFO [T-33716795] > [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Exception in > createBlockOutputStream > java.io.EOFException: Premature EOF: no length prefix available > at > org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514) > [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer createBlockOutputStream > Exception in createBlockOutputStream > java.io.EOFException: Premature EOF: no > length prefix available > at > org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514) > 2015-05-24 02:31:55.291 INFO [T-33716795] > [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Abandoning > BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579 > [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream > Abandoning BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579 > 2015-05-24 02:31:55.299 INFO [T-33716795] > [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Excluding datanode > 10.108.106.59:50010 > [5/24/15 2:31:55:299 UTC] 02027a3b DFSClient I > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream > Excluding datanode 10.108.106.59:50010 > 2015-05-24 02:31:55.300 WARNING [T-33716795] > [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] DataStreamer Exception > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File > /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag > could only be replicated to 0 nodes instead of minReplication (=1). There > are 1 datanode(s) running and 1 node(s) are excluded in this operation. 
> at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555) > [5/24/15 2:31:55:300 UTC] 02027a3b DFSClient W > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer run DataStreamer Exception > > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File > /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag > could only be replicated to 0 nodes instead of minReplication (=1). There > are 1 datanode(s) running and 1 node(s) are excluded in this operation. > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555) > at >
[jira] [Updated] (HDFS-9936) Remove unused import in HdfsServerConstants
[ https://issues.apache.org/jira/browse/HDFS-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9936: Status: Patch Available (was: Open) Attach a simple patch. > Remove unused import in HdfsServerConstants > --- > > Key: HDFS-9936 > URL: https://issues.apache.org/jira/browse/HDFS-9936 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Lin Yiqun >Assignee: Lin Yiqun >Priority: Minor > Attachments: HDFS-9936.001.patch > > > In HDFS-9134, it moved the > {{LEASE_SOFTLIMIT_PERIOD}},{{LEASE_HARDLIMIT_PERIOD}} constants from > {{HdfsServerConstants}}. But in its fixed patch, it import a unused import > {{import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys}} in > {{HdfsServerConstants}}. The code As follow: > {code} > --- > a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java > +++ > b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java > @@ -25,6 +25,7 @@ > > import org.apache.hadoop.classification.InterfaceAudience; > import org.apache.hadoop.hdfs.DFSUtil; > +import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys; > import org.apache.hadoop.hdfs.protocol.HdfsConstants; > import org.apache.hadoop.hdfs.server.datanode.DataNodeLayoutVersion; > import org.apache.hadoop.hdfs.server.namenode.FSDirectory; > @@ -42,28 +43,14 @@ > @InterfaceAudience.Private > public interface HdfsServerConstants { >int MIN_BLOCKS_FOR_WRITE = 1; > + >/** > - * For a HDFS client to write to a file, a lease is granted; During the > lease > - * period, no other client can write to the file. The writing client can > - * periodically renew the lease. When the file is closed, the lease is > - * revoked. The lease duration is bound by this soft limit and a > - * {@link HdfsServerConstants#LEASE_HARDLIMIT_PERIOD hard limit}. Until the > - * soft limit expires, the writer has sole write access to the file. If the > - * soft limit expires and the client fails to close the file or renew the > - * lease, another client can preempt the lease. > - */ > - long LEASE_SOFTLIMIT_PERIOD = 60 * 1000; > - /** > - * For a HDFS client to write to a file, a lease is granted; During the > lease > - * period, no other client can write to the file. The writing client can > - * periodically renew the lease. When the file is closed, the lease is > - * revoked. The lease duration is bound by a > - * {@link HdfsServerConstants#LEASE_SOFTLIMIT_PERIOD soft limit} and this > hard > - * limit. If after the hard limit expires and the client has failed to > renew > - * the lease, HDFS assumes that the client has quit and will automatically > - * close the file on behalf of the writer, and recover the lease. > + * Please see {@link HdfsConstants#LEASE_SOFTLIMIT_PERIOD} and > + * {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD} for more information. > */ > - long LEASE_HARDLIMIT_PERIOD = 60 * LEASE_SOFTLIMIT_PERIOD; > + long LEASE_SOFTLIMIT_PERIOD = HdfsConstants.LEASE_SOFTLIMIT_PERIOD; > + long LEASE_HARDLIMIT_PERIOD = HdfsConstants.LEASE_HARDLIMIT_PERIOD; > + >long LEASE_RECOVER_PERIOD = 10 * 1000; // in ms > {code} > It has already a import for {{HdfsConstants}}. We can remove unused import > here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hua Liu updated HDFS-9901: -- Status: Patch Available (was: Open) > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch, > 0002-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, > which checks the existence and length of a block before spins off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hang more > than 10 minutes so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and can > potentially brought them down. > The patch contains two changes: > 1. Makes DF asynchronous when monitoring the disk by creating a thread that > checks the disk and updates the disk status periodically. When the heartbeat > threads generates storage report, it then reads disk usage information from > memory so that the heartbeat thread won't get blocked during heavy diskIO. > 2. Makes the checks (which required disk accesses) in transferBlock() in > DataNode into a separate thread so the heartbeat does not have to wait for > this when heartbeating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9936) Remove unused import in HdfsServerConstants
[ https://issues.apache.org/jira/browse/HDFS-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9936: Attachment: HDFS-9936.001.patch > Remove unused import in HdfsServerConstants > --- > > Key: HDFS-9936 > URL: https://issues.apache.org/jira/browse/HDFS-9936 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Lin Yiqun >Assignee: Lin Yiqun >Priority: Minor > Attachments: HDFS-9936.001.patch > > > In HDFS-9134, it moved the > {{LEASE_SOFTLIMIT_PERIOD}},{{LEASE_HARDLIMIT_PERIOD}} constants from > {{HdfsServerConstants}}. But in its fixed patch, it import a unused import > {{import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys}} in > {{HdfsServerConstants}}. The code As follow: > {code} > --- > a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java > +++ > b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java > @@ -25,6 +25,7 @@ > > import org.apache.hadoop.classification.InterfaceAudience; > import org.apache.hadoop.hdfs.DFSUtil; > +import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys; > import org.apache.hadoop.hdfs.protocol.HdfsConstants; > import org.apache.hadoop.hdfs.server.datanode.DataNodeLayoutVersion; > import org.apache.hadoop.hdfs.server.namenode.FSDirectory; > @@ -42,28 +43,14 @@ > @InterfaceAudience.Private > public interface HdfsServerConstants { >int MIN_BLOCKS_FOR_WRITE = 1; > + >/** > - * For a HDFS client to write to a file, a lease is granted; During the > lease > - * period, no other client can write to the file. The writing client can > - * periodically renew the lease. When the file is closed, the lease is > - * revoked. The lease duration is bound by this soft limit and a > - * {@link HdfsServerConstants#LEASE_HARDLIMIT_PERIOD hard limit}. Until the > - * soft limit expires, the writer has sole write access to the file. If the > - * soft limit expires and the client fails to close the file or renew the > - * lease, another client can preempt the lease. > - */ > - long LEASE_SOFTLIMIT_PERIOD = 60 * 1000; > - /** > - * For a HDFS client to write to a file, a lease is granted; During the > lease > - * period, no other client can write to the file. The writing client can > - * periodically renew the lease. When the file is closed, the lease is > - * revoked. The lease duration is bound by a > - * {@link HdfsServerConstants#LEASE_SOFTLIMIT_PERIOD soft limit} and this > hard > - * limit. If after the hard limit expires and the client has failed to > renew > - * the lease, HDFS assumes that the client has quit and will automatically > - * close the file on behalf of the writer, and recover the lease. > + * Please see {@link HdfsConstants#LEASE_SOFTLIMIT_PERIOD} and > + * {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD} for more information. > */ > - long LEASE_HARDLIMIT_PERIOD = 60 * LEASE_SOFTLIMIT_PERIOD; > + long LEASE_SOFTLIMIT_PERIOD = HdfsConstants.LEASE_SOFTLIMIT_PERIOD; > + long LEASE_HARDLIMIT_PERIOD = HdfsConstants.LEASE_HARDLIMIT_PERIOD; > + >long LEASE_RECOVER_PERIOD = 10 * 1000; // in ms > {code} > It has already a import for {{HdfsConstants}}. We can remove unused import > here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hua Liu updated HDFS-9901: -- Attachment: 0002-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch, > 0002-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, > which checks the existence and length of a block before spins off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hang more > than 10 minutes so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and can > potentially brought them down. > The patch contains two changes: > 1. Makes DF asynchronous when monitoring the disk by creating a thread that > checks the disk and updates the disk status periodically. When the heartbeat > threads generates storage report, it then reads disk usage information from > memory so that the heartbeat thread won't get blocked during heavy diskIO. > 2. Makes the checks (which required disk accesses) in transferBlock() in > DataNode into a separate thread so the heartbeat does not have to wait for > this when heartbeating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hua Liu updated HDFS-9901: -- Status: Open (was: Patch Available) > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch, > 0002-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, > which checks the existence and length of a block before spins off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hang more > than 10 minutes so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and can > potentially brought them down. > The patch contains two changes: > 1. Makes DF asynchronous when monitoring the disk by creating a thread that > checks the disk and updates the disk status periodically. When the heartbeat > threads generates storage report, it then reads disk usage information from > memory so that the heartbeat thread won't get blocked during heavy diskIO. > 2. Makes the checks (which required disk accesses) in transferBlock() in > DataNode into a separate thread so the heartbeat does not have to wait for > this when heartbeating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9936) Remove unused import in HdfsServerConstants
Lin Yiqun created HDFS-9936: --- Summary: Remove unused import in HdfsServerConstants Key: HDFS-9936 URL: https://issues.apache.org/jira/browse/HDFS-9936 Project: Hadoop HDFS Issue Type: Bug Reporter: Lin Yiqun Assignee: Lin Yiqun Priority: Minor HDFS-9134 moved the {{LEASE_SOFTLIMIT_PERIOD}} and {{LEASE_HARDLIMIT_PERIOD}} constants out of {{HdfsServerConstants}}, but its patch introduced an unused import, {{import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys}}, in {{HdfsServerConstants}}. The code is as follows: {code} --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java @@ -25,6 +25,7 @@ import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.hdfs.DFSUtil; +import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys; import org.apache.hadoop.hdfs.protocol.HdfsConstants; import org.apache.hadoop.hdfs.server.datanode.DataNodeLayoutVersion; import org.apache.hadoop.hdfs.server.namenode.FSDirectory; @@ -42,28 +43,14 @@ @InterfaceAudience.Private public interface HdfsServerConstants { int MIN_BLOCKS_FOR_WRITE = 1; + /** - * For a HDFS client to write to a file, a lease is granted; During the lease - * period, no other client can write to the file. The writing client can - * periodically renew the lease. When the file is closed, the lease is - * revoked. The lease duration is bound by this soft limit and a - * {@link HdfsServerConstants#LEASE_HARDLIMIT_PERIOD hard limit}. Until the - * soft limit expires, the writer has sole write access to the file. If the - * soft limit expires and the client fails to close the file or renew the - * lease, another client can preempt the lease. - */ - long LEASE_SOFTLIMIT_PERIOD = 60 * 1000; - /** - * For a HDFS client to write to a file, a lease is granted; During the lease - * period, no other client can write to the file. The writing client can - * periodically renew the lease. When the file is closed, the lease is - * revoked. The lease duration is bound by a - * {@link HdfsServerConstants#LEASE_SOFTLIMIT_PERIOD soft limit} and this hard - * limit. If after the hard limit expires and the client has failed to renew - * the lease, HDFS assumes that the client has quit and will automatically - * close the file on behalf of the writer, and recover the lease. + * Please see {@link HdfsConstants#LEASE_SOFTLIMIT_PERIOD} and + * {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD} for more information. */ - long LEASE_HARDLIMIT_PERIOD = 60 * LEASE_SOFTLIMIT_PERIOD; + long LEASE_SOFTLIMIT_PERIOD = HdfsConstants.LEASE_SOFTLIMIT_PERIOD; + long LEASE_HARDLIMIT_PERIOD = HdfsConstants.LEASE_HARDLIMIT_PERIOD; + long LEASE_RECOVER_PERIOD = 10 * 1000; // in ms {code} The file already has an import for {{HdfsConstants}}, so we can remove the unused import here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
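For context, a minimal sketch (not the attached HDFS-9936.001.patch) of how the affected section of {{HdfsServerConstants}} reads once the unused {{HdfsClientConfigKeys}} import is removed. Only the lease-related members and the relevant imports are shown; the rest of the interface is assumed to be unchanged.

{code}
package org.apache.hadoop.hdfs.server.common;

import org.apache.hadoop.classification.InterfaceAudience;
// The HdfsClientConfigKeys import is gone; HdfsConstants (imported below)
// is the only class the lease constants actually reference.
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

@InterfaceAudience.Private
public interface HdfsServerConstants {
  int MIN_BLOCKS_FOR_WRITE = 1;

  /**
   * Please see {@link HdfsConstants#LEASE_SOFTLIMIT_PERIOD} and
   * {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD} for more information.
   */
  long LEASE_SOFTLIMIT_PERIOD = HdfsConstants.LEASE_SOFTLIMIT_PERIOD;
  long LEASE_HARDLIMIT_PERIOD = HdfsConstants.LEASE_HARDLIMIT_PERIOD;

  long LEASE_RECOVER_PERIOD = 10 * 1000; // in ms

  // ... remaining members of the interface are unchanged ...
}
{code}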
[jira] [Commented] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread
[ https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188698#comment-15188698 ] Hadoop QA commented on HDFS-9405: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 40s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 6s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 133m 38s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.TestHFlush | | | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs | | | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.server.datanode.TestFsDatasetCache | | | hadoop.hdfs.TestFileAppend | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestHFlush | | | hadoop.hdfs.TestParallelShortCircuitReadUnCached | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12792424/HDFS-9405.02.patch | | JIRA Issue | HDFS-9405 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs
[jira] [Commented] (HDFS-9895) Remove all cached configuration from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188685#comment-15188685 ] Hadoop QA commented on HDFS-9895: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 9 new + 203 unchanged - 15 fixed = 212 total (was 218) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 29s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 11s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 188m 55s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.TestHFlush | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.server.datanode.TestDataNodeLifeline | | | hadoop.hdfs.TestFileAppend | | | hadoop.hdfs.server.namenode.TestEditLog | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.shortcircuit.TestShortCircuitCache | | | hadoop.hdfs.TestSafeMode | | | hadoop.hdfs.server.mover.TestStorageMover | | | hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs | | |
[jira] [Commented] (HDFS-9934) ReverseXML oiv processor should bail out if the XML file's layoutVersion doesn't match oiv's
[ https://issues.apache.org/jira/browse/HDFS-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188679#comment-15188679 ] Hadoop QA commented on HDFS-9934: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 28s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 21s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 136m 12s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache | | | hadoop.hdfs.TestHFlush | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog | | | hadoop.hdfs.TestHFlush | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12792410/HDFS-9934.001.patch | | JIRA Issue | HDFS-9934 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f1e8c08c82d5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC
[jira] [Commented] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188664#comment-15188664 ] Hua Liu commented on HDFS-9901: --- Hi [~elgoiri], thanks for helping explain the approach. I've added it to the "Description" section. > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, > which checks the existence and length of a block before spins off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hang more > than 10 minutes so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and can > potentially brought them down. > The patch contains two changes: > 1. Makes DF asynchronous when monitoring the disk by creating a thread that > checks the disk and updates the disk status periodically. When the heartbeat > threads generates storage report, it then reads disk usage information from > memory so that the heartbeat thread won't get blocked during heavy diskIO. > 2. Makes the checks (which required disk accesses) in transferBlock() in > DataNode into a separate thread so the heartbeat does not have to wait for > this when heartbeating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188663#comment-15188663 ] Hua Liu commented on HDFS-9901: --- Hi [~arpiagariu], I extended the "Description" section with some detailed information about the approach. > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, > which checks the existence and length of a block before spins off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hang more > than 10 minutes so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and can > potentially brought them down. > The patch contains two changes: > 1. Makes DF asynchronous when monitoring the disk by creating a thread that > checks the disk and updates the disk status periodically. When the heartbeat > threads generates storage report, it then reads disk usage information from > memory so that the heartbeat thread won't get blocked during heavy diskIO. > 2. Makes the checks (which required disk accesses) in transferBlock() in > DataNode into a separate thread so the heartbeat does not have to wait for > this when heartbeating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hua Liu updated HDFS-9901: -- Description: During heavy disk IO, we noticed the heartbeat thread hangs in the checkBlock method, which checks the existence and length of a block before spinning off a thread to do the actual transfer. In extreme cases, the heartbeat thread hung for more than 10 minutes, so the namenode marked the datanode as dead and started replicating its blocks, which caused more disk IO on other nodes and could potentially bring them down. The patch contains two changes: 1. Makes DF disk monitoring asynchronous by creating a thread that checks the disk and updates the disk status periodically. When the heartbeat thread generates the storage report, it reads disk usage information from memory, so the heartbeat thread won't get blocked during heavy disk IO. 2. Moves the checks in DataNode's transferBlock() (which require disk accesses) into a separate thread so the heartbeat thread does not have to wait for them. was: During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, which checks the existence and length of a block before spins off a thread to do the actual transferring. In extreme cases, the heartbeat thread hang more than 10 minutes so the namenode marked the datanode as dead and started replicating its blocks, which caused more disk IO on other nodes and can potentially brought them down. The patch contains two changes: 1. Makes DF asynchronous when monitoring the disk by creating a thread that checks the disk and updates the disk status periodically. Then the FsVolumeImpl reads the values that are collected asynchronously. 2. > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed the heartbeat thread hangs in the checkBlock method, > which checks the existence and length of a block before spinning off a thread to > do the actual transfer. In extreme cases, the heartbeat thread hung for more > than 10 minutes, so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and could > potentially bring them down. > The patch contains two changes: > 1. Makes DF disk monitoring asynchronous by creating a thread that > checks the disk and updates the disk status periodically. When the heartbeat > thread generates the storage report, it reads disk usage information from > memory, so the heartbeat thread won't get blocked during heavy disk IO. > 2. Moves the checks in DataNode's transferBlock() (which require disk accesses) > into a separate thread so the heartbeat thread does not have to wait for them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
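To make change (1) above concrete, here is a small, generic sketch of the caching pattern described: a background thread refreshes disk-usage figures so the heartbeat path only reads values already held in memory. The class and method names are invented for illustration; this is not the attached patch, which applies the idea to {{DF}}/{{FsVolumeImpl}} inside the DataNode.

{code}
import java.io.File;

/**
 * Illustrative sketch only: a background thread keeps disk-usage numbers
 * refreshed so that a latency-sensitive caller (e.g. the heartbeat /
 * storage-report path) reads them from memory instead of touching the disk.
 */
class CachedDiskUsage implements Runnable {
  private final File volume;
  private final long refreshIntervalMs;
  // Written by the refresher thread, read by the heartbeat thread.
  private volatile long capacity;
  private volatile long available;

  CachedDiskUsage(File volume, long refreshIntervalMs) {
    this.volume = volume;
    this.refreshIntervalMs = refreshIntervalMs;
  }

  /** Primes the cache once, then starts the background refresher thread. */
  void start() {
    refresh();
    Thread refresher = new Thread(this, "disk-usage-refresher");
    refresher.setDaemon(true);
    refresher.start();
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        Thread.sleep(refreshIntervalMs);
      } catch (InterruptedException e) {
        return; // stop refreshing on shutdown
      }
      refresh(); // may block under heavy IO, but only this thread waits
    }
  }

  private void refresh() {
    capacity = volume.getTotalSpace();
    available = volume.getUsableSpace();
  }

  long getCapacity()  { return capacity; }   // non-blocking, served from memory
  long getAvailable() { return available; }  // non-blocking, served from memory
}
{code}

A caller such as a heartbeat loop would construct one of these per volume, call start() once, and then read getCapacity()/getAvailable() when building its storage report. Change (2) in the description follows the same principle: blocking checks are handed to another thread so the heartbeat itself never waits on the disk.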
[jira] [Updated] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hua Liu updated HDFS-9901: -- Description: During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, which checks the existence and length of a block before spins off a thread to do the actual transferring. In extreme cases, the heartbeat thread hang more than 10 minutes so the namenode marked the datanode as dead and started replicating its blocks, which caused more disk IO on other nodes and can potentially brought them down. The patch contains two changes: 1. Makes DF asynchronous when monitoring the disk by creating a thread that checks the disk and updates the disk status periodically. Then the FsVolumeImpl reads the values that are collected asynchronously. 2. was: During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, which checks the existence and length of a block before spins off a thread to do the actual transferring. In extreme cases, the heartbeat thread hang more than 10 minutes so the namenode marked the datanode as dead and started replicating its blocks, which caused more disk IO on other nodes and can potentially brought them down. The patch contains two changes: 1. > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed hearbeat thread hangs on checkBlock method, > which checks the existence and length of a block before spins off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hang more > than 10 minutes so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and can > potentially brought them down. > The patch contains two changes: > 1. Makes DF asynchronous when monitoring the disk by creating a thread that > checks the disk and updates the disk status periodically. Then the > FsVolumeImpl reads the values that are collected asynchronously. > 2. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hua Liu updated HDFS-9901: -- Description: During heavy disk IO, we noticed the heartbeat thread hangs in the checkBlock method, which checks the existence and length of a block before spinning off a thread to do the actual transferring. In extreme cases, the heartbeat thread hung for more than 10 minutes, so the namenode marked the datanode as dead and started replicating its blocks, which caused more disk IO on other nodes and could potentially bring them down. The patch contains two changes: 1. was: During heavy disk IO, we noticed the heartbeat thread hangs in the checkBlock method, which checks the existence and length of a block before spinning off a thread to do the actual transferring. In extreme cases, the heartbeat thread hung for more than 10 minutes, so the namenode marked the datanode as dead and started replicating its blocks, which caused more disk IO on other nodes and could potentially bring them down. > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed the heartbeat thread hangs in the checkBlock method, > which checks the existence and length of a block before spinning off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hung for more > than 10 minutes, so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and could > potentially bring them down. > The patch contains two changes: > 1. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9933) ReverseXML should be capitalized in oiv usage message
[ https://issues.apache.org/jira/browse/HDFS-9933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188637#comment-15188637 ] Hadoop QA commented on HDFS-9933: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 41s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 17s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s {color} | {color:red} Patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 106m 31s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.TestFileAppend | | | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock | | JDK v1.8.0_74 Timed out junit tests | org.apache.hadoop.hdfs.TestReplication | | | org.apache.hadoop.hdfs.TestDecommission | | | org.apache.hadoop.hdfs.TestDFSStripedOutputStream | | |
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188632#comment-15188632 ] Hadoop QA commented on HDFS-9427: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 58s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 34s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 28s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 46s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 16s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 15s {color} | {color:green} root: patch generated 0 new + 575 unchanged - 6 fixed = 575 total (was 581) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 42s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 46s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 0s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 0s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 50s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 15s {color} | {color:green} hadoop-yarn-registry in the patch passed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 15s {color} | {color:red} hadoop-mapreduce-client-core in the
[jira] [Commented] (HDFS-7166) SbNN Web UI shows #Under replicated blocks and #pending deletion blocks
[ https://issues.apache.org/jira/browse/HDFS-7166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188619#comment-15188619 ] Haohui Mai commented on HDFS-7166: -- LGTM. +1. For unit tests, I will file another JIRA to track them. > SbNN Web UI shows #Under replicated blocks and #pending deletion blocks > --- > > Key: HDFS-7166 > URL: https://issues.apache.org/jira/browse/HDFS-7166 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Reporter: Juan Yu >Assignee: Wei-Chiu Chuang > Attachments: HDFS-7166.001.patch > > > I believe that's a regression of HDFS-5333. > According to HDFS-2901 and HDFS-6178, > the Standby NameNode doesn't compute replication queues, so we shouldn't show > under-replicated/missing blocks or corrupt files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9935) Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants
[ https://issues.apache.org/jira/browse/HDFS-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9935: Attachment: HDFS-9935.001.patch > Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants > > > Key: HDFS-9935 > URL: https://issues.apache.org/jira/browse/HDFS-9935 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Lin Yiqun >Assignee: Lin Yiqun >Priority: Minor > Attachments: HDFS-9935.001.patch > > > In HDFS-9134, the {{LEASE_SOFTLIMIT_PERIOD}} and {{LEASE_HARDLIMIT_PERIOD}} constants were moved from > {{HdfsServerConstants}} to {{HdfsConstants}} because these two constants are > used by {{DFSClient}}, which was moved to {{hadoop-hdfs-client}}, and constants > in {{HdfsConstants}} can be used by both the client and server side. In addition, > I have checked that these two constants in {{HdfsServerConstants}} are > no longer used anywhere in the project and were all replaced by > {{HdfsConstants.LEASE_SOFTLIMIT_PERIOD}} and {{HdfsConstants.LEASE_HARDLIMIT_PERIOD}}. > So I think we can remove these unused constant values from > {{HdfsServerConstants}} completely and use the ones in > {{HdfsConstants}} in the future. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9935) Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants
[ https://issues.apache.org/jira/browse/HDFS-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9935: Status: Patch Available (was: Open) Attaching a simple patch; kindly review. > Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants > > > Key: HDFS-9935 > URL: https://issues.apache.org/jira/browse/HDFS-9935 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Lin Yiqun >Assignee: Lin Yiqun >Priority: Minor > Attachments: HDFS-9935.001.patch > > > In HDFS-9134, the {{LEASE_SOFTLIMIT_PERIOD}} and {{LEASE_HARDLIMIT_PERIOD}} constants were moved from > {{HdfsServerConstants}} to {{HdfsConstants}} because these two constants are > used by {{DFSClient}}, which was moved to {{hadoop-hdfs-client}}, and constants > in {{HdfsConstants}} can be used by both the client and server side. In addition, > I have checked that these two constants in {{HdfsServerConstants}} are > no longer used anywhere in the project and were all replaced by > {{HdfsConstants.LEASE_SOFTLIMIT_PERIOD}} and {{HdfsConstants.LEASE_HARDLIMIT_PERIOD}}. > So I think we can remove these unused constant values from > {{HdfsServerConstants}} completely and use the ones in > {{HdfsConstants}} in the future. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9935) Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants
[ https://issues.apache.org/jira/browse/HDFS-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9935: Description: In HDFS-9134, the {{LEASE_SOFTLIMIT_PERIOD}} and {{LEASE_HARDLIMIT_PERIOD}} constants were moved from {{HdfsServerConstants}} to {{HdfsConstants}} because these two constants are used by {{DFSClient}}, which was moved to {{hadoop-hdfs-client}}, and constants in {{HdfsConstants}} can be used by both the client and server side. In addition, I have checked that these two constants in {{HdfsServerConstants}} are no longer used anywhere in the project and were all replaced by {{HdfsConstants.LEASE_SOFTLIMIT_PERIOD}} and {{HdfsConstants.LEASE_HARDLIMIT_PERIOD}}. So I think we can remove these unused constant values from {{HdfsServerConstants}} completely and use the ones in {{HdfsConstants}} in the future. (was: In HDFS-9134, the LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants were moved from {{HdfsServerConstants}} to {{HdfsConstants}} because these two constants are used by {{DFSClient}}, which was moved to {{hadoop-hdfs-client}}, and constants in {{HdfsConstants}} can be used by both the client and server side. In addition, I have checked that these two constants in {{HdfsServerConstants}} are no longer used anywhere in the project and were all replaced by {{HdfsConstants.LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD}}. So I think we can remove these unused constant values from {{HdfsServerConstants}} completely and use the ones in {{HdfsConstants}} in the future.) > Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants > > > Key: HDFS-9935 > URL: https://issues.apache.org/jira/browse/HDFS-9935 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Lin Yiqun >Assignee: Lin Yiqun >Priority: Minor > > In HDFS-9134, the {{LEASE_SOFTLIMIT_PERIOD}} and {{LEASE_HARDLIMIT_PERIOD}} constants were moved from > {{HdfsServerConstants}} to {{HdfsConstants}} because these two constants are > used by {{DFSClient}}, which was moved to {{hadoop-hdfs-client}}, and constants > in {{HdfsConstants}} can be used by both the client and server side. In addition, > I have checked that these two constants in {{HdfsServerConstants}} are > no longer used anywhere in the project and were all replaced by > {{HdfsConstants.LEASE_SOFTLIMIT_PERIOD}} and {{HdfsConstants.LEASE_HARDLIMIT_PERIOD}}. > So I think we can remove these unused constant values from > {{HdfsServerConstants}} completely and use the ones in > {{HdfsConstants}} in the future. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9935) Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants
Lin Yiqun created HDFS-9935: --- Summary: Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants from HdfsServerConstants Key: HDFS-9935 URL: https://issues.apache.org/jira/browse/HDFS-9935 Project: Hadoop HDFS Issue Type: Bug Reporter: Lin Yiqun Assignee: Lin Yiqun Priority: Minor In HDFS-9134, the LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD constants were moved from {{HdfsServerConstants}} to {{HdfsConstants}} because these two constants are used by {{DFSClient}}, which was moved to {{hadoop-hdfs-client}}, and constants in {{HdfsConstants}} can be used by both the client and server side. In addition, I have checked that these two constants in {{HdfsServerConstants}} are no longer used anywhere in the project and were all replaced by {{HdfsConstants.LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD}}. So I think we can remove these unused constant values from {{HdfsServerConstants}} completely and use the ones in {{HdfsConstants}} in the future. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
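For readers following the cleanup, the end state proposed above is simply that all code references the lease limits from {{HdfsConstants}}, with no duplicate in {{HdfsServerConstants}}. Below is a small illustrative sketch, assuming the two constants live in org.apache.hadoop.hdfs.protocol.HdfsConstants as the issue describes; the LeaseCheck helper itself is hypothetical.
{code:java}
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

/** Hypothetical helper showing the intended usage: only HdfsConstants is referenced. */
public final class LeaseCheck {
  private LeaseCheck() {
  }

  /** True once a lease is older than the soft limit and may be recovered by another client. */
  public static boolean pastSoftLimit(long leaseAgeMs) {
    return leaseAgeMs > HdfsConstants.LEASE_SOFTLIMIT_PERIOD;
  }

  /** True once a lease is older than the hard limit and the NameNode will force recovery. */
  public static boolean pastHardLimit(long leaseAgeMs) {
    return leaseAgeMs > HdfsConstants.LEASE_HARDLIMIT_PERIOD;
  }
}
{code}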
[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client
[ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188579#comment-15188579 ] Hadoop QA commented on HDFS-3702: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 21 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 18s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 4s {color} | {color:red} root: patch generated 9 new + 684 unchanged - 7 fixed = 693 total (was 691) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} mvneclipse {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 1s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 5m 2s {color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_74 with JDK v1.8.0_74 generated 7 new + 1 unchanged - 0 fixed = 8 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 8m 39s {color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_95 with JDK v1.7.0_95 generated 7 new + 1 unchanged - 0 fixed = 8 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 17s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 19s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. {color} | | {color:green}+1{color} | {color:green} unit {color} |
[jira] [Commented] (HDFS-9926) ozone : Add volume commands to CLI
[ https://issues.apache.org/jira/browse/HDFS-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188561#comment-15188561 ] Hadoop QA commented on HDFS-9926: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 32s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 6s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 31s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 1s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 133m 18s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestFileAppend | | | hadoop.hdfs.server.datanode.TestDataXceiverLazyPersistHint | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService | | |
[jira] [Updated] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread
[ https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9405: Attachment: HDFS-9405.02.patch Thanks [~asuresh] for the comment, good point. Patch 2 retries every 1 second for up to 1 minute. > When starting a file, NameNode should generate EDEK in a separate thread > > > Key: HDFS-9405 > URL: https://issues.apache.org/jira/browse/HDFS-9405 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, namenode >Affects Versions: 2.7.1 >Reporter: Zhe Zhang >Assignee: Xiao Chen > Attachments: HDFS-9405.01.patch, HDFS-9405.02.patch > > > {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation > to the key provider, which could be slow or cause timeout. It should be done > as a separate thread so as to return a proper error message to the RPC caller. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
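A rough sketch of the retry behaviour mentioned in the comment above (retry every second, give up after roughly a minute). The EdekSource interface and the constants wired in here are illustrative placeholders around whatever the patch actually does with generateEncryptedDataEncryptionKey; this is not the patch itself.
{code:java}
import java.io.IOException;
import java.util.concurrent.TimeUnit;

/** Hypothetical sketch of a bounded retry loop around a slow key-provider call. */
public class EdekRetry {
  /** Stand-in for the call that talks to the key provider and may fail transiently. */
  public interface EdekSource {
    byte[] generateEdek(String keyName) throws IOException;
  }

  private static final long RETRY_INTERVAL_MS = 1000; // retry every 1 second...
  private static final int MAX_ATTEMPTS = 60;         // ...for up to about 1 minute

  public static byte[] generateWithRetry(EdekSource source, String keyName)
      throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
        return source.generateEdek(keyName);
      } catch (IOException e) {
        last = e; // remember the failure and back off briefly before the next attempt
        TimeUnit.MILLISECONDS.sleep(RETRY_INTERVAL_MS);
      }
    }
    throw new IOException("Could not generate an EDEK for " + keyName
        + " after " + MAX_ATTEMPTS + " attempts", last);
  }
}
{code}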
[jira] [Updated] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread
[ https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9405: Status: Patch Available (was: Open) > When starting a file, NameNode should generate EDEK in a separate thread > > > Key: HDFS-9405 > URL: https://issues.apache.org/jira/browse/HDFS-9405 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, namenode >Affects Versions: 2.7.1 >Reporter: Zhe Zhang >Assignee: Xiao Chen > Attachments: HDFS-9405.01.patch, HDFS-9405.02.patch > > > {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation > to the key provider, which could be slow or cause timeout. It should be done > as a separate thread so as to return a proper error message to the RPC caller. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9927) Document the new OIV ReverseXML processor
[ https://issues.apache.org/jira/browse/HDFS-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188520#comment-15188520 ] Hadoop QA commented on HDFS-9927: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 15s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12792327/HDFS-9927.001.patch | | JIRA Issue | HDFS-9927 | | Optional Tests | asflicense mvnsite | | uname | Linux c11349f3f9f1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2e040d3 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14768/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > Document the new OIV ReverseXML processor > - > > Key: HDFS-9927 > URL: https://issues.apache.org/jira/browse/HDFS-9927 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: documentation, supportability > Attachments: HDFS-9927.001.patch > > > HDFS-9835 added a new ReverseXML processor which reconstructs an fsimage from > an XML file. > This new feature should be documented, and perhaps label it as "experimental" > in command line. > Also, OIV section in HDFSCommands.md should be updated too, to include new > processors options and it should also include links to OIV page. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
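As a usage illustration for the documentation being requested above, the round trip is: dump an fsimage to XML with the XML processor, optionally hand-edit the XML, and rebuild a binary fsimage with the ReverseXML processor. The sketch below simply shells out to the hdfs CLI; the file names are placeholders, and the exact processor names and flags should be taken from the committed documentation rather than from this example.
{code:java}
import java.io.IOException;

/** Hypothetical illustration of an fsimage -> XML -> fsimage round trip via the oiv tool. */
public class OivRoundTrip {
  private static void run(String... cmd) throws IOException, InterruptedException {
    Process p = new ProcessBuilder(cmd).inheritIO().start();
    if (p.waitFor() != 0) {
      throw new IOException("command failed: " + String.join(" ", cmd));
    }
  }

  public static void main(String[] args) throws Exception {
    // 1. Dump an existing fsimage to XML (input/output paths are placeholders).
    run("hdfs", "oiv", "-p", "XML", "-i", "fsimage_0000000000000000042", "-o", "fsimage.xml");
    // 2. Edit fsimage.xml by hand if needed, then reconstruct a binary fsimage from it.
    run("hdfs", "oiv", "-p", "ReverseXML", "-i", "fsimage.xml", "-o", "fsimage_rebuilt");
  }
}
{code}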
[jira] [Commented] (HDFS-9874) Long living DataXceiver threads cause volume shutdown to block.
[ https://issues.apache.org/jira/browse/HDFS-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188508#comment-15188508 ] Hadoop QA commented on HDFS-9874: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 145 unchanged - 0 fixed = 146 total (was 145) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 15s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 24s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 180m 17s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.server.datanode.TestTriggerBlockReport | | | hadoop.hdfs.server.namenode.TestEditLog | | | hadoop.hdfs.TestFileAppend | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog | | | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL |
[jira] [Commented] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188501#comment-15188501 ] Inigo Goiri commented on HDFS-9901: --- The first version of the patch: # Makes {{DF}} asynchronous when monitoring the disk by creating a thread that checks the disk and updates the disk status periodically. Then the {{FsVolumeImpl}} reads the values that are collected asynchronously. # Moves the checks (which require disk accesses) in {{transferBlock()}} in {{DataNode}} into a separate thread so the heartbeat does not have to wait for them when heartbeating. > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed the heartbeat thread hangs in the checkBlock method, > which checks the existence and length of a block before spinning off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hung for more > than 10 minutes, so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and could > potentially bring them down. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
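The second item in the comment above, taking the block existence/length checks out of the heartbeat path, amounts to handing the disk-touching work to another thread so the heartbeat can return immediately. A rough, hypothetical Java sketch of that idea follows; handleTransferCommand and checkBlockAndTransfer are placeholder names, not the real DataNode methods.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Hypothetical sketch: the heartbeat handler only enqueues work; disk checks run on another thread. */
public class TransferCommandHandler {
  private final ExecutorService transferExecutor =
      Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "block-transfer-checker");
        t.setDaemon(true);
        return t;
      });

  /** Invoked while processing a heartbeat response; must not block on disk IO. */
  public void handleTransferCommand(String blockId, String[] targets) {
    transferExecutor.submit(() -> checkBlockAndTransfer(blockId, targets));
  }

  /** Runs on the executor thread: validate the replica on disk, then start the transfer. */
  private void checkBlockAndTransfer(String blockId, String[] targets) {
    // Placeholder for: verify the block file exists, compare its on-disk length with the
    // expected length, and only then spin off the actual block transfer.
    System.out.println("checking and transferring " + blockId + " to " + String.join(",", targets));
  }
}
{code}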
[jira] [Commented] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188448#comment-15188448 ] Hadoop QA commented on HDFS-9901: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 34s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 58s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 30s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 7s {color} | {color:red} root: patch generated 5 new + 187 unchanged - 1 fixed = 192 total (was 188) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 54s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 16s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 0s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 38s {color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 36s {color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s {color} | {color:green} Patch does not generate ASF License warnings. {color} | |
[jira] [Commented] (HDFS-9934) ReverseXML oiv processor should bail out if the XML file's layoutVersion doesn't match oiv's
[ https://issues.apache.org/jira/browse/HDFS-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188442#comment-15188442 ] Aaron T. Myers commented on HDFS-9934: -- +1 pending Jenkins, the patch looks good to me. > ReverseXML oiv processor should bail out if the XML file's layoutVersion > doesn't match oiv's > > > Key: HDFS-9934 > URL: https://issues.apache.org/jira/browse/HDFS-9934 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.8.0 >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: HDFS-9934.001.patch > > > ReverseXML oiv processor should bail out if the XML file's layoutVersion > doesn't match oiv's -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9933) ReverseXML should be capitalized in oiv usage message
[ https://issues.apache.org/jira/browse/HDFS-9933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188441#comment-15188441 ] Aaron T. Myers commented on HDFS-9933: -- +1, patch looks good to me. > ReverseXML should be capitalized in oiv usage message > - > > Key: HDFS-9933 > URL: https://issues.apache.org/jira/browse/HDFS-9933 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.8.0 >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe >Priority: Minor > Attachments: HDFS-9933.001.patch > > > ReverseXML should be capitalized in oiv usage message -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9934) ReverseXML oiv processor should bail out if the XML file's layoutVersion doesn't match oiv's
[ https://issues.apache.org/jira/browse/HDFS-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-9934: --- Attachment: HDFS-9934.001.patch > ReverseXML oiv processor should bail out if the XML file's layoutVersion > doesn't match oiv's > > > Key: HDFS-9934 > URL: https://issues.apache.org/jira/browse/HDFS-9934 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.8.0 >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: HDFS-9934.001.patch > > > ReverseXML oiv processor should bail out if the XML file's layoutVersion > doesn't match oiv's -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9934) ReverseXML oiv processor should bail out if the XML file's layoutVersion doesn't match oiv's
[ https://issues.apache.org/jira/browse/HDFS-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-9934: --- Status: Patch Available (was: Open) > ReverseXML oiv processor should bail out if the XML file's layoutVersion > doesn't match oiv's > > > Key: HDFS-9934 > URL: https://issues.apache.org/jira/browse/HDFS-9934 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.8.0 >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: HDFS-9934.001.patch > > > ReverseXML oiv processor should bail out if the XML file's layoutVersion > doesn't match oiv's -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9934) ReverseXML oiv processor should bail out if the XML file's layoutVersion doesn't match oiv's
Colin Patrick McCabe created HDFS-9934: -- Summary: ReverseXML oiv processor should bail out if the XML file's layoutVersion doesn't match oiv's Key: HDFS-9934 URL: https://issues.apache.org/jira/browse/HDFS-9934 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.8.0 Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe ReverseXML oiv processor should bail out if the XML file's layoutVersion doesn't match oiv's -- This message was sent by Atlassian JIRA (v6.3.4#6332)
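The guard being requested above is conceptually simple: read the layoutVersion recorded in the XML before reconstructing anything, and refuse to proceed if it is not the version this build of oiv writes. Below is a hypothetical sketch of that check; SUPPORTED_LAYOUT_VERSION and the method name are placeholders, not the actual ReverseXML code.
{code:java}
import java.io.IOException;

/** Hypothetical sketch of bailing out when the XML file's layoutVersion does not match oiv's. */
public class LayoutVersionGuard {
  /** Placeholder for the layout version this oiv build knows how to reconstruct. */
  private static final int SUPPORTED_LAYOUT_VERSION = -63;

  public static void checkLayoutVersion(String layoutVersionText) throws IOException {
    final int found;
    try {
      found = Integer.parseInt(layoutVersionText.trim());
    } catch (NumberFormatException e) {
      throw new IOException("Malformed layoutVersion in XML: " + layoutVersionText, e);
    }
    if (found != SUPPORTED_LAYOUT_VERSION) {
      // Bail out early instead of producing an fsimage the NameNode cannot load.
      throw new IOException("Layout version mismatch: XML file has " + found
          + " but this oiv build supports " + SUPPORTED_LAYOUT_VERSION);
    }
  }
}
{code}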
[jira] [Updated] (HDFS-9895) Remove all cached configuration from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Attachment: HDFS-9895.000.patch The patch V000 is posted here for review, thanks! > Remove all cached configuration from DataNode > - > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-9895.000.patch > > > Since DataNode inherits ReconfigurableBase, with Configured as the base class > where the configuration is maintained, all cached configurations in DataNode > should be removed for brevity and consistency purposes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9895) Remove all cached configuration from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Status: Patch Available (was: Open) > Remove all cached configuration from DataNode > - > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-9895.000.patch > > > Since DataNode inherits ReconfigurableBase, with Configured as the base class > where the configuration is maintained, all cached configurations in DataNode > should be removed for brevity and consistency purposes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
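To illustrate the direction of the cleanup described above: instead of copying configuration values into fields at construction time, code built on a Configured base class can consult getConf() whenever a value is needed, so a reconfiguration is picked up without extra bookkeeping. A hypothetical before/after sketch follows; the class name and configuration key are made up for illustration and are not part of the HDFS-9895 patch.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;

/** Hypothetical sketch: read settings through getConf() instead of caching them in fields. */
public class TransferThrottler extends Configured {

  // Before: a cached copy that goes stale if the configuration is reloaded.
  // private final long bandwidthPerSec;

  public TransferThrottler(Configuration conf) {
    super(conf);
  }

  /** After: always read from the live Configuration held by the Configured base class. */
  public long getBandwidthPerSec() {
    return getConf().getLong("dfs.datanode.example.bandwidth-per-sec", 1024 * 1024);
  }
}
{code}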
[jira] [Commented] (HDFS-9927) Document the new OIV ReverseXML processor
[ https://issues.apache.org/jira/browse/HDFS-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188399#comment-15188399 ] Colin Patrick McCabe commented on HDFS-9927: +1 pending Jenkins. Thanks, [~jojochuang]. > Document the new OIV ReverseXML processor > - > > Key: HDFS-9927 > URL: https://issues.apache.org/jira/browse/HDFS-9927 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: documentation, supportability > Attachments: HDFS-9927.001.patch > > > HDFS-9835 added a new ReverseXML processor which reconstructs an fsimage from > an XML file. > This new feature should be documented, and perhaps label it as "experimental" > in command line. > Also, OIV section in HDFSCommands.md should be updated too, to include new > processors options and it should also include links to OIV page. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9927) Document the new OIV ReverseXML processor
[ https://issues.apache.org/jira/browse/HDFS-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-9927: --- Affects Version/s: (was: 2.9.0) 2.8.0 Target Version/s: 2.8.0 Status: Patch Available (was: Open) > Document the new OIV ReverseXML processor > - > > Key: HDFS-9927 > URL: https://issues.apache.org/jira/browse/HDFS-9927 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: documentation, supportability > Attachments: HDFS-9927.001.patch > > > HDFS-9835 added a new ReverseXML processor which reconstructs an fsimage from > an XML file. > This new feature should be documented, and perhaps label it as "experimental" > in command line. > Also, OIV section in HDFSCommands.md should be updated too, to include new > processors options and it should also include links to OIV page. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9835) OIV: add ReverseXML processor which reconstructs an fsimage from an XML file
[ https://issues.apache.org/jira/browse/HDFS-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-9835: --- Target Version/s: 2.8.0 (was: 2.9.0) Fix Version/s: (was: 2.9.0) 2.8.0 > OIV: add ReverseXML processor which reconstructs an fsimage from an XML file > > > Key: HDFS-9835 > URL: https://issues.apache.org/jira/browse/HDFS-9835 > Project: Hadoop HDFS > Issue Type: New Feature > Components: tools >Affects Versions: 2.0.0-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Fix For: 2.8.0 > > Attachments: HDFS-9835.001.patch, HDFS-9835.002.patch, > HDFS-9835.003.patch, HDFS-9835.004.patch, HDFS-9835.005.patch, > HDFS-9835.006.patch > > > OIV: add ReverseXML processor which reconstructs an fsimage from an XML file. > This will make it easy to create fsimages for testing, and manually edit > fsimages when there is corruption. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HDFS-9835) OIV: add ReverseXML processor which reconstructs an fsimage from an XML file
[ https://issues.apache.org/jira/browse/HDFS-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177000#comment-15177000 ] Colin Patrick McCabe edited comment on HDFS-9835 at 3/10/16 12:42 AM: -- committed to 2.8 was (Author: cmccabe): committed to 2.9 > OIV: add ReverseXML processor which reconstructs an fsimage from an XML file > > > Key: HDFS-9835 > URL: https://issues.apache.org/jira/browse/HDFS-9835 > Project: Hadoop HDFS > Issue Type: New Feature > Components: tools >Affects Versions: 2.0.0-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Fix For: 2.8.0 > > Attachments: HDFS-9835.001.patch, HDFS-9835.002.patch, > HDFS-9835.003.patch, HDFS-9835.004.patch, HDFS-9835.005.patch, > HDFS-9835.006.patch > > > OIV: add ReverseXML processor which reconstructs an fsimage from an XML file. > This will make it easy to create fsimages for testing, and manually edit > fsimages when there is corruption. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9933) ReverseXML should be capitalized in oiv usage message
[ https://issues.apache.org/jira/browse/HDFS-9933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-9933: --- Attachment: HDFS-9933.001.patch > ReverseXML should be capitalized in oiv usage message > - > > Key: HDFS-9933 > URL: https://issues.apache.org/jira/browse/HDFS-9933 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.8.0 >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe >Priority: Minor > Attachments: HDFS-9933.001.patch > > > ReverseXML should be capitalized in oiv usage message -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9933) ReverseXML should be capitalized in oiv usage message
[ https://issues.apache.org/jira/browse/HDFS-9933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-9933: --- Status: Patch Available (was: Open) > ReverseXML should be capitalized in oiv usage message > - > > Key: HDFS-9933 > URL: https://issues.apache.org/jira/browse/HDFS-9933 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.8.0 >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe >Priority: Minor > Attachments: HDFS-9933.001.patch > > > ReverseXML should be capitalized in oiv usage message -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9933) ReverseXML should be capitalized in oiv usage message
Colin Patrick McCabe created HDFS-9933: -- Summary: ReverseXML should be capitalized in oiv usage message Key: HDFS-9933 URL: https://issues.apache.org/jira/browse/HDFS-9933 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.8.0 Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor ReverseXML should be capitalized in oiv usage message -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9925) Ozone: Add Ozone Client lib for bucket handling
[ https://issues.apache.org/jira/browse/HDFS-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188385#comment-15188385 ] Anu Engineer commented on HDFS-9925: Test failures are not related to this patch. > Ozone: Add Ozone Client lib for bucket handling > --- > > Key: HDFS-9925 > URL: https://issues.apache.org/jira/browse/HDFS-9925 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-9925-HDFS-7240.001.patch, > HDFS-9925-HDFS-7240.002.patch > > > Add bucket handling lib code and tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9925) Ozone: Add Ozone Client lib for bucket handling
[ https://issues.apache.org/jira/browse/HDFS-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188384#comment-15188384 ] Hadoop QA commented on HDFS-9925: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 7s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 8s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 53s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 132m 12s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | | | hadoop.hdfs.server.datanode.TestDataXceiverLazyPersistHint | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | |
[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client
[ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188362#comment-15188362 ] Hadoop QA commented on HDFS-3702: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 21 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 11s {color} | {color:red} root: patch generated 9 new + 684 unchanged - 7 fixed = 693 total (was 691) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
mvneclipse {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 50s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 46s {color} | {color:red} hadoop-common-project_hadoop-common-jdk1.8.0_74 with JDK v1.8.0_74 generated 6 new + 1 unchanged - 0 fixed = 7 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 8m 18s {color} | {color:red} hadoop-common-project_hadoop-common-jdk1.7.0_95 with JDK v1.7.0_95 generated 6 new + 13 unchanged - 0 fixed = 19 total (was 13) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 57s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. {color} | | {color:green}+1{color} | {color:green} unit {color} |
[jira] [Commented] (HDFS-9719) Refactoring ErasureCodingWorker into smaller reusable constructs
[ https://issues.apache.org/jira/browse/HDFS-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188354#comment-15188354 ] Kai Zheng commented on HDFS-9719: - Thanks [~umamaheswararao] for the review and great comments! bq. I am seeing some variable names still like “toRecoverLen”. Can we take chance in this patch to change them like toReconstructLen ? Yes. I will fix the variable and check one more time for such instances. bq. doReadMinimum : this method name looks to be wrong. Its actually reading from minimum required sourced data nodes. But this name looks like it is reading minimum data length/what? The concern seems reasonable. I will rename doReadMinimum => doReadMinimumSources, and likewise readMinimum => readMinimumSources. bq. Also I assume StripedReader itself should handle multiple chunk/cell readers. Yeah, *Striped* should by itself already convey the meaning of handling multiple units, whether source datanodes or target datanodes. bq. how about renaming class name like StripedReaders -> StripedReader and StripedReader -> StripedChunkReader and the same comment applies for StripedWriter* Pretty good suggestions for more readable component names! Thanks a lot! Will update the patch accordingly soon. ... > Refactoring ErasureCodingWorker into smaller reusable constructs > > > Key: HDFS-9719 > URL: https://issues.apache.org/jira/browse/HDFS-9719 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HDFS-9719-v1.patch, HDFS-9719-v2.patch, > HDFS-9719-v3.patch, HDFS-9719-v4.patch, HDFS-9719-v5.patch, HDFS-9719-v6.patch > > > This proposes refactoring {{ErasureCodingWorker}} into smaller > constructs to be reused in other places, like block group checksum computing > on the datanode side. As discussed in HDFS-8430 and implemented in the HDFS-9694 > patch, checksum computing for striped block groups would be distributed to the > datanodes in the group, where block data must be reconstructable when > missing/corrupted in order to recompute the block checksum. Most of the > needed code is in the current ErasureCodingWorker and could be reused in > order to avoid duplication. Fortunately, we have very good and complete > tests, which would make the refactoring much easier. The refactoring will > also help a lot with subsequent tasks in phase II for non-striped erasure > coded files and blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
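To make the renaming discussion above concrete, here is a rough, hypothetical Java sketch of the reader side after the suggested renames (StripedReaders -> StripedReader, StripedReader -> StripedChunkReader, readMinimum -> readMinimumSources). All class and method signatures below are illustrative assumptions, not code from the attached patches.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: names follow the renaming discussed above.
class StripedChunkReader {
  private final String sourceDataNode;
  StripedChunkReader(String sourceDataNode) { this.sourceDataNode = sourceDataNode; }
  // Pretend to read one chunk/cell from this single source datanode.
  int readChunk(int toReconstructLen) { return toReconstructLen; }
  String source() { return sourceDataNode; }
}

class StripedReader {
  // One chunk reader per source datanode in the block group.
  private final List<StripedChunkReader> chunkReaders = new ArrayList<>();
  private final int minRequiredSources;

  StripedReader(List<String> sources, int minRequiredSources) {
    for (String s : sources) { chunkReaders.add(new StripedChunkReader(s)); }
    this.minRequiredSources = minRequiredSources;
  }

  // The renamed readMinimumSources(): read from the minimum number of
  // source datanodes needed to reconstruct toReconstructLen bytes.
  int readMinimumSources(int toReconstructLen) {
    int sourcesRead = 0;
    int bytes = 0;
    for (StripedChunkReader r : chunkReaders) {
      if (sourcesRead == minRequiredSources) { break; }
      bytes += r.readChunk(toReconstructLen);
      sourcesRead++;
    }
    return bytes;
  }
}
{code}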
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188345#comment-15188345 ] Colin Patrick McCabe commented on HDFS-9427: Thanks, [~xiaobingo]. Looks great. > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch, HDFS-9427.001.patch, > HDFS-9427.002.patch, HDFS-9427.003.patch > > > HDFS defaults to ephemeral ports for some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188327#comment-15188327 ] Arpit Agarwal commented on HDFS-9901: - Hi [~hualiu], thanks for taking this up. Would you consider posting a short description of your approach to help with the code review? > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed the heartbeat thread hangs on the checkBlock method, > which checks the existence and length of a block before spinning off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hung for more > than 10 minutes, so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and could > potentially have brought them down. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
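For readers unfamiliar with the problem, the sketch below illustrates the general idea of HDFS-9901 under assumed, hypothetical names (BlockTransferService, checkBlock, doTransfer): the existence/length validation runs on the worker thread that performs the transfer, so a slow disk cannot stall the heartbeat thread. It is not the actual DataNode code or the attached patch.

{code:java}
import java.io.File;
import java.io.IOException;

// Minimal sketch, assuming hypothetical names: validation plus transfer are
// handed to a worker thread instead of being done on the heartbeat thread.
class BlockTransferService {

  void transferBlockAsync(final File blockFile, final long expectedLength) {
    Thread worker = new Thread(() -> {
      try {
        // Validation now happens here, off the heartbeat thread.
        checkBlock(blockFile, expectedLength);
        doTransfer(blockFile);
      } catch (IOException e) {
        System.err.println("Skipping transfer: " + e.getMessage());
      }
    }, "block-transfer-" + blockFile.getName());
    worker.start();
  }

  private void checkBlock(File blockFile, long expectedLength) throws IOException {
    if (!blockFile.exists()) {
      throw new IOException("Block file does not exist: " + blockFile);
    }
    if (blockFile.length() != expectedLength) {
      throw new IOException("Unexpected length for " + blockFile);
    }
  }

  private void doTransfer(File blockFile) {
    // Placeholder for the actual data transfer to the target datanode.
  }
}
{code}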
[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client
[ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188323#comment-15188323 ] Andrew Wang commented on HDFS-3702: --- Thanks for the clarification, Eddy. Sounds good; just some nitty doc things before the final commit, otherwise +1 pending: * ClientProtocol and AddBlockFlags, the javadoc still talks about block allocation flags and BlockManager, but really these are just generic AddBlock flags. Currently we only use them to pass to BPPDefault, but in the future the flags could be used for anything. * Same comment applies to the name of the variable {{allocFlags}}; rename to {{addBlockFlags}} to be more generic? > Add an option for NOT writing the blocks locally if there is a datanode on > the same box as the client > - > > Key: HDFS-3702 > URL: https://issues.apache.org/jira/browse/HDFS-3702 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.5.1 >Reporter: Nicolas Liochon >Assignee: Lei (Eddy) Xu >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, > HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, > HDFS-3702.005.patch, HDFS-3702.006.patch > > > This is useful for Write-Ahead-Logs: these files are written for recovery > only, and are not read when there are no failures. > Taking HBase as an example, these files will be read only if the process that > wrote them (the 'HBase regionserver') dies. This will likely come from a > hardware failure, hence the corresponding datanode will be dead as well. So > we're writing 3 replicas, but in reality only 2 of them are really useful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
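As a hedged illustration of the "generic AddBlock flags" point above, the following self-contained Java sketch shows a flag enum carried on an addBlock-style request. The type names and the NO_LOCAL_WRITE value are assumptions for illustration, not necessarily the exact names used in the patch.

{code:java}
import java.util.EnumSet;

// Illustrative only: a generic flag set passed along with an addBlock request.
enum AddBlockFlag {
  NO_LOCAL_WRITE  // ask the namenode not to place a replica on the writer's node
}

class AddBlockRequest {
  private final String src;
  private final EnumSet<AddBlockFlag> addBlockFlags;

  AddBlockRequest(String src, EnumSet<AddBlockFlag> addBlockFlags) {
    this.src = src;
    this.addBlockFlags = addBlockFlags;
  }

  String getSrc() { return src; }

  boolean avoidLocalNode() {
    return addBlockFlags.contains(AddBlockFlag.NO_LOCAL_WRITE);
  }
}

class AddBlockFlagExample {
  public static void main(String[] args) {
    AddBlockRequest req =
        new AddBlockRequest("/hbase/WALs/wal-1", EnumSet.of(AddBlockFlag.NO_LOCAL_WRITE));
    System.out.println(req.getSrc() + " avoid local node? " + req.avoidLocalNode());
  }
}
{code}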
[jira] [Updated] (HDFS-9926) ozone : Add volume commands to CLI
[ https://issues.apache.org/jira/browse/HDFS-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9926: --- Attachment: HDFS-9926-HDFS-7240.002.patch fix findbug issues. > ozone : Add volume commands to CLI > -- > > Key: HDFS-9926 > URL: https://issues.apache.org/jira/browse/HDFS-9926 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-9926-HDFS-7240.001.patch, > HDFS-9926-HDFS-7240.002.patch > > > Adds a cli tool which supports volume commands -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9179) fs.defaultFS should not be used on the server side
[ https://issues.apache.org/jira/browse/HDFS-9179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188302#comment-15188302 ] Daniel Templeton commented on HDFS-9179: The justification was that the NN service URI configuration is more complicated than makes sense. First we look for a servicerpc-address. Failing at that, we look for an rpc-address. Failing at that we fall back to the defaultFS. And things just get worse with HA. I also find it awkward that we set the server's address according to a client-side configuration setting. This param gets used in the scenario where the server can't figure out who it's supposed to be (no proper servicerpc-address or rpc-address), so it falls back to whom it thinks the client thinks it should be. That seems off to me. I grant that those are fairly weak reasons to do something that's going to cause lots of things to break, but it would make the configuration logic cleaner. I did mark it as an incompatible change. :) [~atm], anything you want to add? > fs.defaultFS should not be used on the server side > -- > > Key: HDFS-9179 > URL: https://issues.apache.org/jira/browse/HDFS-9179 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.7.1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > > Currently the namenode will bind to the address given by defaultFS if no > rpc-address is given. That behavior is an evolutionary artifact and should > be removed. Instead, the rpc-address should be a required setting for the > server side configuration. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
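A minimal sketch of the lookup order described above, using a plain Map in place of Hadoop's Configuration. The property names mirror the usual dfs.namenode.servicerpc-address / dfs.namenode.rpc-address / fs.defaultFS keys, but the code itself is illustrative only.

{code:java}
import java.util.Map;

class NameNodeAddressResolver {
  static String resolveRpcAddress(Map<String, String> conf) {
    String addr = conf.get("dfs.namenode.servicerpc-address");
    if (addr == null) {
      addr = conf.get("dfs.namenode.rpc-address");
    }
    if (addr == null) {
      // The fallback HDFS-9179 proposes removing: a client-side setting
      // ends up deciding where the server binds.
      addr = conf.get("fs.defaultFS");
    }
    return addr;
  }
}
{code}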
[jira] [Commented] (HDFS-9926) ozone : Add volume commands to CLI
[ https://issues.apache.org/jira/browse/HDFS-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188255#comment-15188255 ] Hadoop QA commented on HDFS-9926: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 31s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 45s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 43s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 52s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 6 new + 1 unchanged - 0 fixed = 7 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 48s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 39s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 15s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 234m 37s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Format string should use %n rather than n in org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CommandLine) At CreateVolumeHandler.java:rather than n in org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CommandLine)
[jira] [Updated] (HDFS-9932) libhdfs++: find a URI parsing library
[ https://issues.apache.org/jira/browse/HDFS-9932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9932: - Attachment: HDFS-9932.HDFS-8707.000.patch Pulled in uriparser2 from https://github.com/bnoordhuis/uriparser2 Code is MIT / BSD licensed. > libhdfs++: find a URI parsing library > - > > Key: HDFS-9932 > URL: https://issues.apache.org/jira/browse/HDFS-9932 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9932.HDFS-8707.000.patch > > > The URI parsing implementation in HDFS-9556 using regex requires gcc 4.9+, > which seems a bit too steep at the moment. Find some code to parse URIs so > we don't have to roll our own. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9932) libhdfs++: find a URI parsing library
[ https://issues.apache.org/jira/browse/HDFS-9932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9932: - Description: The URI parsing implementation in HDFS-9556 using regex requires gcc 4.9+, which seems a bit too steep at the moment. Find some code to parse URIs so we don't have to roll our own. (was: The URI parsing using regex requires gcc 4.9+, which seems a bit too steep at the moment. Find some code to parse URIs so we don't have to roll our own.) > libhdfs++: find a URI parsing library > - > > Key: HDFS-9932 > URL: https://issues.apache.org/jira/browse/HDFS-9932 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > > The URI parsing implementation in HDFS-9556 using regex requires gcc 4.9+, > which seems a bit too steep at the moment. Find some code to parse URIs so > we don't have to roll our own. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9932) libhdfs++: find a URI parsing library
Bob Hansen created HDFS-9932: Summary: libhdfs++: find a URI parsing library Key: HDFS-9932 URL: https://issues.apache.org/jira/browse/HDFS-9932 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Bob Hansen Assignee: Bob Hansen The URI parsing using regex requires gcc 4.9+, which seems a bit too steep at the moment. Find some code to parse URIs so we don't have to roll our own. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9703) DiskBalancer : getBandwidth implementation
[ https://issues.apache.org/jira/browse/HDFS-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188219#comment-15188219 ] Hadoop QA commented on HDFS-9703: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 11s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s {color} | {color:green} HDFS-1312 passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s {color} | {color:green} HDFS-1312 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s {color} | {color:green} HDFS-1312 passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 39s {color} | {color:green} HDFS-1312 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 2 new + 180 unchanged - 0 fixed = 182 total (was 180) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 32s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 15s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 54s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 43s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 233m 23s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Inconsistent synchronization of org.apache.hadoop.hdfs.server.datanode.DataNode.diskBalancer; locked 50% of time Unsynchronized access at DataNode.java:50% of time Unsynchronized access at DataNode.java:[line 3356] | | JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | |
[jira] [Commented] (HDFS-9925) Ozone: Add Ozone Client lib for bucket handling
[ https://issues.apache.org/jira/browse/HDFS-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188213#comment-15188213 ] Chris Nauroth commented on HDFS-9925: - [~anu], thank you for the update. Patch v002 tested well locally for me. Let's wait for a pre-commit run. > Ozone: Add Ozone Client lib for bucket handling > --- > > Key: HDFS-9925 > URL: https://issues.apache.org/jira/browse/HDFS-9925 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-9925-HDFS-7240.001.patch, > HDFS-9925-HDFS-7240.002.patch > > > Add bucket handling lib code and tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9895) Remove all cached configuration from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Component/s: (was: namenode) > Remove all cached configuration from DataNode > - > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, all cached configurations in DataNode > should be removed for brevity and consistency purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9931) Remove all cached configuration from NameNode
Xiaobing Zhou created HDFS-9931: --- Summary: Remove all cached configuration from NameNode Key: HDFS-9931 URL: https://issues.apache.org/jira/browse/HDFS-9931 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode, namenode Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou Since DataNode inherits ReconfigurableBase with Configured as base class where configuration is maintained, all cached configurations in DataNode should be removed for brevity and consistency purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9895) Remove all cached configuration from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Description: Since DataNode inherits ReconfigurableBase with Configured as base class where configuration is maintained, all cached configurations in DataNode should be removed for brevity and consistency purpose. (was: Since NN/DN inherits ReconfigurableBase with Configured as base class where configuration is maintained, all cached configurations in DataNode should be removed for brevity and consistency purpose.) > Remove all cached configuration from DataNode > - > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, all cached configurations in DataNode > should be removed for brevity and consistency purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9931) Remove all cached configuration from NameNode
[ https://issues.apache.org/jira/browse/HDFS-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9931: Description: Since NameNode inherits ReconfigurableBase with Configured as base class where configuration is maintained, all cached configurations in NameNode should be removed for brevity and consistency purpose. (was: Since DataNode inherits ReconfigurableBase with Configured as base class where configuration is maintained, all cached configurations in DataNode should be removed for brevity and consistency purpose.) > Remove all cached configuration from NameNode > - > > Key: HDFS-9931 > URL: https://issues.apache.org/jira/browse/HDFS-9931 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > Since NameNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, all cached configurations in NameNode > should be removed for brevity and consistency purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9931) Remove all cached configuration from NameNode
[ https://issues.apache.org/jira/browse/HDFS-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9931: Component/s: (was: datanode) > Remove all cached configuration from NameNode > - > > Key: HDFS-9931 > URL: https://issues.apache.org/jira/browse/HDFS-9931 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > Since DataNode inherits ReconfigurableBase with Configured as base class > where configuration is maintained, all cached configurations in DataNode > should be removed for brevity and consistency purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9895) Remove all cached configuration from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Summary: Remove all cached configuration from DataNode (was: Remove all cached configuration from NN/DN) > Remove all cached configuration from DataNode > - > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > Since NN/DN inherits ReconfigurableBase with Configured as base class where > configuration is maintained, all cached configurations in NN/DN should be > removed for brevity and consistency purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9895) Remove all cached configuration from DataNode
[ https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9895: Description: Since NN/DN inherits ReconfigurableBase with Configured as base class where configuration is maintained, all cached configurations in DataNode should be removed for brevity and consistency purpose. (was: Since NN/DN inherits ReconfigurableBase with Configured as base class where configuration is maintained, all cached configurations in NN/DN should be removed for brevity and consistency purpose.) > Remove all cached configuration from DataNode > - > > Key: HDFS-9895 > URL: https://issues.apache.org/jira/browse/HDFS-9895 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > Since NN/DN inherits ReconfigurableBase with Configured as base class where > configuration is maintained, all cached configurations in DataNode should be > removed for brevity and consistency purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client
[ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-3702: Attachment: HDFS-3702.006.patch Thanks much for the quick reviews, [~andrew.wang]. bq. Can we hook into BlockPlacementPolicyDefault the same way as HDFS-4946? As we discussed offline, the sampling logic here is different from that in HDFS-4946. In HDFS-4946, it tries to obtain a local node first; if that does not succeed, it randomly picks from the entire DN pool. So the chosen DN can still be the local node. However, this patch requires the chosen DN to come _exclusively_ from the rest of the DN pool. So the HDFS-4946 logic does not apply here, mostly due to its fallback code. Updated the patch to address the rest of the comments. > Add an option for NOT writing the blocks locally if there is a datanode on > the same box as the client > - > > Key: HDFS-3702 > URL: https://issues.apache.org/jira/browse/HDFS-3702 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.5.1 >Reporter: Nicolas Liochon >Assignee: Lei (Eddy) Xu >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, > HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, > HDFS-3702.005.patch, HDFS-3702.006.patch > > > This is useful for Write-Ahead-Logs: these files are written for recovery > only, and are not read when there are no failures. > Taking HBase as an example, these files will be read only if the process that > wrote them (the 'HBase regionserver') dies. This will likely come from a > hardware failure, hence the corresponding datanode will be dead as well. So > we're writing 3 replicas, but in reality only 2 of them are really useful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
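To illustrate the distinction Eddy draws above, here is a hypothetical Java sketch: with the exclusive behavior, the client-local node is removed from the candidate set outright, rather than merely tried first with a random fallback as in HDFS-4946. Names and structure are assumptions, not the patch's BlockPlacementPolicy code.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative only: the local node can never be chosen, with no fallback.
class ChooseTargetSketch {
  private static final Random RAND = new Random();

  static String chooseExcludingLocal(List<String> allDataNodes, String localNode) {
    List<String> candidates = new ArrayList<>(allDataNodes);
    candidates.remove(localNode);   // exclusive: local node is excluded outright
    if (candidates.isEmpty()) {
      return null;                  // caller decides how to handle "no remote DN"
    }
    return candidates.get(RAND.nextInt(candidates.size()));
  }
}
{code}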
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188123#comment-15188123 ] Xiaobing Zhou commented on HDFS-9427: - Thanks everyone for the feedback. I posted V003 with the mapping:
{noformat}
Namenode ports
---
50070 --> 9803
50470 --> 9804

Secondary NN ports
---
50090 --> 9805
50091 --> 9806

backup NN ports
---
50100 --> 9807
50105 --> 9808

Datanode ports
---
50010 --> 9809
50020 --> 9810
50075 --> 9811
50475 --> 9812
{noformat}
9803-9874 is a wider unassigned range; any upcoming ports can fit into this range. > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch, HDFS-9427.001.patch, > HDFS-9427.002.patch, HDFS-9427.003.patch > > > HDFS defaults to ephemeral ports for some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
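For operators concerned about the incompatible change, a deployment can pin the old well-known ports explicitly so that whichever new defaults the final patch settles on do not move its endpoints. The sketch below uses the standard configuration keys with the current (pre-change) default ports; it is an illustration, not part of the patch.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Pin the legacy ports explicitly so a change of built-in defaults does not
// silently move the cluster's endpoints. Values shown are today's defaults.
public class PinLegacyHdfsPorts {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.namenode.http-address", "0.0.0.0:50070");
    conf.set("dfs.namenode.https-address", "0.0.0.0:50470");
    conf.set("dfs.namenode.secondary.http-address", "0.0.0.0:50090");
    conf.set("dfs.datanode.address", "0.0.0.0:50010");
    conf.set("dfs.datanode.ipc.address", "0.0.0.0:50020");
    conf.set("dfs.datanode.http.address", "0.0.0.0:50075");
    conf.set("dfs.datanode.https.address", "0.0.0.0:50475");
    System.out.println("namenode http = " + conf.get("dfs.namenode.http-address"));
  }
}
{code}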
[jira] [Updated] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9427: Attachment: HDFS-9427.003.patch > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch, HDFS-9427.001.patch, > HDFS-9427.002.patch, HDFS-9427.003.patch > > > HDFS defaults to ephemeral ports for some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9874) Long living DataXceiver threads cause volume shutdown to block.
[ https://issues.apache.org/jira/browse/HDFS-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-9874: - Status: Patch Available (was: Open) Thanks Daryn for the valuable comments. {quote} The synchronization on FSDatasetImpl#stopAllDataxceiverThreads is a bit concerning. Stopping xceiver threads uses a default timeout of 1min. That's a long time for the DN to block if threads don't exit immediately. {quote} Addressed the issue by interrupting the BlockReceiver thread. {quote} The iteration of replicas might not be safe. The correct locking model isn't immediately clear but ReplicaMap#replicas has the comment which other code doesn't appear to follow: {quote} Since all the calls to ReplicaMap#replicas are synchronized on FsDatasetImpl class, I did the same way. {quote} For the test, I'd assert the volume actually has a non-zero ref count before trying to interrupt. Instead of triggering an async check and sleeping, which inevitable creates flaky race conditions, the disk check should be invoked non-async. Should verify that the client stream fails after the volume is failed. {quote} That's a good suggestion to write good test cases. Thanks a lot. Addressed all the comments in this section. Please review the revised patch. > Long living DataXceiver threads cause volume shutdown to block. > --- > > Key: HDFS-9874 > URL: https://issues.apache.org/jira/browse/HDFS-9874 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Critical > Attachments: HDFS-9874-trunk-1.patch, HDFS-9874-trunk.patch > > > One of the failed volume shutdown took 3 days to complete. > Below are the relevant datanode logs while shutting down a volume (due to > disk failure) > {noformat} > 2016-02-21 10:12:55,333 [Thread-49277] WARN impl.FsDatasetImpl: Removing > failed volume volumeA/current: > org.apache.hadoop.util.DiskChecker$DiskErrorException: Directory is not > writable: volumeA/current/BP-1788428031-nnIp-1351700107344/current/finalized > at > org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:194) > at > org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174) > at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.checkDirs(BlockPoolSlice.java:308) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.checkDirs(FsVolumeImpl.java:786) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:242) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:2011) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:3145) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.access$800(DataNode.java:243) > at > org.apache.hadoop.hdfs.server.datanode.DataNode$7.run(DataNode.java:3178) > at java.lang.Thread.run(Thread.java:745) > 2016-02-21 10:12:55,334 [Thread-49277] INFO datanode.BlockScanner: Removing > scanner for volume volumeA (StorageID DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23) > 2016-02-21 10:12:55,334 [VolumeScannerThread(volumeA)] INFO > datanode.VolumeScanner: VolumeScanner(volumeA, > DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23) exiting. 
> 2016-02-21 10:12:55,335 [VolumeScannerThread(volumeA)] WARN > datanode.VolumeScanner: VolumeScanner(volumeA, > DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23): error saving > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl@4169ad8b. > java.io.FileNotFoundException: > volumeA/current/BP-1788428031-nnIp-1351700107344/scanner.cursor.tmp > (Read-only file system) > at java.io.FileOutputStream.open(Native Method) > at java.io.FileOutputStream.(FileOutputStream.java:213) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl.save(FsVolumeImpl.java:669) > at > org.apache.hadoop.hdfs.server.datanode.VolumeScanner.saveBlockIterator(VolumeScanner.java:314) > at > org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633) > 2016-02-24 16:05:53,285 [Thread-49277] WARN impl.FsDatasetImpl: Failed to > delete old dfsUsed file in > volumeA/current/BP-1788428031-nnIp-1351700107344/current > 2016-02-24 16:05:53,286 [Thread-49277] WARN impl.FsDatasetImpl: Failed to > write dfsUsed to > volumeA/current/BP-1788428031-nnIp-1351700107344/current/dfsUsed > java.io.FileNotFoundException: >
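A hedged sketch of the interrupt-based approach discussed above: when a volume fails, the xceiver/BlockReceiver threads writing to it are interrupted and joined with a bounded timeout, so volume shutdown cannot block for days behind stuck IO. Class and method names are illustrative, not the actual FsDatasetImpl code.

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative only: interrupt writers on a failed volume and wait a bounded time.
class FailedVolumeStopper {
  private final List<Thread> xceiverThreads = new CopyOnWriteArrayList<>();

  void register(Thread xceiver) {
    xceiverThreads.add(xceiver);
  }

  void stopAllXceiverThreads(long joinTimeoutMs) {
    for (Thread t : xceiverThreads) {
      t.interrupt();                 // wake threads stuck in IO or waits
    }
    for (Thread t : xceiverThreads) {
      try {
        t.join(joinTimeoutMs);       // bounded wait so shutdown cannot hang indefinitely
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }
}
{code}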
[jira] [Updated] (HDFS-9874) Long living DataXceiver threads cause volume shutdown to block.
[ https://issues.apache.org/jira/browse/HDFS-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-9874: - Attachment: HDFS-9874-trunk-1.patch > Long living DataXceiver threads cause volume shutdown to block. > --- > > Key: HDFS-9874 > URL: https://issues.apache.org/jira/browse/HDFS-9874 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Critical > Attachments: HDFS-9874-trunk-1.patch, HDFS-9874-trunk.patch > > > One of the failed volume shutdown took 3 days to complete. > Below are the relevant datanode logs while shutting down a volume (due to > disk failure) > {noformat} > 2016-02-21 10:12:55,333 [Thread-49277] WARN impl.FsDatasetImpl: Removing > failed volume volumeA/current: > org.apache.hadoop.util.DiskChecker$DiskErrorException: Directory is not > writable: volumeA/current/BP-1788428031-nnIp-1351700107344/current/finalized > at > org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:194) > at > org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174) > at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.checkDirs(BlockPoolSlice.java:308) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.checkDirs(FsVolumeImpl.java:786) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:242) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:2011) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:3145) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.access$800(DataNode.java:243) > at > org.apache.hadoop.hdfs.server.datanode.DataNode$7.run(DataNode.java:3178) > at java.lang.Thread.run(Thread.java:745) > 2016-02-21 10:12:55,334 [Thread-49277] INFO datanode.BlockScanner: Removing > scanner for volume volumeA (StorageID DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23) > 2016-02-21 10:12:55,334 [VolumeScannerThread(volumeA)] INFO > datanode.VolumeScanner: VolumeScanner(volumeA, > DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23) exiting. > 2016-02-21 10:12:55,335 [VolumeScannerThread(volumeA)] WARN > datanode.VolumeScanner: VolumeScanner(volumeA, > DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23): error saving > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl@4169ad8b. 
> java.io.FileNotFoundException: > volumeA/current/BP-1788428031-nnIp-1351700107344/scanner.cursor.tmp > (Read-only file system) > at java.io.FileOutputStream.open(Native Method) > at java.io.FileOutputStream.(FileOutputStream.java:213) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl.save(FsVolumeImpl.java:669) > at > org.apache.hadoop.hdfs.server.datanode.VolumeScanner.saveBlockIterator(VolumeScanner.java:314) > at > org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633) > 2016-02-24 16:05:53,285 [Thread-49277] WARN impl.FsDatasetImpl: Failed to > delete old dfsUsed file in > volumeA/current/BP-1788428031-nnIp-1351700107344/current > 2016-02-24 16:05:53,286 [Thread-49277] WARN impl.FsDatasetImpl: Failed to > write dfsUsed to > volumeA/current/BP-1788428031-nnIp-1351700107344/current/dfsUsed > java.io.FileNotFoundException: > volumeA/current/BP-1788428031-nnIp-1351700107344/current/dfsUsed (Read-only > file system) > at java.io.FileOutputStream.open(Native Method) > at java.io.FileOutputStream.(FileOutputStream.java:213) > at java.io.FileOutputStream.(FileOutputStream.java:162) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed(BlockPoolSlice.java:247) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.shutdown(BlockPoolSlice.java:698) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.shutdown(FsVolumeImpl.java:815) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.removeVolume(FsVolumeList.java:328) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:250) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:2011) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:3145) >
[jira] [Updated] (HDFS-9874) Long living DataXceiver threads cause volume shutdown to block.
[ https://issues.apache.org/jira/browse/HDFS-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-9874: - Status: Open (was: Patch Available) Cancelling the patch to address Daryn's comment. > Long living DataXceiver threads cause volume shutdown to block. > --- > > Key: HDFS-9874 > URL: https://issues.apache.org/jira/browse/HDFS-9874 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.0 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Critical > Attachments: HDFS-9874-trunk.patch > > > One of the failed volume shutdown took 3 days to complete. > Below are the relevant datanode logs while shutting down a volume (due to > disk failure) > {noformat} > 2016-02-21 10:12:55,333 [Thread-49277] WARN impl.FsDatasetImpl: Removing > failed volume volumeA/current: > org.apache.hadoop.util.DiskChecker$DiskErrorException: Directory is not > writable: volumeA/current/BP-1788428031-nnIp-1351700107344/current/finalized > at > org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:194) > at > org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174) > at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.checkDirs(BlockPoolSlice.java:308) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.checkDirs(FsVolumeImpl.java:786) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:242) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:2011) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:3145) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.access$800(DataNode.java:243) > at > org.apache.hadoop.hdfs.server.datanode.DataNode$7.run(DataNode.java:3178) > at java.lang.Thread.run(Thread.java:745) > 2016-02-21 10:12:55,334 [Thread-49277] INFO datanode.BlockScanner: Removing > scanner for volume volumeA (StorageID DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23) > 2016-02-21 10:12:55,334 [VolumeScannerThread(volumeA)] INFO > datanode.VolumeScanner: VolumeScanner(volumeA, > DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23) exiting. > 2016-02-21 10:12:55,335 [VolumeScannerThread(volumeA)] WARN > datanode.VolumeScanner: VolumeScanner(volumeA, > DS-cd2ea223-bab3-4361-a567-5f3f27a5dd23): error saving > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl@4169ad8b. 
> java.io.FileNotFoundException: > volumeA/current/BP-1788428031-nnIp-1351700107344/scanner.cursor.tmp > (Read-only file system) > at java.io.FileOutputStream.open(Native Method) > at java.io.FileOutputStream.(FileOutputStream.java:213) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl.save(FsVolumeImpl.java:669) > at > org.apache.hadoop.hdfs.server.datanode.VolumeScanner.saveBlockIterator(VolumeScanner.java:314) > at > org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633) > 2016-02-24 16:05:53,285 [Thread-49277] WARN impl.FsDatasetImpl: Failed to > delete old dfsUsed file in > volumeA/current/BP-1788428031-nnIp-1351700107344/current > 2016-02-24 16:05:53,286 [Thread-49277] WARN impl.FsDatasetImpl: Failed to > write dfsUsed to > volumeA/current/BP-1788428031-nnIp-1351700107344/current/dfsUsed > java.io.FileNotFoundException: > volumeA/current/BP-1788428031-nnIp-1351700107344/current/dfsUsed (Read-only > file system) > at java.io.FileOutputStream.open(Native Method) > at java.io.FileOutputStream.(FileOutputStream.java:213) > at java.io.FileOutputStream.(FileOutputStream.java:162) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed(BlockPoolSlice.java:247) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.shutdown(BlockPoolSlice.java:698) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.shutdown(FsVolumeImpl.java:815) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.removeVolume(FsVolumeList.java:328) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:250) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:2011) > at >
[jira] [Updated] (HDFS-9925) Ozone: Add Ozone Client lib for bucket handling
[ https://issues.apache.org/jira/browse/HDFS-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9925: --- Attachment: HDFS-9925-HDFS-7240.002.patch [~cnauroth] Thanks for the code review and comments. bq. but I am seeing failures in TestBuckets Fixed. You are right: the OzoneAcls class needed a constructor, and the list operation was returning an unmodifiable list while the local test path was trying to remove a bucket and re-add it. > Ozone: Add Ozone Client lib for bucket handling > --- > > Key: HDFS-9925 > URL: https://issues.apache.org/jira/browse/HDFS-9925 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-9925-HDFS-7240.001.patch, > HDFS-9925-HDFS-7240.002.patch > > > Add bucket handling lib code and tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
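The unmodifiable-list pitfall mentioned in the comment above is easy to reproduce with plain JDK collections; the snippet below is a generic illustration (not the actual Ozone client code) of why removing from a listing result fails and how copying it first avoids the problem.
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableListDemo {
  public static void main(String[] args) {
    List<String> backing = new ArrayList<>();
    backing.add("bucket-a");
    backing.add("bucket-b");

    // A list operation that hands out an unmodifiable view.
    List<String> listed = Collections.unmodifiableList(backing);

    try {
      listed.remove("bucket-a");   // throws UnsupportedOperationException
    } catch (UnsupportedOperationException e) {
      System.out.println("cannot mutate an unmodifiable list");
    }

    // Fix in test code: copy before mutating, then re-add as needed.
    List<String> mutable = new ArrayList<>(listed);
    mutable.remove("bucket-a");
    mutable.add("bucket-a");
  }
}
{code}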
[jira] [Updated] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hua Liu updated HDFS-9901: -- Status: Patch Available (was: In Progress) > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed the heartbeat thread hangs on the checkBlock method, > which checks the existence and length of a block before spinning off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hung for more > than 10 minutes, so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and could > potentially bring them down. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
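To make the proposed change concrete, the sketch below shows the general idea of deferring the disk-bound existence/length check into the transfer thread instead of the heartbeat thread. The names are hypothetical and do not correspond to the actual DataNode code or to the attached patch.
{code}
import java.io.IOException;

// Hypothetical sketch: keep slow disk checks off the heartbeat path.
class TransferDispatchSketch {

  // Before: the heartbeat thread validates the block, then spawns the transfer.
  void handleTransferBlocking(String blockId) throws IOException {
    checkBlockOnDisk(blockId);                    // disk I/O on the heartbeat thread
    new Thread(() -> doTransfer(blockId)).start();
  }

  // After: the heartbeat thread only spawns the worker; the worker validates.
  void handleTransferAsync(String blockId) {
    new Thread(() -> {
      try {
        checkBlockOnDisk(blockId);                // disk I/O off the heartbeat thread
        doTransfer(blockId);
      } catch (IOException e) {
        // log and skip this replica; the heartbeat stays responsive
      }
    }).start();
  }

  private void checkBlockOnDisk(String blockId) throws IOException { /* stat + length */ }

  private void doTransfer(String blockId) { /* stream the replica to its target */ }
}
{code}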
[jira] [Updated] (HDFS-9901) Move block validation out of the heartbeat thread
[ https://issues.apache.org/jira/browse/HDFS-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hua Liu updated HDFS-9901: -- Attachment: 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > Move block validation out of the heartbeat thread > - > > Key: HDFS-9901 > URL: https://issues.apache.org/jira/browse/HDFS-9901 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Hua Liu >Assignee: Hua Liu > Attachments: > 0001-HDFS-9901-Move-block-validation-out-of-the-heartbeat.patch > > > During heavy disk IO, we noticed the heartbeat thread hangs on the checkBlock method, > which checks the existence and length of a block before spinning off a thread to > do the actual transferring. In extreme cases, the heartbeat thread hung for more > than 10 minutes, so the namenode marked the datanode as dead and started > replicating its blocks, which caused more disk IO on other nodes and could > potentially bring them down. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9780) RollingFileSystemSink doesn't work on secure clusters
[ https://issues.apache.org/jira/browse/HDFS-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15188037#comment-15188037 ] Daniel Templeton commented on HDFS-9780: Hey, [~kasha], ping! > RollingFileSystemSink doesn't work on secure clusters > - > > Key: HDFS-9780 > URL: https://issues.apache.org/jira/browse/HDFS-9780 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, > HADOOP-12775.003.patch, HDFS-9780.004.patch, HDFS-9780.005.patch, > HDFS-9780.006.patch, HDFS-9780.006.patch, HDFS-9780.007.patch, > HDFS-9780.008.patch > > > If HDFS has kerberos enabled, the sink cannot write its logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client
[ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187937#comment-15187937 ] Andrew Wang commented on HDFS-3702: --- Hey Eddy, thanks for reworking; a few comments: * Is BlockPlacementFlag being used in hadoop-common? Seems like it should go in hadoop-hdfs instead. * Can we name BlockPlacementFlag "AddBlockFlag" instead? That's more future-proof, since it doesn't restrict us to just BPP-related flags. * Can we hook into BlockPlacementPolicyDefault the same way as HDFS-4946? i.e. where the {{preferLocalNode}} boolean is used. It'd be good to implement these two features the same way, though it does require threading the state all the way down. * Nit: ClientProtocol "advice" -> "advise", though this might change after renaming to AddBlockFlag. > Add an option for NOT writing the blocks locally if there is a datanode on > the same box as the client > - > > Key: HDFS-3702 > URL: https://issues.apache.org/jira/browse/HDFS-3702 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.5.1 >Reporter: Nicolas Liochon >Assignee: Lei (Eddy) Xu >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, > HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, > HDFS-3702.005.patch > > > This is useful for Write-Ahead-Logs: these files are written for recovery > only, and are not read when there are no failures. > Taking HBase as an example, these files will be read only if the process that > wrote them (the 'HBase regionserver') dies. This will likely come from a > hardware failure, hence the corresponding datanode will be dead as well. So > we're writing 3 replicas, but in reality only 2 of them are really useful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
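As a rough illustration of the renamed-flag idea discussed above (hypothetical names, not the final API committed for this issue), an enum-set style hint lets the client ask the placement policy to skip the writer's local node without widening the method signature for every future flag:
{code}
import java.util.EnumSet;

// Hypothetical AddBlockFlag-style hint threaded from the client down to the
// block placement policy; names are invented for illustration.
enum AddBlockFlagSketch {
  NO_LOCAL_WRITE   // do not place the first replica on the writer's node
}

class PlacementSketch {
  // Consult the flag the same way a preferLocalNode-style boolean would be
  // consulted in a default placement policy.
  String chooseFirstTarget(String writerHost, EnumSet<AddBlockFlagSketch> flags) {
    boolean avoidLocal =
        flags != null && flags.contains(AddBlockFlagSketch.NO_LOCAL_WRITE);
    if (avoidLocal) {
      return chooseRemoteNode(writerHost);
    }
    return writerHost;             // default: prefer the local DataNode
  }

  private String chooseRemoteNode(String excludeHost) {
    return "some-other-datanode";  // placeholder for the real selection logic
  }
}
{code}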
[jira] [Commented] (HDFS-9118) Add logging system for libdhfs++
[ https://issues.apache.org/jira/browse/HDFS-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187915#comment-15187915 ] Bob Hansen commented on HDFS-9118: -- Great. If we can, let's implement the plugins for the C-API glue logger and the stream/stderr logger. If we want to be super-cool, we could include example plug-ins for Google logging (but we don't want to have a dependency here). What data types do we need to support in the first pass? I would say all the fundamental types http://en.cppreference.com/w/cpp/language/types and special handling for const char * and const std::string &. I think we should be able to have a fallback << foo that calls std::to_string(foo), which covers many cases. > Add logging system for libdhfs++ > > > Key: HDFS-9118 > URL: https://issues.apache.org/jira/browse/HDFS-9118 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Affects Versions: HDFS-8707 >Reporter: Bob Hansen >Assignee: James Clampffer > Attachments: HDFS-9118.HDFS-8707.000.patch, > HDFS-9118.HDFS-8707.001.patch > > > With HDFS-9505, we've started logging data from libhdfs++. Consumers of the > library are going to have their own logging infrastructure that we're going > to want to provide data to. > libhdfs++ should have a logging library that: > * Is overridable and can provide sufficient information to work well with > common C++ logging frameworks > * Has a rational default implementation > * Is performant -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187886#comment-15187886 ] Tsz Wo Nicholas Sze commented on HDFS-9924: --- > ... It seems like you should be able to comfortably dispatch that many > operations from a few thousand client threads ... As mentioned in the description, it is inefficient if a client needs to create a large number of threads to invoke the calls. Indeed, there is a limit of the number of threads in a JVM. It is wasting resource to create threads and use them for waiting. > [umbrella] Asynchronous HDFS Access > --- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > > This is an umbrella JIRA for supporting Asynchronous HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls. > We propose adding a new API to support asynchronous calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
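A minimal sketch of how the proposed Future-returning API could look from the caller's side; the AsyncFs interface and rename signature below are assumptions for illustration, not a committed HDFS interface.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;

// Hypothetical usage sketch: issue many independent calls from one thread,
// then collect results, instead of blocking on each call in turn.
class AsyncUsageSketch {

  interface AsyncFs {
    Future<Boolean> rename(String src, String dst);   // assumed async method
  }

  void renameMany(AsyncFs fs, List<String> files) throws Exception {
    List<Future<Boolean>> pending = new ArrayList<>();
    for (String f : files) {
      pending.add(fs.rename(f, f + ".done"));          // returns immediately
    }
    for (Future<Boolean> p : pending) {
      p.get();                                         // wait once, at the end
    }
  }
}
{code}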
[jira] [Commented] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread
[ https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187861#comment-15187861 ] Arun Suresh commented on HDFS-9405: --- Thanks for the patch [~xiaochen]; gave it a fly-by. One suggestion is to maybe retry the warmup if the first attempt fails (since the provider might be started up later than the NN): in the {{EDEKCacheLoader#run}} method, in the event of an IOException, sleep and retry a couple of times? > When starting a file, NameNode should generate EDEK in a separate thread > > > Key: HDFS-9405 > URL: https://issues.apache.org/jira/browse/HDFS-9405 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, namenode >Affects Versions: 2.7.1 >Reporter: Zhe Zhang >Assignee: Xiao Chen > Attachments: HDFS-9405.01.patch > > > {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation > to the key provider, which could be slow or cause a timeout. It should be done > in a separate thread so as to return a proper error message to the RPC caller. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
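The retry suggestion above might look roughly like the following inside a cache-warmup runnable; this is a sketch only, and the retry count, sleep interval, and helper method are assumptions rather than the actual EDEKCacheLoader internals.
{code}
import java.io.IOException;

// Hypothetical sketch of retrying EDEK cache warmup, since the key provider
// may come up later than the NameNode.
class EdekWarmupSketch implements Runnable {
  private static final int MAX_ATTEMPTS = 3;
  private static final long RETRY_SLEEP_MS = 5000;

  @Override
  public void run() {
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
        warmUpEdekCache();             // non-trivial I/O to the key provider
        return;                        // success
      } catch (IOException e) {
        if (attempt == MAX_ATTEMPTS) {
          return;                      // give up; fall back to on-demand generation
        }
        try {
          Thread.sleep(RETRY_SLEEP_MS);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    }
  }

  private void warmUpEdekCache() throws IOException { /* call the key provider */ }
}
{code}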
[jira] [Updated] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client
[ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-3702: Attachment: HDFS-3702.005.patch Fixed the relevant checkstyle and javadoc warnings. Thanks a lot for the comments, [~nkeywal]. Glad to help out. > Add an option for NOT writing the blocks locally if there is a datanode on > the same box as the client > - > > Key: HDFS-3702 > URL: https://issues.apache.org/jira/browse/HDFS-3702 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.5.1 >Reporter: Nicolas Liochon >Assignee: Lei (Eddy) Xu >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, > HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, > HDFS-3702.005.patch > > > This is useful for Write-Ahead-Logs: these files are written for recovery > only, and are not read when there are no failures. > Taking HBase as an example, these files will be read only if the process that > wrote them (the 'HBase regionserver') dies. This will likely come from a > hardware failure, hence the corresponding datanode will be dead as well. So > we're writing 3 replicas, but in reality only 2 of them are really useful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9118) Add logging system for libdhfs++
[ https://issues.apache.org/jira/browse/HDFS-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187815#comment-15187815 ] James Clampffer commented on HDFS-9118: --- Thanks for the feedback [~bobthansen], I hadn't thought about some of these things. {quote} Add a TRACE level for finer than Debug. My general mapping is ERROR: Something went wrong that is catastrophic (invalid internal states, etc.) WARN: Something went wrong that you'll always want to know about, but doesn't halt operations (recoverable errors, messages out of order, etc.) INFO: Here's a high-level list of what's going on with libhdfs++ (opening and closing files, retrying operations, etc.) DEBUG: Here's a low-level stream of libhdfs++ operations (operation by operation) TRACE: Here's a packet-by-packet stream of libhdfs++ operations (function entry/exit, each packet sent, etc.) {quote} I'll do this; right now there aren't a whole lot of messages that I'd consider trace level, but I'll go through and push anything down that looks like it's more detail than needed for typical debugging. Either way I'll add the appropriate stuff to handle a trace level. {quote} Add a LogEnabled(level, component)-->bool function that can be checked before constructing the LogMessage. This allows client code to circumvent expensive logging operations. Make it pluggable so that we can hook it into the current Google logging settings, etc. {quote} Yea, it looks like I can have the macro instantiate a dummy object if logging isn't enabled. I was thinking of putting this off until I had a chance to do some decent profiling, but it doesn't look hard to extend the macros. This still ends up evaluating any function calls in the log, but at least gets rid of the stringstream allocations and the LogManager::Write calls (which grab a lock). I'll look into having the LogManager delegate to a plugin. Depending on how large the patch starts getting I might save that for later. {quote} Logging should be disabled by default. This is a low-level library; it shouldn't spew if not asked to {quote} Good point. It was getting annoying to look through all the logs for the tests I'm writing. {quote} Can we make the LogMessage a real ostream? That way consumers could do LogMessage(...) << hex << my_pointer rather than LogHelpers::PointerToHex(...). It's a bit more idiomatic, but I don't want to create a lot more work. {quote} It looks like implementing a new ostream correctly involves a fair amount of work (according to Stack Overflow). For now I think I can just overload operator<<(void*) to deal with hex, and put in an operator<<(const char *) to specialize for C-style strings. Does that sound like a decent solution? {quote} Rather than require the consumers to call hdfsFreeLogData, always have the library free the object. If the consumer wants to retain the data, they can copy the structure. We could provide an hdfsCopyLogData method if we wanted to be very nice. {quote} This sounds like a good improvement; I assume most people just want to get the info to stick into their own log systems on the C side. This will also help cut back on short-lived heap allocations.
> Add logging system for libdhfs++ > > > Key: HDFS-9118 > URL: https://issues.apache.org/jira/browse/HDFS-9118 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Affects Versions: HDFS-8707 >Reporter: Bob Hansen >Assignee: James Clampffer > Attachments: HDFS-9118.HDFS-8707.000.patch, > HDFS-9118.HDFS-8707.001.patch > > > With HDFS-9505, we've started logging data from libhdfs++. Consumers of the > library are going to have their own logging infrastructure that we're going > to want to provide data to. > libhdfs++ should have a logging library that: > * Is overridable and can provide sufficient information to work well with > common C++ logging frameworks > * Has a rational default implementation > * Is performant -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8786) Erasure coding: use simple replication for internal blocks on decommissioning datanodes
[ https://issues.apache.org/jira/browse/HDFS-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187807#comment-15187807 ] Rakesh R commented on HDFS-8786: Thanks [~jingzhao] for the great support in resolving this. > Erasure coding: use simple replication for internal blocks on decommissioning > datanodes > --- > > Key: HDFS-8786 > URL: https://issues.apache.org/jira/browse/HDFS-8786 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Zhe Zhang >Assignee: Rakesh R > Fix For: 3.0.0 > > Attachments: HDFS-8786-001.patch, HDFS-8786-002.patch, > HDFS-8786-003.patch, HDFS-8786-004.patch, HDFS-8786-005.patch, > HDFS-8786-006.patch, HDFS-8786-draft.patch > > > Per [discussion | > https://issues.apache.org/jira/browse/HDFS-8697?focusedCommentId=14609004=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14609004] > under HDFS-8697, it's too expensive to reconstruct block groups for decomm > purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9792) libhdfs++: EACCES not setting errno correctly
[ https://issues.apache.org/jira/browse/HDFS-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9792: - Resolution: Fixed Status: Resolved (was: Patch Available) > libhdfs++: EACCES not setting errno correctly > - > > Key: HDFS-9792 > URL: https://issues.apache.org/jira/browse/HDFS-9792 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9792.HDFS-8707.000.patch, > HDFS-9792.HDFS-8707.000.patch > > > When libhdfs++ gets a permissions error, it is failing to initialize errnum. > Due to changes passing in the night, the code in hdfs.cc that reads > {code} > case Status::Code::kPermissionDenied: > if (!stat.ToString().empty()) > ReportError(EACCES, stat.ToString().c_str()); > else > ReportError(EACCES, "Permission denied"); > break; > {code} > should read > {code} > case Status::Code::kPermissionDenied: > errnum = EACCES; > default_message = "Permission denied"; > break; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9927) Document the new OIV ReverseXML processor
[ https://issues.apache.org/jira/browse/HDFS-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9927: -- Attachment: HDFS-9927.001.patch Added ReverseXML processor, and also changed a few out-dated pieces. > Document the new OIV ReverseXML processor > - > > Key: HDFS-9927 > URL: https://issues.apache.org/jira/browse/HDFS-9927 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 2.9.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: documentation, supportability > Attachments: HDFS-9927.001.patch > > > HDFS-9835 added a new ReverseXML processor which reconstructs an fsimage from > an XML file. > This new feature should be documented, and perhaps label it as "experimental" > in command line. > Also, OIV section in HDFSCommands.md should be updated too, to include new > processors options and it should also include links to OIV page. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9792) libhdfs++: EACCES not setting errno correctly
[ https://issues.apache.org/jira/browse/HDFS-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187763#comment-15187763 ] Bob Hansen commented on HDFS-9792: -- Committed with 8da3bbd. Thanks for the review, [~James Clampffer]. > libhdfs++: EACCES not setting errno correctly > - > > Key: HDFS-9792 > URL: https://issues.apache.org/jira/browse/HDFS-9792 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9792.HDFS-8707.000.patch, > HDFS-9792.HDFS-8707.000.patch > > > When libhdfs++ gets a permissions error, it is failing to initialize errnum. > Due to changes passing in the night, the code in hdfs.cc that reads > {code} > case Status::Code::kPermissionDenied: > if (!stat.ToString().empty()) > ReportError(EACCES, stat.ToString().c_str()); > else > ReportError(EACCES, "Permission denied"); > break; > {code} > should read > {code} > case Status::Code::kPermissionDenied: > errnum = EACCES; > default_message = "Permission denied"; > break; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187755#comment-15187755 ] Colin Patrick McCabe commented on HDFS-9924: Currently the NameNode can handle between 10k and 100k operations per second, depending on configuration and the nature of the operations. It seems like you should be able to comfortably dispatch that many operations from a few thousand client threads performing synchronous RPC calls... bearing in mind that each operation will take a few milliseconds on average. This is assuming that you want to consume all the available NN RPC bandwidth from a single client node. Perhaps I'm missing something, but I don't see how async operations will improve performance here. The overhead of a few thousand threads on the client is small, and certainly not what is limiting HDFS performance. Rather, performance is limited by considerations like the locking on the NameNode, Java garbage collections on the NameNode, and serialization/deserialization overheads. Please keep in mind that you don't need async operations to reuse connections and sockets... we do that already via mechanisms like the {{PeerCache}} (formerly {{SocketCache}}). Clearly, Hive can also dispatch operations in parallel using standard mechanisms like an Executor or ThreadPool. I certainly don't object to implementing this, but if the goal is better performance, I think you are going to be disappointed. Perhaps I have missed something, though... I'm curious if there are reasons for implementing this that I have not considered. > [umbrella] Asynchronous HDFS Access > --- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > > This is an umbrella JIRA for supporting Asynchronous HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls. > We propose adding a new API to support asynchronous calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
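The thread-pool dispatch described in the comment above can be illustrated with plain JDK facilities; this is a generic sketch, not HDFS-specific code, and the pool size is arbitrary.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Generic sketch: parallelize blocking RPC-style calls with an ExecutorService
// rather than an asynchronous client API.
class ThreadPoolDispatchSketch {
  static void dispatch(List<Callable<Void>> blockingOps) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(256);  // arbitrary size
    try {
      List<Future<Void>> results = new ArrayList<>();
      for (Callable<Void> op : blockingOps) {
        results.add(pool.submit(op));   // each op blocks in its own worker thread
      }
      for (Future<Void> r : results) {
        r.get();                        // wait for completion, propagate failures
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}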
[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187742#comment-15187742 ] Bob Hansen commented on HDFS-9924: -- Dandy. I wanted to capture the use case and leave it up to you fine, smart people to come up with a great solution. I look forward to seeing your progress and stealing your best ideas. > [umbrella] Asynchronous HDFS Access > --- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > > This is an umbrella JIRA for supporting Asynchronous HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls. > We propose adding a new API to support asynchronous calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187732#comment-15187732 ] Tsz Wo Nicholas Sze commented on HDFS-9924: --- Thanks [~bobhansen], supporting callbacks is a good idea. We may allow users to register callbacks when they make an async call. For example, we may support [ListenableFuture|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/util/concurrent/ListenableFuture.html]. Let me file it as a subtask. > [umbrella] Asynchronous HDFS Access > --- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > > This is an umbrella JIRA for supporting Asynchronous HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls. > We propose adding a new API to support asynchronous calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
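For reference, the callback style mentioned above looks like this with Guava's ListenableFuture; this is generic Guava usage, not an HDFS API, and the submitted task is just a stand-in for whatever async method eventually gets exposed.
{code}
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.concurrent.Executors;

// Generic Guava sketch: register a callback instead of blocking on get().
class ListenableFutureSketch {
  static void demo() {
    ListeningExecutorService exec =
        MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());

    // Stand-in for a future returned by an async HDFS call.
    ListenableFuture<Boolean> result = exec.submit(() -> Boolean.TRUE);

    Futures.addCallback(result, new FutureCallback<Boolean>() {
      @Override
      public void onSuccess(Boolean ok) {
        System.out.println("call finished: " + ok);
      }

      @Override
      public void onFailure(Throwable t) {
        System.err.println("call failed: " + t);
      }
    }, MoreExecutors.directExecutor());

    exec.shutdown();
  }
}
{code}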
[jira] [Updated] (HDFS-9926) ozone : Add volume commands to CLI
[ https://issues.apache.org/jira/browse/HDFS-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9926: --- Attachment: HDFS-9926-HDFS-7240.001.patch > ozone : Add volume commands to CLI > -- > > Key: HDFS-9926 > URL: https://issues.apache.org/jira/browse/HDFS-9926 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-9926-HDFS-7240.001.patch > > > Adds a cli tool which supports volume commands -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9926) ozone : Add volume commands to CLI
[ https://issues.apache.org/jira/browse/HDFS-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9926: --- Status: Patch Available (was: Open) > ozone : Add volume commands to CLI > -- > > Key: HDFS-9926 > URL: https://issues.apache.org/jira/browse/HDFS-9926 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-9926-HDFS-7240.001.patch > > > Adds a cli tool which supports volume commands -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9703) DiskBalancer : getBandwidth implementation
[ https://issues.apache.org/jira/browse/HDFS-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187632#comment-15187632 ] Arpit Agarwal commented on HDFS-9703: - +1 thanks [~anu]. > DiskBalancer : getBandwidth implementation > -- > > Key: HDFS-9703 > URL: https://issues.apache.org/jira/browse/HDFS-9703 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9703-HDFS-1312.001.patch, > HDFS-9703-HDFS-1312.002.patch > > > Add getBandwidth call -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9792) libhdfs++: EACCES not setting errno correctly
[ https://issues.apache.org/jira/browse/HDFS-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15187634#comment-15187634 ] James Clampffer commented on HDFS-9792: --- Just looked at this again, disregard my last comment. For some reason I thought the C API could end up with two different permission denied error strings. +1 > libhdfs++: EACCES not setting errno correctly > - > > Key: HDFS-9792 > URL: https://issues.apache.org/jira/browse/HDFS-9792 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9792.HDFS-8707.000.patch, > HDFS-9792.HDFS-8707.000.patch > > > When libhdfs++ gets a permissions error, it is failing to initialize errnum. > Due to changes passing in the night, the code in hdfs.cc that reads > {code} > case Status::Code::kPermissionDenied: > if (!stat.ToString().empty()) > ReportError(EACCES, stat.ToString().c_str()); > else > ReportError(EACCES, "Permission denied"); > break; > {code} > should read > {code} > case Status::Code::kPermissionDenied: > errnum = EACCES; > default_message = "Permission denied"; > break; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)