[jira] [Commented] (HDFS-10683) Make class Token$PrivateToken private
[ https://issues.apache.org/jira/browse/HDFS-10683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531839#comment-15531839 ] Wei-Chiu Chuang commented on HDFS-10683:
+1

> Make class Token$PrivateToken private
> -
> Key: HDFS-10683
> URL: https://issues.apache.org/jira/browse/HDFS-10683
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 2.9.0
> Reporter: John Zhuge
> Assignee: John Zhuge
> Priority: Minor
> Labels: fs, ha, security, security_token
> Attachments: HDFS-10683.001.patch, HDFS-10683.002.patch
>
> Avoid {{instanceof}} or typecasting of {{Token.PrivateToken}} by introducing
> an interface method in {{Token}}. Make class {{Token.PrivateToken}} private.
> Use a factory method instead.

-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
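The +1 above is on the idea of replacing {{instanceof}} checks with an interface method plus a factory. A rough self-contained sketch of that pattern follows; this is not the actual HDFS-10683 patch, and the names ({{isPrivate}}, {{privateClone}}) are illustrative only:

```java
// Sketch of the HDFS-10683 idea: callers ask the token itself whether it is
// private instead of using instanceof/casts, and the private subclass is
// hidden behind a factory method. Hypothetical names, not the real patch.
import java.util.Arrays;

public class Token {
    private final byte[] identifier;

    public Token(byte[] identifier) {
        this.identifier = identifier.clone();
    }

    /** Interface method replacing 'tok instanceof PrivateToken' checks. */
    public boolean isPrivate() {
        return false;
    }

    /** Factory method: the only way to obtain the private variant. */
    public Token privateClone() {
        return new PrivateToken(identifier);
    }

    public byte[] getIdentifier() {
        return identifier.clone();
    }

    /** Private subclass: no longer visible to callers. */
    private static class PrivateToken extends Token {
        PrivateToken(byte[] identifier) {
            super(identifier);
        }

        @Override
        public boolean isPrivate() {
            return true;
        }
    }

    public static void main(String[] args) {
        Token publicTok = new Token(new byte[] {1, 2, 3});
        Token privateTok = publicTok.privateClone();
        // No instanceof or typecast needed at the call site:
        System.out.println("public isPrivate=" + publicTok.isPrivate());
        System.out.println("private isPrivate=" + privateTok.isPrivate());
        System.out.println("same identifier="
            + Arrays.equals(publicTok.getIdentifier(), privateTok.getIdentifier()));
    }
}
```

The call sites then branch on `tok.isPrivate()` and never need to see the subclass at all, which is what lets it become private.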
[jira] [Commented] (HDFS-10910) HDFS Erasure Coding doc should state its currently supported erasure coding policies
[ https://issues.apache.org/jira/browse/HDFS-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531781#comment-15531781 ] SammiChen commented on HDFS-10910:
Thanks Yiqun for updating the patch. The v2 patch looks good to me.

> HDFS Erasure Coding doc should state its currently supported erasure coding policies
>
> Key: HDFS-10910
> URL: https://issues.apache.org/jira/browse/HDFS-10910
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: documentation, erasure-coding
> Affects Versions: 3.0.0-alpha1
> Reporter: Wei-Chiu Chuang
> Assignee: Yiqun Lin
> Attachments: HDFS-10910.001.patch, HDFS-10910.002.patch
>
> While the HDFS Erasure Coding doc states a variety of possible combinations of
> algorithms, block group size and cell size, the code (as of 3.0.0-alpha1)
> allows only three policies: RS_6_3_SCHEMA, RS_3_2_SCHEMA and
> RS_6_3_LEGACY_SCHEMA, all with the default cell size. I think this should be
> documented.
[jira] [Commented] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI
[ https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531709#comment-15531709 ] Hadoop QA commented on HDFS-10918:
*-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 48s | trunk passed |
| +1 | compile | 6m 47s | trunk passed |
| +1 | checkstyle | 1m 26s | trunk passed |
| +1 | mvnsite | 2m 21s | trunk passed |
| +1 | mvneclipse | 0m 37s | trunk passed |
| +1 | findbugs | 4m 23s | trunk passed |
| +1 | javadoc | 2m 0s | trunk passed |
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 52s | the patch passed |
| +1 | compile | 6m 45s | the patch passed |
| +1 | javac | 6m 45s | the patch passed |
| +1 | checkstyle | 1m 26s | the patch passed |
| +1 | mvnsite | 2m 17s | the patch passed |
| +1 | mvneclipse | 0m 37s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 4m 45s | the patch passed |
| +1 | javadoc | 2m 2s | the patch passed |
| +1 | unit | 7m 43s | hadoop-common in the patch passed. |
| +1 | unit | 0m 57s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 60m 16s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 115m 7s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestCryptoAdminCLI |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10918 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830831/HDFS-10918.03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 9f910d43f786 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 47f8092 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit |
[jira] [Commented] (HDFS-10922) Adding additional unit tests for Trash
[ https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531670#comment-15531670 ] Weiwei Yang commented on HDFS-10922:
Hello [~xyao], I recently worked on some JIRAs about Trash; I can work on this ticket if you don't mind.

> Adding additional unit tests for Trash
> --
> Key: HDFS-10922
> URL: https://issues.apache.org/jira/browse/HDFS-10922
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: test
> Reporter: Xiaoyu Yao
>
> This ticket is opened to track adding unit tests for Trash.
[jira] [Commented] (HDFS-10923) Make InstrumentedLock require ReentrantLock
[ https://issues.apache.org/jira/browse/HDFS-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531580#comment-15531580 ] Hadoop QA commented on HDFS-10923:
*-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 6m 48s | trunk passed |
| +1 | compile | 0m 44s | trunk passed |
| +1 | checkstyle | 0m 26s | trunk passed |
| +1 | mvnsite | 0m 50s | trunk passed |
| +1 | mvneclipse | 0m 12s | trunk passed |
| +1 | findbugs | 1m 39s | trunk passed |
| +1 | javadoc | 0m 55s | trunk passed |
| +1 | mvninstall | 0m 45s | the patch passed |
| +1 | compile | 0m 42s | the patch passed |
| +1 | javac | 0m 42s | the patch passed |
| -0 | checkstyle | 0m 24s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 82 unchanged - 0 fixed = 87 total (was 82) |
| +1 | mvnsite | 0m 48s | the patch passed |
| +1 | mvneclipse | 0m 9s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | findbugs | 1m 48s | the patch passed |
| +1 | javadoc | 0m 53s | the patch passed |
| -1 | unit | 64m 11s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 83m 9s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10923 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830829/HDFS-10923.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux f6e64a1ae5db 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 47f8092 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16918/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/16918/artifact/patchprocess/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16918/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16918/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16918/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531554#comment-15531554 ] Hadoop QA commented on HDFS-10913:
*-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 7m 40s | trunk passed |
| +1 | compile | 0m 51s | trunk passed |
| +1 | checkstyle | 0m 31s | trunk passed |
| +1 | mvnsite | 0m 59s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 49s | trunk passed |
| +1 | javadoc | 0m 55s | trunk passed |
| +1 | mvninstall | 0m 54s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| -0 | checkstyle | 0m 24s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 74 unchanged - 1 fixed = 77 total (was 75) |
| +1 | mvnsite | 0m 58s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 3s | the patch passed |
| +1 | javadoc | 0m 58s | the patch passed |
| -1 | unit | 64m 39s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 85m 59s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| | hadoop.hdfs.server.datanode.TestFsDatasetCache |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10913 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830824/HDFS-10913.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 225ac8ffc967 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 47f8092 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16917/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16917/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16917/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16917/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Refactor BlockReceiver by introducing faults injector to enhance testability
> of detecting slow mirrors
>
[jira] [Commented] (HDFS-10629) Federation Router
[ https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531543#comment-15531543 ] Ming Ma commented on HDFS-10629:
Sorry to [~elgoiri] and [~jakace] for the late reply. Yes, let's take JMX out of this patch. Here are some comments.
* FederationNamenodeServiceState#EXPIRED isn't used. Is it needed?
* Is FederationNamenodeServiceState really needed, e.g. could HAServiceState work?
* Do we need NamenodeStatusReport? Per https://issues.apache.org/jira/browse/HDFS-10467?focusedCommentId=15382535=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15382535, this might be required only for the rebalancer scenario. We can add it later when necessary.
* Router#getLocationsForPath has the comment "Get the NN RPC client", but does it actually return an RPC client? It seems to return the remote location.
* It seems better exception handling will be addressed in other jiras. For example, RetriableException can be thrown by an active NN and should be handled properly.
* Router.java has several mismatches between parameter names in the javadoc and the actual names.
* ConnectionPool has a cleanup task to reduce the connections, but if there are lots of users, there will be lots of ConnectionPool objects. Maybe not a major issue, but I wonder if someone could launch an attack on the router. Alternatively, we could make the connection max size global across the different pools. Have you checked whether Server#ConnectionManager can be used instead?
* Is DFSUtil#getNamenodeWebAddr required for this jira? If yes, support for https is needed.
* What is the use case of blockpool-based lookup? For example, updateBlockForPipeline and some other methods call invokeMethod based on the blockpool. I wonder if they can use the version used by many other methods.
* reportBadBlocks mentions DatanodeProtocol in the comment. That can be removed, given the Router only serves as a proxy between the client and the NN.
* Given ActiveNamenodeLocator and the other interfaces are likely to change, maybe mark them as @InterfaceStability.Evolving; also, given they aren't used by applications, @InterfaceAudience.Private seems more appropriate.

> Federation Router
> -
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs
> Reporter: Inigo Goiri
> Assignee: Jason Kace
> Attachments: HDFS-10629-HDFS-10467-002.patch, HDFS-10629-HDFS-10467-003.patch, HDFS-10629.000.patch, HDFS-10629.001.patch
>
> Component that routes calls from the clients to the right Namespace. It implements {{ClientProtocol}}.
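The global-cap suggestion in the ConnectionPool comment above could look roughly like the following self-contained sketch: per-user pools draw from one shared budget, so no single user can exhaust the router. Class and method names here are made up for illustration; this is not the actual Router code:

```java
// Hypothetical sketch: per-user connection accounting behind one global
// Semaphore, so the sum of all pools is capped rather than each pool alone.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class GlobalCappedPools {
    // Global budget shared by every per-user pool.
    private final Semaphore globalBudget;
    private final Map<String, AtomicInteger> perUserCounts = new ConcurrentHashMap<>();

    public GlobalCappedPools(int globalMax) {
        this.globalBudget = new Semaphore(globalMax);
    }

    /** Try to open a connection for 'user'; false once the global cap is hit. */
    public boolean tryAcquire(String user) {
        if (!globalBudget.tryAcquire()) {
            return false;
        }
        perUserCounts.computeIfAbsent(user, u -> new AtomicInteger()).incrementAndGet();
        return true;
    }

    /** Return a connection to the shared budget. */
    public void release(String user) {
        AtomicInteger c = perUserCounts.get(user);
        if (c != null && c.get() > 0) {
            c.decrementAndGet();
            globalBudget.release();
        }
    }

    public int available() {
        return globalBudget.availablePermits();
    }

    public static void main(String[] args) {
        GlobalCappedPools pools = new GlobalCappedPools(2);
        System.out.println("alice=" + pools.tryAcquire("alice"));
        System.out.println("bob=" + pools.tryAcquire("bob"));
        System.out.println("carol=" + pools.tryAcquire("carol")); // cap reached
        pools.release("alice");
        System.out.println("available=" + pools.available());
    }
}
```

A single shared Semaphore is the simplest way to express "max connections across different pools"; the per-user counters only exist so releases can be validated per user.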
[jira] [Updated] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI
[ https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-10918:
Attachment: HDFS-10918.03.patch

I see, sorry, that was a stupid question; thanks a lot for explaining. Patch 3 is attached to fix that, rebased against the latest trunk.

> Add a tool to get FileEncryptionInfo from CLI
> -
> Key: HDFS-10918
> URL: https://issues.apache.org/jira/browse/HDFS-10918
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: encryption
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HDFS-10918.01.patch, HDFS-10918.02.patch, HDFS-10918.03.patch
[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531447#comment-15531447 ] Arpit Agarwal commented on HDFS-9668: - Hi [~jingcheng...@intel.com], You can probably split out the read-write lock wrappers and instrumentation into a separate Hadoop common Jira. The DataNode changes can be kept in this Jira. # InstrumentedReadLock needs to be fixed to use a thread-local, see similar work done by [~xkrogen] in HDFS-10817. You won't need a ThreadLocal to instrument WriteLock as it is exclusive. # We should see if we can cut down on the number of new lock classes. I can help you with that part if you want to make it a separate Jira. > Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, > HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, > HDFS-9668-2.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, HDFS-9668-5.patch, > HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, HDFS-9668-9.patch, > execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in > SSD/RAMDISK, and all other files are stored in HDD), we observe many > long-time BLOCKED threads on FsDatasetImpl in DataNode. 
The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread > t@93336 >java.lang.Thread.State: BLOCKED > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:) > - waiting to lock <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread > t@93335 >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createFileExclusively(Native Method) > at java.io.File.createNewFile(File.java:1012) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286) > at > 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140) > - locked <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > {noformat} > We measured the execution of some operations in FsDatasetImpl during the > test. Here following is the result. > !execution_time.png! > The operations of finalizeBlock, addBlock and createRbw on HDD in a heavy > load take a really long time. > It means one slow
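The InstrumentedReadLock point in the review above hinges on the read lock being shared: concurrent readers cannot share a single acquire-timestamp field, so each thread needs its own, via a ThreadLocal. A minimal self-contained sketch of that direction (the class and threshold here are illustrative, not the actual HDFS-10817 code):

```java
// Sketch: per-thread timing of a shared read lock. Each reader records its
// own acquire time in a ThreadLocal; the outermost acquire/release pair is
// detected with getReadHoldCount(). Names are hypothetical.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class InstrumentedReadLockSketch {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    // A plain field would be trampled by concurrent readers; ThreadLocal isn't.
    private final ThreadLocal<Long> readAcquireNanos = new ThreadLocal<>();
    private final long thresholdNanos;

    public InstrumentedReadLockSketch(long thresholdNanos) {
        this.thresholdNanos = thresholdNanos;
    }

    public void readLock() {
        rwLock.readLock().lock();
        if (rwLock.getReadHoldCount() == 1) { // start the clock on the outermost acquire only
            readAcquireNanos.set(System.nanoTime());
        }
    }

    /** Returns true if the outermost release found the lock held too long. */
    public boolean readUnlock() {
        boolean slow = false;
        if (rwLock.getReadHoldCount() == 1) { // report on the outermost release only
            long held = System.nanoTime() - readAcquireNanos.get();
            slow = held >= thresholdNanos;
        }
        rwLock.readLock().unlock();
        return slow;
    }

    public static void main(String[] args) throws InterruptedException {
        InstrumentedReadLockSketch lock = new InstrumentedReadLockSketch(1_000_000); // 1 ms
        lock.readLock();
        lock.readLock();                                   // nested acquire: timer not reset
        Thread.sleep(5);
        System.out.println("inner slow=" + lock.readUnlock()); // nested release: no report
        System.out.println("outer slow=" + lock.readUnlock()); // outermost: >= 5 ms held
    }
}
```

The write lock needs none of this, as the comment notes: it is exclusive, so one plain field suffices for its timestamp.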
[jira] [Updated] (HDFS-10923) Make InstrumentedLock require ReentrantLock
[ https://issues.apache.org/jira/browse/HDFS-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-10923:
Status: Patch Available (was: Open)

> Make InstrumentedLock require ReentrantLock
> ---
> Key: HDFS-10923
> URL: https://issues.apache.org/jira/browse/HDFS-10923
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Attachments: HDFS-10923.01.patch
>
> Make InstrumentedLock use ReentrantLock instead of Lock, so nested
> acquire/release calls can be instrumented correctly.
[jira] [Updated] (HDFS-10923) Make InstrumentedLock require ReentrantLock
[ https://issues.apache.org/jira/browse/HDFS-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-10923:
Attachment: HDFS-10923.01.patch

The patch also renames InstrumentedLock to InstrumentedReentrantLock.

> Make InstrumentedLock require ReentrantLock
> ---
> Key: HDFS-10923
> URL: https://issues.apache.org/jira/browse/HDFS-10923
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Attachments: HDFS-10923.01.patch
>
> Make InstrumentedLock use ReentrantLock instead of Lock, so nested
> acquire/release calls can be instrumented correctly.
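The nested acquire/release problem the issue describes is what ReentrantLock#getHoldCount() addresses: timing should start on the outermost acquire and be reported on the outermost release, which the bare Lock interface cannot express. A minimal self-contained sketch of the idea (not the actual InstrumentedReentrantLock from the patch; the names are illustrative):

```java
// Sketch: why requiring ReentrantLock (not Lock) matters for instrumentation.
// getHoldCount() lets nested acquires share one timing window instead of
// resetting or double-reporting it. Hypothetical names, not the real patch.
import java.util.concurrent.locks.ReentrantLock;

public class InstrumentedLockSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private long acquireNanos;
    private final long thresholdNanos;

    public InstrumentedLockSketch(long thresholdNanos) {
        this.thresholdNanos = thresholdNanos;
    }

    public void lock() {
        lock.lock();
        if (lock.getHoldCount() == 1) {  // start the clock on the outermost acquire only
            acquireNanos = System.nanoTime();
        }
    }

    /** Returns true if the outermost release found the lock held too long. */
    public boolean unlock() {
        boolean slow = false;
        if (lock.getHoldCount() == 1) {  // report on the outermost release only
            long held = System.nanoTime() - acquireNanos;
            slow = held >= thresholdNanos;
        }
        lock.unlock();
        return slow;
    }

    public static void main(String[] args) throws InterruptedException {
        InstrumentedLockSketch l = new InstrumentedLockSketch(1_000_000); // 1 ms threshold
        l.lock();
        l.lock();                                     // nested acquire: clock not restarted
        Thread.sleep(5);
        System.out.println("inner slow=" + l.unlock()); // not outermost: no report
        System.out.println("outer slow=" + l.unlock()); // outermost: >= 5 ms held
    }
}
```

With only a Lock, there is no portable way to ask "is this the outermost hold?", which is presumably why the patch tightens the required type.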
[jira] [Commented] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice
[ https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531432#comment-15531432 ] Jing Zhao commented on HDFS-10797:
[~mackrorysd], I agree it would be great to have a consistent and user-friendly semantics. To me, a better semantics would be this: if the renamed source (which is inside some snapshot) and the renamed target are both under the same directory being counted, we count them once; otherwise they are counted separately. With this semantics, maybe we only need to move your hashset into the context object passed from the beginning of the counting call, and use it to avoid duplicate counting. What do you think?

> Disk usage summary of snapshots causes renamed blocks to get counted twice
> --
> Key: HDFS-10797
> URL: https://issues.apache.org/jira/browse/HDFS-10797
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Sean Mackrory
> Assignee: Sean Mackrory
> Attachments: HDFS-10797.001.patch, HDFS-10797.002.patch, HDFS-10797.003.patch
>
> DirectoryWithSnapshotFeature.computeContentSummary4Snapshot calculates how
> much disk usage is used by a snapshot by tallying up the files in the
> snapshot that have since been deleted (that way it won't overlap with regular
> files whose disk usage is computed separately). However, that is determined
> from a diff that shows moved (to Trash or otherwise) or renamed files as a
> deletion and a creation operation that may overlap with the list of blocks.
> Only the deletion operation is taken into consideration, and this causes
> those blocks to get represented twice in the disk usage tallying.
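The proposed semantics amounts to threading one set of already-counted block ids through the entire summary computation. A toy self-contained sketch of that dedup idea follows; all names are made up, and the real code walks INodes and snapshot diffs rather than id/size pairs:

```java
// Sketch: a context object carrying the set of counted block ids, so a block
// that appears both in the live tree and in a snapshot diff (e.g. because of
// a rename) is charged only once. Illustrative names, not the actual patch.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SnapshotDuSketch {
    /** Passed from the root of the counting call through every subtree. */
    static class SummaryContext {
        final Set<Long> countedBlocks = new HashSet<>();
        long totalBytes;

        void count(long blockId, long bytes) {
            if (countedBlocks.add(blockId)) {  // add() is false if already counted
                totalBytes += bytes;
            }
        }
    }

    public static void main(String[] args) {
        SummaryContext ctx = new SummaryContext();
        // Live file tree: blocks 1 and 2 ({id, size} pairs).
        List<long[]> liveBlocks = Arrays.asList(new long[]{1, 100}, new long[]{2, 200});
        // Snapshot diff: block 2 appears again because the file was renamed
        // (a delete + create in the diff), plus a truly deleted block 3.
        List<long[]> snapshotBlocks = Arrays.asList(new long[]{2, 200}, new long[]{3, 50});

        for (long[] b : liveBlocks) ctx.count(b[0], b[1]);
        for (long[] b : snapshotBlocks) ctx.count(b[0], b[1]);

        // Without the shared set this would be 550; block 2 would be counted twice.
        System.out.println("total=" + ctx.totalBytes);
    }
}
```

Moving the set into a context created once at the top of the call is what makes the dedup hold across the live tree and every snapshot diff, rather than within a single subtree.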
[jira] [Moved] (HDFS-10923) Make InstrumentedLock require ReentrantLock
[ https://issues.apache.org/jira/browse/HDFS-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal moved HADOOP-13668 to HDFS-10923:
Key: HDFS-10923 (was: HADOOP-13668)
Project: Hadoop HDFS (was: Hadoop Common)

> Make InstrumentedLock require ReentrantLock
> ---
> Key: HDFS-10923
> URL: https://issues.apache.org/jira/browse/HDFS-10923
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
>
> Make InstrumentedLock use ReentrantLock instead of Lock, so nested
> acquire/release calls can be instrumented correctly.
[jira] [Commented] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI
[ https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531405#comment-15531405 ] Hadoop QA commented on HDFS-10918:
*-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 8s | HDFS-10918 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10918 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830817/HDFS-10918.02.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16916/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Add a tool to get FileEncryptionInfo from CLI
> -
> Key: HDFS-10918
> URL: https://issues.apache.org/jira/browse/HDFS-10918
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: encryption
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HDFS-10918.01.patch, HDFS-10918.02.patch
[jira] [Commented] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI
[ https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531394#comment-15531394 ] Andrew Wang commented on HDFS-10918: I think we are still missing handling of relative paths and symlinks. Let's look at getFileStatus:
{code}
public FileStatus getFileStatus(Path f) throws IOException {
  Path absF = fixRelativePart(f); // <--- uses the CWD to turn relative into absolute paths
  return new FileSystemLinkResolver<FileStatus>() { // <--- class to keep following symlinks until path is fully resolved
    @Override
    public FileStatus doCall(final Path p) throws IOException {
      HdfsFileStatus fi = dfs.getFileInfo(getPathName(p));
      if (fi != null) {
        return fi.makeQualified(getUri(), p);
      } else {
        throw new FileNotFoundException("File does not exist: " + p);
      }
    }
    @Override
    public FileStatus next(final FileSystem fs, final Path p) throws IOException {
      return fs.getFileStatus(p);
    }
  }.resolve(this, absF);
}
{code}
Looking at this more closely myself, code sharing is pretty tough because we need that HdfsFileStatus, which is only returned by an HDFS instance. There are other examples of ops that can only resolve symlinks through a DFS instance, e.g. {{renameSnapshot}}, that you can use as an example. > Add a tool to get FileEncryptionInfo from CLI > - > > Key: HDFS-10918 > URL: https://issues.apache.org/jira/browse/HDFS-10918 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-10918.01.patch, HDFS-10918.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531397#comment-15531397 ] Xiaobing Zhou commented on HDFS-10913: -- [~xyao] thank you for reviews. I posted patch v002 with some enhanced tests. > Refactor BlockReceiver by introducing faults injector to enhance testability > of detecting slow mirrors > -- > > Key: HDFS-10913 > URL: https://issues.apache.org/jira/browse/HDFS-10913 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, > HDFS-10913.002.patch > > > BlockReceiver#datanodeSlowLogThresholdMs is used as threshold to detect slow > mirrors. BlockReceiver only writes some warning logs. In order to better test > behaviors of slow mirrors, it necessitates introducing fault injectors. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10913: - Attachment: HDFS-10913.002.patch > Refactor BlockReceiver by introducing faults injector to enhance testability > of detecting slow mirrors > -- > > Key: HDFS-10913 > URL: https://issues.apache.org/jira/browse/HDFS-10913 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, > HDFS-10913.002.patch > > > BlockReceiver#datanodeSlowLogThresholdMs is used as threshold to detect slow > mirrors. BlockReceiver only writes some warning logs. In order to better test > behaviors of slow mirrors, it necessitates introducing fault injectors. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10850) getEZForPath should NOT throw FNF
[ https://issues.apache.org/jira/browse/HDFS-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531377#comment-15531377 ] Andrew Wang commented on HDFS-10850: Agree about throwing a specific rename exception. I poked around our API to try and find precedent for this old behavior of returning null rather than throwing FNF. checkAccess was a possible candidate since we don't require existence when doing write operations (e.g. {{mkdirs}}), but it also throws FNF. Cache directives do not throw FNF, but that's not an API example I'd like to repeat. We should have attached them to the inode. In the name of compatibility, we should revert this from the branch-2s. I'd hope by the time 3.0 rolls around, we've either fixed Hive to call this on parent dirs instead, or better, moved over to "rename falling back to copy on special IOException" as Daryn proposed. [~spena] does this sound reasonable from the Hive side? > getEZForPath should NOT throw FNF > - > > Key: HDFS-10850 > URL: https://issues.apache.org/jira/browse/HDFS-10850 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Assignee: Rakesh R >Priority: Blocker > > HDFS-9433 made an incompatible change to the semantics of getEZForPath. It > used to return the EZ of the closest ancestor path. It never threw FNF. A > common use of getEZForPath is determining if a file can be renamed, or must > be copied due to mismatched EZs. Notably, this has broken hive. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI
[ https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-10918: - Attachment: HDFS-10918.02.patch Thanks for the review and the interesting offline chat Andrew. :) Patch 2 addresses all comments (good catch) except #2: Can't 'cast' to HdfsFileStatus since it's a separate class. So didn't change any more towards code sharing. bq. Symlinks also. I think this handles symlink, since DFSClient will call with {{getFileInfo(src, true)}}. > Add a tool to get FileEncryptionInfo from CLI > - > > Key: HDFS-10918 > URL: https://issues.apache.org/jira/browse/HDFS-10918 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-10918.01.patch, HDFS-10918.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10629) Federation Router
[ https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531269#comment-15531269 ] Inigo Goiri commented on HDFS-10629: [~jakace], we should remove the JMX related code and leave it for another patch. Other than that, I don't think this can be made much smaller without just removing the whole RPC. I think we should try to push the Router with the RPC in this patch. > Federation Router > - > > Key: HDFS-10629 > URL: https://issues.apache.org/jira/browse/HDFS-10629 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Inigo Goiri >Assignee: Jason Kace > Attachments: HDFS-10629-HDFS-10467-002.patch, > HDFS-10629-HDFS-10467-003.patch, HDFS-10629.000.patch, HDFS-10629.001.patch > > > Component that routes calls from the clients to the right Namespace. It > implements {{ClientProtocol}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10897) Ozone: SCM: Add NodeManager
[ https://issues.apache.org/jira/browse/HDFS-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531259#comment-15531259 ] Jing Zhao commented on HDFS-10897: -- Thanks for the reply, Anu. bq. The reason of breaking up these data structures into 3 separate maps is to reduce the single lock contention we seem to run into in the current HDFS. I think we can still avoid lock contention with a single map. Also most stale nodes are temporary and dead nodes may be directly removed. So it may not be very helpful to have separate maps for them. bq. Just want to make sure that we are both on the same page on this one. In the current scheme, we get a heartbeat and insert it into a queue – with no time stamp. Here my concern is that we may need at least two threads for the work done by the current worker. Dead node detection work may need to be separated out and done by another thread (as today's HeartbeatMonitor) considering there may be a lot of following work after a dead node is detected (e.g., triggering re-replication of containers etc.). Putting all the work, including handling heartbeat msgs and scanning all the healthy/stale nodes, into a single loop may ultimately limit the throughput of heartbeat handling. I think currently most of my concerns have been or can be addressed in your future patches. So I'm +1 on the current patch and we can continue the discussion. > Ozone: SCM: Add NodeManager > --- > > Key: HDFS-10897 > URL: https://issues.apache.org/jira/browse/HDFS-10897 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-10897-HDFS-7240.001.patch, > HDFS-10897-HDFS-7240.002.patch, HDFS-10897-HDFS-7240.003.patch > > > Add a nodeManager class that will be used by Storage Controller Manager > eventually. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
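The single-map idea discussed above can be sketched briefly: heartbeats record a per-node timestamp in one concurrent map, and a separate monitor thread derives HEALTHY/STALE/DEAD on demand instead of moving nodes between three locked maps. All names below are illustrative, not the actual Ozone NodeManager API, and the thresholds are assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: one timestamp map instead of three state maps.
// Heartbeat handling is a cheap put with no shared lock; node state is
// computed from the timestamp's age when asked for.
public class NodeStateTracker {
    public enum State { HEALTHY, STALE, DEAD }

    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
    private final long staleMillis;
    private final long deadMillis;

    public NodeStateTracker(long staleMillis, long deadMillis) {
        this.staleMillis = staleMillis;
        this.deadMillis = deadMillis;
    }

    public void heartbeat(String nodeId, long nowMillis) {
        lastHeartbeat.put(nodeId, nowMillis);   // no contention across nodes
    }

    public State classify(String nodeId, long nowMillis) {
        Long last = lastHeartbeat.get(nodeId);
        if (last == null) {
            return State.DEAD;                  // never heard from this node
        }
        long age = nowMillis - last;
        if (age >= deadMillis) return State.DEAD;
        if (age >= staleMillis) return State.STALE;
        return State.HEALTHY;
    }
}
```

A dedicated monitor thread (analogous to today's HeartbeatMonitor) could then periodically scan the map and trigger the heavier follow-up work for nodes it classifies as DEAD, keeping that work off the heartbeat-handling path.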
[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java
[ https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531228#comment-15531228 ] stack commented on HDFS-10690: -- Skimmed. Patch LGTM. Unfortunate we leave behind some perf but agree on avoiding custom data structure unless large benefit. Nice work. > Optimize insertion/removal of replica in ShortCircuitCache.java > --- > > Key: HDFS-10690 > URL: https://issues.apache.org/jira/browse/HDFS-10690 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 3.0.0-alpha2 >Reporter: Fenghua Hu >Assignee: Fenghua Hu > Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch, > HDFS-10690.003.patch, HDFS-10690.004.patch, HDFS-10690.005.patch, > HDFS-10690.006.patch, ShortCircuitCache_LinkedMap.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > Currently in ShortCircuitCache, two TreeMap objects are used to track the > cached replicas. > private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>(); > private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new > TreeMap<>(); > TreeMap employs Red-Black tree for sorting. This isn't an issue when using > traditional HDD. But when using high-performance SSD/PCIe Flash, the cost of > inserting/removing an entry becomes considerable. > To mitigate it, we designed a new list-based structure for replica tracking. > The list is a double-linked FIFO. FIFO is time-based, thus insertion is a > very low cost operation. On the other hand, list is not lookup-friendly. To > address this issue, we introduce two references into the ShortCircuitReplica > object. > ShortCircuitReplica next = null; > ShortCircuitReplica prev = null; > In this way, lookup is not needed when removing a replica from the list. We > only need to modify its predecessor's and successor's references in the lists. > Our tests showed up to 15-50% performance improvement when using PCIe flash > as storage media. 
> The original patch is against 2.6.4, now I am porting to Hadoop trunk, and > patch will be posted soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
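The list-based design described in the issue can be sketched as follows: each element embeds its own prev/next references (an intrusive doubly-linked FIFO), so both appending and removing a known element are O(1) with no tree lookup. The class and field names are illustrative stand-ins for ShortCircuitReplica and its cache, not the actual patch.

```java
// Hypothetical sketch of the intrusive doubly-linked FIFO: removal needs
// only the element itself, because it carries its prev/next references.
public class ReplicaList {
    public static class Replica {
        final long blockId;
        Replica prev, next;       // embedded links, as in the description
        Replica(long blockId) { this.blockId = blockId; }
    }

    private Replica head, tail;   // head = oldest, i.e. first eviction candidate
    private int size;

    public void append(Replica r) {           // O(1) insert at the tail
        r.prev = tail;
        r.next = null;
        if (tail != null) tail.next = r; else head = r;
        tail = r;
        size++;
    }

    public void remove(Replica r) {           // O(1): fix the neighbors only
        if (r.prev != null) r.prev.next = r.next; else head = r.next;
        if (r.next != null) r.next.prev = r.prev; else tail = r.prev;
        r.prev = r.next = null;
        size--;
    }

    public Replica oldest() { return head; }
    public int size() { return size; }
}
```

Compared with a TreeMap, this gives up ordered lookup, which is acceptable here because eviction order is insertion (time) order and removals always start from a replica reference already in hand.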
[jira] [Commented] (HDFS-9390) Block management for maintenance states
[ https://issues.apache.org/jira/browse/HDFS-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531219#comment-15531219 ] Lei (Eddy) Xu commented on HDFS-9390: - Hi, [~mingma] Thanks so much for posting this patch. It looks good to me overall. Some small nits: In {{HeartbeatManager.java#heartbeatCheck()}}: {code} try { dm.removeDeadDatanode(dead, !dead.isMaintenance()); } {code} If we change it to the following code, we can undo most of the {{DatanodeManager.java}} changes, whose motivation is not clear to me at first sight. {code} if (!dead.isMaintenance()) { dm.removeDeadDatanode(dead); } {code} Can you elaborate a little bit more about the following code? {code} } else if (blockManager.getMinReplicationToBeInMaintenance() == 0) { LOG.info("MinReplicationToBeInMaintenance is set to zero. " + node + " is put in maintenance state" + " immediately."); node.setInMaintenance(); } else { stats.subtract(node); node.startMaintenance(); stats.add(node); } {code} Why does it not re-calculate {{stats}} when {{minReplicationToBeInMaintenance == 0}}? In {{DecommissionManager#startMaintenance()}} {code} // hbManager.startDecommission will set dead node to decommissioned. {code} Is the comment correct in the context? One related question: why are {{startMaintenance()}} and {{stopMaintenance()}} in {{DecommissionManager}}? In {{NumberReplicas.java}}, you might want to consider renaming {{int maintenance()}} to {{int maintenanceReplicas}}; the same applies to {{liveEnteringMaintence()}}. Thanks. > Block management for maintenance states > --- > > Key: HDFS-9390 > URL: https://issues.apache.org/jira/browse/HDFS-9390 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-9390-2.patch, HDFS-9390.patch > > > When a node is transitioned to/stay in/transitioned out of maintenance state, > we need to make sure blocks w.r.t. those nodes are properly handled. 
> * When nodes are put into maintenance, it will first go to > ENTERING_MAINTENANCE, and make sure blocks are minimally replicated before > the nodes are transitioned to IN_MAINTENANCE. > * Do not replica blocks when nodes are in maintenance states. Maintenance > replica will remain in BlockMaps and thus is still considered valid from > block replication point of view. In other words, putting a node to > “maintenance” mode won’t trigger BlockManager to replicate its blocks. > * Do not invalidate replicas on node under maintenance. After any file's > replication factor is reduced, NN needs to invalidate some replicas. It > should exclude nodes under maintenance in the handling. > * Do not put IN_MAINTENANCE replicas in LocatedBlock for read operation. > * Do not allocate any new block on nodes under maintenance. > * Have Balancer exclude nodes under maintenance. > * Exclude nodes under maintenance for DN cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
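The ENTERING_MAINTENANCE → IN_MAINTENANCE transition described above can be sketched as a pure check: the node may enter maintenance only once every block it holds is minimally replicated elsewhere, and a minimum of zero means it may enter immediately (the case Eddy asks about in the quoted code). This is a simplified reading of the description; the class and method names are hypothetical, not the patch's API.

```java
import java.util.List;

// Hypothetical sketch of the maintenance transition rule: with a zero
// minimum there is nothing to wait for; otherwise every block must have
// at least minReplicationToBeInMaintenance live replicas on other nodes.
public class MaintenanceCheck {
    public static boolean canEnterMaintenance(List<Integer> liveReplicasPerBlock,
                                              int minReplicationToBeInMaintenance) {
        if (minReplicationToBeInMaintenance == 0) {
            return true;  // IN_MAINTENANCE immediately
        }
        for (int live : liveReplicasPerBlock) {
            if (live < minReplicationToBeInMaintenance) {
                return false;  // this block would drop below the maintenance minimum
            }
        }
        return true;
    }
}
```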
[jira] [Commented] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice
[ https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531199#comment-15531199 ] Sean Mackrory commented on HDFS-10797: -- Thanks for pointing that out [~jingzhao]. I added test cases to address some inter-directory renames. Of course, some of them are broken and still reported the wrong usage. I'd really like to come up with a way for the semantics to be both consistent and unsurprising to a user. I improved the situation somewhat by computing which nodes were deleted (as opposed to renamed) in the context of all the diffs for a directory instead of each diff individually. So it's a step in the right direction but the real fix would be to have some global context when computing usage that ensures each INode in the hierarchy is counted exactly once. It looks to me like that's going to require some refactoring, since although the counts are cumulative, they can accumulate in multiple distinct objects before being combined. We would need to refactor some functions so that all counts are added directly to a single object, and that same object could prevent nodes from being counted twice, once because they were removed from a snapshotted directory, and again because of where they reside now. Thoughts on this approach before I go further? > Disk usage summary of snapshots causes renamed blocks to get counted twice > -- > > Key: HDFS-10797 > URL: https://issues.apache.org/jira/browse/HDFS-10797 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HDFS-10797.001.patch, HDFS-10797.002.patch, > HDFS-10797.003.patch > > > DirectoryWithSnapshotFeature.computeContentSummary4Snapshot calculates how > much disk usage is used by a snapshot by tallying up the files in the > snapshot that have since been deleted (that way it won't overlap with regular > files whose disk usage is computed separately). 
However that is determined > from a diff that shows moved (to Trash or otherwise) or renamed files as a > deletion and a creation operation that may overlap with the list of blocks. > Only the deletion operation is taken into consideration, and this causes > those blocks to get represented twice in the disk usage tallying. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
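The "global context" idea in the comment above can be sketched as a single counting object threaded through the whole traversal, keyed by inode id, so a file reachable both through a snapshot diff (as a deletion) and through its renamed location is charged exactly once. The names are illustrative; the real fix would sit inside the ContentSummary computation, not a standalone class.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: one shared counter with a visited-set of inode ids,
// so the same inode seen via a rename/delete diff pair is counted only once.
public class UsageCounter {
    private final Set<Long> countedInodes = new HashSet<>();
    private long totalBytes;

    /** Returns true if the inode was charged, false if it was already counted. */
    public boolean count(long inodeId, long bytes) {
        if (!countedInodes.add(inodeId)) {
            return false;            // already seen via another diff or the current tree
        }
        totalBytes += bytes;
        return true;
    }

    public long totalBytes() { return totalBytes; }
}
```

This is the opposite of the current structure, where counts accumulate in multiple distinct objects before being combined, which is why no single place can detect the double count.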
[jira] [Commented] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'
[ https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531189#comment-15531189 ] Hudson commented on HDFS-10892: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10509 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10509/]) HDFS-10892. Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'. (liuml07: rev 84c626407925e03ee2ef11faba9324d5c55b8e93) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java > Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat' > --- > > Key: HDFS-10892 > URL: https://issues.apache.org/jira/browse/HDFS-10892 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs, shell, test >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-10892.000.patch, HDFS-10892.001.patch, > HDFS-10892.002.patch, HDFS-10892.003.patch, HDFS-10892.004.patch, > HDFS-10892.005.patch > > > I did not find unit test in {{trunk}} code for following cases: > - HDFS command {{dfs -tail}} > - HDFS command {{dfs -stat}} > I think it still merits to have one though the commands have served us for > years. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531186#comment-15531186 ] Hudson commented on HDFS-10914: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10509 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10509/]) HDFS-10914. Move remnants of oah.hdfs.client to hadoop-hdfs-client. (wang: rev 92e5e9159850c01635091ea6ded0d8ee76691a9a) * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/CreateEncryptionZoneFlag.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsUtils.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/package-info.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsUtils.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/package-info.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/CreateEncryptionZoneFlag.java > Move remnants of oah.hdfs.client to hadoop-hdfs-client > -- > > Key: HDFS-10914 > URL: https://issues.apache.org/jira/browse/HDFS-10914 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: hdfs-10914.001.patch, hdfs-10914.002.patch > > > Some remaining classes in the oah.hdfs.client package are still in > hadoop-hdfs rather than hadoop-hdfs-client. > This broke a client that depended on hadoop-client for HdfsAdmin. > hadoop-client now pulls in hadoop-hdfs-client rather than hadoop-hdfs, > meaning it lost access to HdfsAdmin. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10779) Rename does not need to re-solve destination
[ https://issues.apache.org/jira/browse/HDFS-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531185#comment-15531185 ] Hudson commented on HDFS-10779: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10509 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10509/]) HDFS-10779. Rename does not need to re-solve destination. Contributed by (kihwal: rev 5f34402adae191232fe78e62990396ca07f314bb) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java > Rename does not need to re-solve destination > > > Key: HDFS-10779 > URL: https://issues.apache.org/jira/browse/HDFS-10779 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-10779.patch > > > Rename uses {{FSDirectory.isDir(String)}} to determine if the destination is > a directory. This dissects the path, creates an IIP, checks if the last inode > is a directory. The rename operations already have the IIP and can check it > directly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'
[ https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-10892: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to {{trunk}} through {{branch-2.8}}. Thanks [~jnp] for review. > Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat' > --- > > Key: HDFS-10892 > URL: https://issues.apache.org/jira/browse/HDFS-10892 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs, shell, test >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-10892.000.patch, HDFS-10892.001.patch, > HDFS-10892.002.patch, HDFS-10892.003.patch, HDFS-10892.004.patch, > HDFS-10892.005.patch > > > I did not find unit test in {{trunk}} code for following cases: > - HDFS command {{dfs -tail}} > - HDFS command {{dfs -stat}} > I think it still merits to have one though the commands have served us for > years. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10850) getEZForPath should NOT throw FNF
[ https://issues.apache.org/jira/browse/HDFS-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531151#comment-15531151 ] Daryn Sharp commented on HDFS-10850: I'll try to dig out the stacktrace, but the code in question was checking the encryption zone of both source and dest to determine if a rename is possible or must fall back to copy. This explained why I started seeing spikes of getEZForPath calls when we don't even use encryption zones! IMHO, rename should throw an IOE-derived IncompatibleEncryptionZonesException instead of a bland IOE. This would allow a client to blindly attempt the rename, catch the specific exception, fall back to copy if required. In the vast majority of cases that removes 2 junk calls for non-existent EZs. In the name of compatibility, I lean towards reverting the incompatible change. This is one of the reasons 2.8 certification has ground to a halt. One could argue that most calls throw FNF because they query or manipulate a specific path. If it's not there, game over. But in the case of encryption zones and erasure coding, these features are tree-based. Properties are inherited by searching up the ancestor paths so it's more about asking "what _is or would_ the EC or EZ be for this path?". In any case, it now requires 3X rpcs to do a simple rename. I want specific exceptions to eliminate the 2 junk calls. The trend of new "always on" features... with direct or indirect non-trivial performance costs... for common operations... when not used... is becoming rather irritating. I'm sisyp...@hdfs.hadoop.org and I approve this -rant- message. > getEZForPath should NOT throw FNF > - > > Key: HDFS-10850 > URL: https://issues.apache.org/jira/browse/HDFS-10850 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Assignee: Rakesh R >Priority: Blocker > > HDFS-9433 made an incompatible change to the semantics of getEZForPath. 
It > used to return the EZ of the closest ancestor path. It never threw FNF. A > common use of getEZForPath is determining if a file can be renamed, or must > be copied due to mismatched EZs. Notably, this has broken hive. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
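The client pattern Daryn proposes above can be sketched briefly: attempt the rename blindly and fall back to copy only when a specific exception signals mismatched encryption zones, eliminating the getEZForPath pre-checks entirely. Note that {{IncompatibleEncryptionZonesException}} is the *proposed* exception, not an existing API, and {{Mover}} is a hypothetical stand-in for the caller's rename/copy primitives.

```java
import java.io.IOException;

// Hypothetical sketch of "rename falling back to copy on a special
// IOException". The exception type is proposed in the discussion above;
// it does not exist in Hadoop at the time of writing.
public class RenameOrCopy {
    /** Proposed: thrown when src and dst are in different encryption zones. */
    public static class IncompatibleEncryptionZonesException extends IOException {}

    /** Stand-in for the filesystem operations the caller already has. */
    public interface Mover {
        void rename(String src, String dst) throws IOException;
        void copy(String src, String dst) throws IOException;
    }

    /** Returns "renamed" or "copied"; no getEZForPath pre-checks are needed. */
    public static String move(Mover fs, String src, String dst) throws IOException {
        try {
            fs.rename(src, dst);
            return "renamed";
        } catch (IncompatibleEncryptionZonesException e) {
            fs.copy(src, dst);   // cross-EZ: fall back to copy (+ delete of src)
            return "copied";
        }
    }
}
```

For the common case of paths with no encryption zones at all, the rename simply succeeds, which is exactly how this removes the two junk RPCs.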
[jira] [Updated] (HDFS-10779) Rename does not need to re-solve destination
[ https://issues.apache.org/jira/browse/HDFS-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-10779: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) Committed this to trunk through branch-2.8. > Rename does not need to re-solve destination > > > Key: HDFS-10779 > URL: https://issues.apache.org/jira/browse/HDFS-10779 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-10779.patch > > > Rename uses {{FSDirectory.isDir(String)}} to determine if the destination is > a directory. This dissects the path, creates an IIP, checks if the last inode > is a directory. The rename operations already have the IIP and can check it > directly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10914: --- Resolution: Fixed Fix Version/s: 3.0.0-alpha2 2.8.0 Release Note: The remaining classes in the org.apache.hadoop.hdfs.client package have been moved from hadoop-hdfs to hadoop-hdfs-client. Status: Resolved (was: Patch Available) Thanks for reviewing Eddy, committed to trunk, branch-2, branch-2.8 > Move remnants of oah.hdfs.client to hadoop-hdfs-client > -- > > Key: HDFS-10914 > URL: https://issues.apache.org/jira/browse/HDFS-10914 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: hdfs-10914.001.patch, hdfs-10914.002.patch > > > Some remaining classes in the oah.hdfs.client package are still in > hadoop-hdfs rather than hadoop-hdfs-client. > This broke a client that depended on hadoop-client for HdfsAdmin. > hadoop-client now pulls in hadoop-hdfs-client rather than hadoop-hdfs, > meaning it lost access to HdfsAdmin. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-10824: - Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Thanks for the branch-2 patch [~xiaobingo]. Committed this for 2.8.0. > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-10824-branch-2.006.patch, HDFS-10824.000.patch, > HDFS-10824.001.patch, HDFS-10824.002.patch, HDFS-10824.003.patch, > HDFS-10824.004.patch, HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
> adminState|DatanodeInfo$AdminStates (id=6861) > |blockPoolUsed|215192| > |cacheCapacity|0| > |cacheUsed|0| > |capacity|998164971520| > |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)| > |dependentHostNames|LinkedList (id=6863)| > |dfsUsed|215192| > |hostName|"127.0.0.1" (id=6864)| > |infoPort|64222| > |infoSecurePort|0| > |ipAddr|"127.0.0.1" (id=6865)| > |ipcPort|64223| > |lastUpdate|1472682790948| > |lastUpdateMonotonic|209605640| > |level|0| > |location|"/default-rack" (id=6866)| > |maintenanceExpireTimeInMS|0| > |parent|null| > |peerHostName|null| > |remaining|20486512640| > |softwareVersion|null| > |upgradeDomain|null| > |xceiverCount|1| > |xferAddr|"127.0.0.1:64220" (id=6855)| > |xferPort|64220| > [0]StorageReport (id=6856) > |blockPoolUsed|4096| > |capacity|499082485760| > |dfsUsed|4096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6869)| > [1]StorageReport (id=6859) > |blockPoolUsed|211096| > |capacity|499082485760| > |dfsUsed|211096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6872)| -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531104#comment-15531104 ] Hadoop QA commented on HDFS-10824: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 40s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 189 unchanged - 2 fixed = 190 total (was 191) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 50s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}125m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_101 Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAMetrics | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | JDK v1.7.0_111 Failed junit tests | hadoop.hdfs.TestDFSShell | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Issue | HDFS-10824 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830781/HDFS-10824-branch-2.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3a771e888331 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | |
[jira] [Commented] (HDFS-10779) Rename does not need to re-solve destination
[ https://issues.apache.org/jira/browse/HDFS-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531097#comment-15531097 ] Kihwal Lee commented on HDFS-10779: --- +1 nice improvement to not re-resolve the same thing multiple times. > Rename does not need to re-solve destination > > > Key: HDFS-10779 > URL: https://issues.apache.org/jira/browse/HDFS-10779 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments: HDFS-10779.patch > > > Rename uses {{FSDirectory.isDir(String)}} to determine if the destination is > a directory. This dissects the path, creates an IIP, and checks whether the last inode > is a directory. The rename operations already have the IIP and can check it > directly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI
[ https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531081#comment-15531081 ] Andrew Wang commented on HDFS-10918: Thanks for working on this Xiao, a few comments: * I don't think we need that checkAccess call, since the NN will enforce permissions already when the client calls getFileInfo * Is it possible to call dfs.getFileStatus and cast to a HdfsFileStatus, or some other way of code sharing? Note that there's extra handling we need to do to handle relative paths which is not handled by the current getFeInfo implementation. Symlinks also. * Rather than "getFeInfo" in CryptoAdmin, can we expand to "getFileEncryptionInfo" for the user visible flag? * This probably also needs a doc update > Add a tool to get FileEncryptionInfo from CLI > - > > Key: HDFS-10918 > URL: https://issues.apache.org/jira/browse/HDFS-10918 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-10918.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java
[ https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531057#comment-15531057 ] Fenghua Hu commented on HDFS-10690: --- [~xyao], thanks for the help! > Optimize insertion/removal of replica in ShortCircuitCache.java > --- > > Key: HDFS-10690 > URL: https://issues.apache.org/jira/browse/HDFS-10690 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 3.0.0-alpha2 >Reporter: Fenghua Hu >Assignee: Fenghua Hu > Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch, > HDFS-10690.003.patch, HDFS-10690.004.patch, HDFS-10690.005.patch, > HDFS-10690.006.patch, ShortCircuitCache_LinkedMap.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > Currently in ShortCircuitCache, two TreeMap objects are used to track the > cached replicas. > private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>(); > private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new > TreeMap<>(); > TreeMap employs a Red-Black tree for sorting. This isn't an issue when using > traditional HDDs. But when using high-performance SSD/PCIe Flash, the cost of > inserting/removing an entry becomes considerable. > To mitigate it, we designed a new list-based structure for replica tracking. > The list is a double-linked FIFO. FIFO order is time-based, thus insertion is a > very low-cost operation. On the other hand, a list is not lookup-friendly. To > address this issue, we introduce two references into the ShortCircuitReplica > object. > ShortCircuitReplica next = null; > ShortCircuitReplica prev = null; > In this way, lookup is not needed when removing a replica from the list. We > only need to modify its predecessor's and successor's references in the lists. > Our tests showed a 15-50% performance improvement when using PCIe flash > as the storage media. > The original patch is against 2.6.4; now I am porting it to Hadoop trunk, and the > patch will be posted soon. 
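The double-linked FIFO described above can be sketched as an intrusive list. The names below are illustrative stand-ins, not the actual ShortCircuitCache patch, but they show why removal needs no lookup: each element carries its own prev/next references, so unlinking a known replica is just relinking its two neighbors in O(1), with no tree rebalancing.

```java
// Sketch of an intrusive doubly-linked FIFO (names are hypothetical).
// Head is the oldest element, i.e. the next eviction candidate.
class Replica {
    final String id;
    Replica prev, next;      // intrusive links owned by the FIFO
    Replica(String id) { this.id = id; }
}

class ReplicaFifo {
    private Replica head, tail;

    void append(Replica r) {      // O(1): new replicas go to the tail
        r.prev = tail;
        r.next = null;
        if (tail != null) tail.next = r; else head = r;
        tail = r;
    }

    void remove(Replica r) {      // O(1): relink neighbors, no search
        if (r.prev != null) r.prev.next = r.next; else head = r.next;
        if (r.next != null) r.next.prev = r.prev; else tail = r.prev;
        r.prev = r.next = null;
    }

    Replica oldest() { return head; }
}
```

Compare with a TreeMap, where every insert and remove pays an O(log n) rebalancing cost; that cost is invisible behind slow HDD I/O but shows up once the storage medium is fast.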
[jira] [Commented] (HDFS-10912) Ozone:SCM: Add safe mode support to NodeManager.
[ https://issues.apache.org/jira/browse/HDFS-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531044#comment-15531044 ] Arpit Agarwal commented on HDFS-10912: -- bq. Arpit also made the same comment, and suggested that I use something like a "chill mode" My comment was not intended to be taken too seriously. :) > Ozone:SCM: Add safe mode support to NodeManager. > > > Key: HDFS-10912 > URL: https://issues.apache.org/jira/browse/HDFS-10912 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-10912-HDFS-7240.001.patch > > > Add Safe mode support : That is add the ability to force exit or enter safe > mode. As well as get the current safe mode status from node manager. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10916) Switch from "raw" to "system" xattr namespace for erasure coding policy
[ https://issues.apache.org/jira/browse/HDFS-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531033#comment-15531033 ] Andrew Wang commented on HDFS-10916: Test failures look unrelated. > Switch from "raw" to "system" xattr namespace for erasure coding policy > --- > > Key: HDFS-10916 > URL: https://issues.apache.org/jira/browse/HDFS-10916 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-10916.001.patch > > > Currently EC policy is stored as in the raw xattr namespace. It would be > better to store this in "system" like storage policy. > Raw is meant for attributes which need to be preserved across a distcp, like > encryption info. EC policy is more similar to replication factor or storage > policy, which can differ between the src and target of a distcp. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10916) Switch from "raw" to "system" xattr namespace for erasure coding policy
[ https://issues.apache.org/jira/browse/HDFS-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10916: --- Summary: Switch from "raw" to "system" xattr namespace for erasure coding policy (was: Switch from "raw" to "system" namespace for erasure coding policy) > Switch from "raw" to "system" xattr namespace for erasure coding policy > --- > > Key: HDFS-10916 > URL: https://issues.apache.org/jira/browse/HDFS-10916 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-10916.001.patch > > > Currently EC policy is stored as in the raw xattr namespace. It would be > better to store this in "system" like storage policy. > Raw is meant for attributes which need to be preserved across a distcp, like > encryption info. EC policy is more similar to replication factor or storage > policy, which can differ between the src and target of a distcp. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10912) Ozone:SCM: Add safe mode support to NodeManager.
[ https://issues.apache.org/jira/browse/HDFS-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530970#comment-15530970 ] Anu Engineer commented on HDFS-10912: - [~jingzhao] Thanks for your comments. bq. the first one is to make sure the SCM receives enough DN registration. If we persist the container-node mapping in SCM, we do not need to wait for full container reports. Also SCM does not take the responsibility for maintaining the container states/durability, thus this type of safemode is very lightweight compared with the current NN safemode. (maybe we can rename it ...) Completely agree. Arpit also made the same comment, and suggested that I use something like a "chill mode" -- to indicate we are just waiting for a minimal set of info instead of a classical safe mode. Shall I rename it to "Chill Mode" to indicate this is something other than the safe mode? I am open to any naming suggestions you might have. bq. Therefore, to me forceExitSafeMode/forceEnterSafeMode/isInManualSafeMode can be moved to SCM level. forceExitSafeMode will reset both the manual safemode and the safemode in nodemanager. Absolutely; in fact, when we have the full SCM you will see that this call is used only by SCM, and SCM in turn will expose this as a single call. We are just layering these calls for the time being. When the SCM code is in, you will see that a forceEnterSafeMode will put both the container manager and the node manager into safe mode via a single call. > Ozone:SCM: Add safe mode support to NodeManager. > > > Key: HDFS-10912 > URL: https://issues.apache.org/jira/browse/HDFS-10912 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-10912-HDFS-7240.001.patch > > > Add Safe mode support : That is add the ability to force exit or enter safe > mode. As well as get the current safe mode status from node manager. 
[jira] [Commented] (HDFS-10922) Adding additional unit tests for Trash
[ https://issues.apache.org/jira/browse/HDFS-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530965#comment-15530965 ] Xiaoyu Yao commented on HDFS-10922: --- Propose to add the following test cases: # test users can delete their own trash directory # test users can delete an empty directory and the directory is moved to trash # test fs.trash.interval with invalid values such as 0 or negative # test fs.trash.interval with namenode restart. > Adding additional unit tests for Trash > -- > > Key: HDFS-10922 > URL: https://issues.apache.org/jira/browse/HDFS-10922 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Xiaoyu Yao > > This ticket is opened to track adding unit tests for Trash. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
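As a rough illustration of the semantics the proposed trash tests would exercise, here is a plain-java.nio sketch (this is not Hadoop's org.apache.hadoop.fs.Trash API; class and method names are hypothetical): with trash enabled, a delete moves the path under the user's .Trash/Current directory instead of removing it outright, which is what tests 1 and 2 above would assert.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative sketch only: mimics "move to trash on delete" with local
// files, mirroring HDFS's <home>/.Trash/Current layout.
class TrashSketch {
    private final Path trashCurrent;

    TrashSketch(Path userHome) {
        this.trashCurrent = userHome.resolve(".Trash").resolve("Current");
    }

    // "Delete" a path (file or empty directory) by moving it into trash.
    Path moveToTrash(Path p) throws IOException {
        Files.createDirectories(trashCurrent);
        Path target = trashCurrent.resolve(p.getFileName().toString());
        return Files.move(p, target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```

A real test against HDFS would instead set fs.trash.interval in the configuration, delete through the FsShell, and assert the entry landed under the user's trash directory.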
[jira] [Commented] (HDFS-10897) Ozone: SCM: Add NodeManager
[ https://issues.apache.org/jira/browse/HDFS-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530949#comment-15530949 ] Anu Engineer commented on HDFS-10897: - [~jingzhao] Thanks for taking time out to review the code and provide comments. Please see my thoughts on your comments. bq. My main concern is about the current way tracking the heartbeat time for DataNodes. Instead of using 3 String-Long maps, I think it's better to use DatanodeInfo (or a simplified version) to store the latest heartbeat/report time. I did look at DatanodeInfo; in fact, that is where I started working from. However, the current class has a lot of HDFS baggage, hence, as you suggested, I wanted to create a new simplified class. In future patches you will see it getting developed. String is the minimal placeholder until we have the right classes. bq. Later we still need to capture other information about DataNodes (its current load and state etc.) thus DatanodeInfo can be the central place to store all the information about a DN (just like today's HDFS). Completely agree; in future patches we will have protobuf/java classes that more truly reflect the container semantics. bq. Also in this way we only need to maintain a single datanode map (which is more static compared with the current 3 maps) and most of the lock protection can be put into the DatanodeInfo level. The reason for breaking up these data structures into 3 separate maps is to reduce the single-lock contention we seem to run into in the current HDFS. The nodes map will contain the actual node information, and these maps just indicate the state of the node. That way we need to hold locks on these structures for very short times. Right now we don't see an issue, but I was just being paranoid and trying to avoid any single lock with long path lengths as in HDFS. The idea followed in this NodeManager is to use queues instead of locks to maintain the heartbeat (HB) flow. bq. 
Also with this change we can have a more fair way for heartbeat time calculation: for every heartbeat msg, we can update the corresponding datanode's latest update time before putting the heartbeat into the queue, in order to avoid the penalty on DN due to SCM's local latency. Just want to make sure that we are both on the same page on this one. In the current scheme, we get a heartbeat and insert it into a queue -- with no timestamp. Eventually a heartbeat processor will pick up that heartbeat and update a map recording the time at which the heartbeat was processed. If the SCM is for some reason slow, the timestamp is added only when the processor picks the heartbeat up. In other words, a heartbeat is not penalized for waiting in the queue. There is no penalty for waiting because we process the HB queue before we process the stale or dead node lists. If we add a timestamp to the queue, then we have an accurate HB timestamp, but then datanodes do get penalized for waiting. It is a trivial change to add a timestamp to the HB queue, but the current code favors the datanodes, especially when the SCM is under load. So I just wanted to make sure that adding a timestamp to the HB, to indicate the wall-clock time it actually arrived, is indeed what you want to do. bq. For Node state, we may want to follow the current HDFS, i.e., we need to have AdminStates which includes NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED, ENTERING_MAINTENANCE, and IN_MAINTENANCE. Will do, thanks for bringing these up. I intend to do them as a series of patches since it is easier to code review that way. I will add AdminStates -- right now this patch is closer to the core functionality of HeartbeatManager in HDFS. As we go forward we will need to add the rest of the functionality you described. bq. getNodes/getNodeCount can be defined in a metrics interface (like today's FSNamesystemMBean). I will file a JIRA for this and move it. 
Right now we don't have a metrics interface, but we should build one; it is also useful when writing tests. bq. Any reason we need a NodeManager interface? When we have the container manager, there are code paths in it that depend on NodeManager. The interface allows us to pass a TestNodeManager easily to classes that call into the node manager. I am really trying to make sure that we can write tests for the node manager without using MiniOzoneCluster. I want to use MiniOzoneCluster only for end-to-end tests, and focus more on cleaner and simpler unit tests. The interfaces make it much easier and cleaner than using mocks. > Ozone: SCM: Add NodeManager > --- > > Key: HDFS-10897 > URL: https://issues.apache.org/jira/browse/HDFS-10897 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >
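The queue-based flow described above, where the timestamp is applied when the processor drains the queue rather than when the heartbeat arrives, can be sketched as follows. The class and method names are assumed for illustration; this is not the actual SCM NodeManager code.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch: heartbeats are enqueued with no timestamp; a
// processor thread later drains the queue and stamps the processing-time
// clock, so time spent waiting in the queue under SCM load is not held
// against the datanode.
class HeartbeatTracker {
    private final Queue<String> pending = new ArrayDeque<>();   // datanode IDs
    private final Map<String, Long> lastSeen = new HashMap<>();

    // RPC path: O(1), no clock read.
    synchronized void onHeartbeat(String datanodeId) {
        pending.add(datanodeId);
    }

    // Processor thread: drain the HB queue *before* evaluating the
    // stale/dead lists, stamping each heartbeat as of processing time.
    synchronized void processAll(long nowMillis) {
        String dn;
        while ((dn = pending.poll()) != null) {
            lastSeen.put(dn, nowMillis);
        }
    }

    synchronized boolean isStale(String dn, long nowMillis, long staleMillis) {
        Long t = lastSeen.get(dn);
        return t == null || nowMillis - t > staleMillis;
    }
}
```

The alternative discussed in the thread would record the clock inside onHeartbeat instead, giving an accurate arrival time at the cost of penalizing datanodes for queueing delay inside the SCM.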
[jira] [Created] (HDFS-10922) Adding additional unit tests for Trash
Xiaoyu Yao created HDFS-10922: - Summary: Adding additional unit tests for Trash Key: HDFS-10922 URL: https://issues.apache.org/jira/browse/HDFS-10922 Project: Hadoop HDFS Issue Type: Sub-task Components: test Reporter: Xiaoyu Yao This ticket is opened to track adding unit tests for Trash. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10912) Ozone:SCM: Add safe mode support to NodeManager.
[ https://issues.apache.org/jira/browse/HDFS-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530843#comment-15530843 ] Jing Zhao commented on HDFS-10912: -- Thanks for the work, [~anu]. For SCM, I think we may have two different types of "safemode": # the first one is to make sure the SCM receives enough DN registration. If we persist the container-node mapping in SCM, we do not need to wait for full container reports. Also SCM does not take the responsibility for maintaining the container states/durability, thus this type of safemode is very lightweight compared with the current NN safemode. (maybe we can rename it ...) # the second one is the manual safemode (triggered by {{forceEnterSafeMode}}). This safemode is actually against the whole SCM instead of its node manager (just like in today's HDFS the manual safemode is for the whole NN instead of the blockmanager). Therefore, to me {{forceExitSafeMode}}/{{forceEnterSafeMode}}/{{isInManualSafeMode}} can be moved to SCM level. {{forceExitSafeMode}} will reset both the manual safemode and the safemode in nodemanager. > Ozone:SCM: Add safe mode support to NodeManager. > > > Key: HDFS-10912 > URL: https://issues.apache.org/jira/browse/HDFS-10912 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-10912-HDFS-7240.001.patch > > > Add Safe mode support : That is add the ability to force exit or enter safe > mode. As well as get the current safe mode status from node manager. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-10897) Ozone: SCM: Add NodeManager
[ https://issues.apache.org/jira/browse/HDFS-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530818#comment-15530818 ] Jing Zhao edited comment on HDFS-10897 at 9/28/16 8:43 PM: --- Thanks for working on this, [~anu]. The patch looks good to me overall. Some comments: # My main concern is about the current way tracking the heartbeat time for DataNodes. Instead of using 3 String-Long maps, I think it's better to use {{DatanodeInfo}} (or a simplified version) to store the latest heartbeat/report time. Later we still need to capture other information about DataNodes (its current load and state etc.) thus {{DatanodeInfo}} can be the central place to store all the information about a DN (just like today's HDFS). Also in this way we only need to maintain a single datanode map (which is more static compared with the current 3 maps) and most of the lock protection can be put into the DatanodeInfo level. # Also with this change we can have a more fair way for heartbeat time calculation: for every heartbeat msg, we can update the corresponding datanode's latest update time before putting the heartbeat into the queue, in order to avoid the penalty on DN due to SCM's local latency. # For Node state, we may want to follow the current HDFS, i.e., we need to have AdminStates which includes NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED, ENTERING_MAINTENANCE, and IN_MAINTENANCE. Stale/dead are calculated based on the latest heartbeat time thus maybe we do not need to define them as an explicit state (and for dead nodes we may want to directly remove it). {code} 36 * 4. A node can be in any of these 4 states: {HEALTHY, STALE, DEAD, 37 * DECOMMISSIONED} 38 * 39 * HEALTHY - It is a datanode that is regularly heartbeating us. 40 * 41 * STALE - A datanode for which we have missed few heart beats. 42 * 43 * DEAD - A datanode that we have not heard from for a while. 
44 * 45 * DECOMMISSIONED - Someone told us to remove this node from the tracking 46 * list, by calling removeNode. We will throw away this nodes info soon. {code} # {{getNodes}}/{{getNodeCount}} can be defined in a metrics interface (like today's FSNamesystemMBean). # Any reason we need a NodeManager interface? > Ozone: SCM: Add NodeManager > --- > > Key: HDFS-10897 > URL: https://issues.apache.org/jira/browse/HDFS-10897 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-10897-HDFS-7240.001.patch, > HDFS-10897-HDFS-7240.002.patch, HDFS-10897-HDFS-7240.003.patch > > > Add a nodeManager class that will
[jira] [Commented] (HDFS-10897) Ozone: SCM: Add NodeManager
[ https://issues.apache.org/jira/browse/HDFS-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530818#comment-15530818 ] Jing Zhao commented on HDFS-10897: -- Thanks for working on this, [~anu]. The patch looks good to me overall. Some comments: # My main concern is about the current way tracking the heartbeat time for DataNodes. Instead of using 3 String-Long maps, I think it's better to use {{DatanodeInfo}} to store the latest heartbeat/report time. Later we still need to capture other information about DataNodes (its current load and state etc.) thus {{DatanodeInfo}} can be the central place to store all the information about a DN (just like today's HDFS). Also in this way we only need to maintain a single datanode map (which is more static compared with the current 3 maps) and most of the lock protection can be put into the DatanodeInfo level. # Also with this change we can have a more fair way for heartbeat time calculation: for every heartbeat msg, we can update the corresponding datanode's latest update time before putting the heartbeat into the queue, in order to avoid the penalty on DN due to SCM's local latency. # For Node state, we may want to follow the current HDFS, i.e., we need to have AdminStates which includes NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED, ENTERING_MAINTENANCE, and IN_MAINTENANCE. Stale/dead are calculated based on the latest heartbeat time thus maybe we do not need to define them as an explicit state (and for dead nodes we may want to directly remove it). {code} 36 * 4. A node can be in any of these 4 states: {HEALTHY, STALE, DEAD, 37 * DECOMMISSIONED} 38 * 39 * HEALTHY - It is a datanode that is regularly heartbeating us. 40 * 41 * STALE - A datanode for which we have missed few heart beats. 42 * 43 * DEAD - A datanode that we have not heard from for a while. 44 * 45 * DECOMMISSIONED - Someone told us to remove this node from the tracking 46 * list, by calling removeNode. 
We will throw away this nodes info soon. {code} # {{getNodes}}/{{getNodeCount}} can be defined in a metrics interface (like today's FSNamesystemMBean). # Any reason we need a NodeManager interface? > Ozone: SCM: Add NodeManager > --- > > Key: HDFS-10897 > URL: https://issues.apache.org/jira/browse/HDFS-10897 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-10897-HDFS-7240.001.patch, > HDFS-10897-HDFS-7240.002.patch, HDFS-10897-HDFS-7240.003.patch > > > Add a nodeManager class that will be used by Storage Controller Manager > eventually. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart
[ https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530790#comment-15530790 ] Xiaobing Zhou commented on HDFS-10915: -- Thank you [~liuml07] for committing it. > Fix time measurement bug in > TestDatanodeRestart#testWaitForRegistrationOnRestart > > > Key: HDFS-10915 > URL: https://issues.apache.org/jira/browse/HDFS-10915 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 2.7.4, 3.0.0-alpha2 > > Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch > > > It should be milliseconds in the message of IOException. > {code} > } catch (org.apache.hadoop.ipc.RemoteException e) { > long elapsed = System.currentTimeMillis() - start; > // timers have at-least semantics, so it should be at least 5 seconds. > if (elapsed < 5000 || elapsed > 10000) { > throw new IOException(elapsed + " seconds passed.", e); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
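For reference, a corrected sketch of that check, simplified to omit the RemoteException handling and with an assumed 10-second upper bound: the elapsed value is in milliseconds, so the exception message should say milliseconds.

```java
import java.io.IOException;

// Simplified sketch of the fixed check (not the committed patch verbatim):
// System.currentTimeMillis() differences are milliseconds, so the message
// must report milliseconds, not seconds.
class ElapsedCheck {
    // Timers have at-least semantics: expect roughly 5s, and reject
    // anything outside a generous (assumed) 10s upper bound.
    static void checkElapsed(long elapsedMillis) throws IOException {
        if (elapsedMillis < 5000 || elapsedMillis > 10000) {
            throw new IOException(elapsedMillis + " milliseconds passed.");
        }
    }
}
```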
[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530788#comment-15530788 ] Xiaobing Zhou commented on HDFS-10824: -- Thank you [~arpiagariu], [~cnauroth] and [~anu] for reviewing/committing it. I posted branch-2 patch and re-opened the Jira to kick off Jenkins run. > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-10824-branch-2.006.patch, HDFS-10824.000.patch, > HDFS-10824.001.patch, HDFS-10824.002.patch, HDFS-10824.003.patch, > HDFS-10824.004.patch, HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
> adminState|DatanodeInfo$AdminStates (id=6861) > |blockPoolUsed|215192| > |cacheCapacity|0| > |cacheUsed|0| > |capacity|998164971520| > |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)| > |dependentHostNames|LinkedList (id=6863)| > |dfsUsed|215192| > |hostName|"127.0.0.1" (id=6864)| > |infoPort|64222| > |infoSecurePort|0| > |ipAddr|"127.0.0.1" (id=6865)| > |ipcPort|64223| > |lastUpdate|1472682790948| > |lastUpdateMonotonic|209605640| > |level|0| > |location|"/default-rack" (id=6866)| > |maintenanceExpireTimeInMS|0| > |parent|null| > |peerHostName|null| > |remaining|20486512640| > |softwareVersion|null| > |upgradeDomain|null| > |xceiverCount|1| > |xferAddr|"127.0.0.1:64220" (id=6855)| > |xferPort|64220| > [0]StorageReport (id=6856) > |blockPoolUsed|4096| > |capacity|499082485760| > |dfsUsed|4096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6869)| > [1]StorageReport (id=6859) > |blockPoolUsed|211096| > |capacity|499082485760| > |dfsUsed|211096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6872)| -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10824: - Attachment: HDFS-10824-branch-2.006.patch > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-10824-branch-2.006.patch, HDFS-10824.000.patch, > HDFS-10824.001.patch, HDFS-10824.002.patch, HDFS-10824.003.patch, > HDFS-10824.004.patch, HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10824: - Status: Patch Available (was: Reopened) > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-10824-branch-2.006.patch, HDFS-10824.000.patch, > HDFS-10824.001.patch, HDFS-10824.002.patch, HDFS-10824.003.patch, > HDFS-10824.004.patch, HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
[jira] [Reopened] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou reopened HDFS-10824: -- > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-10824-branch-2.006.patch, HDFS-10824.000.patch, > HDFS-10824.001.patch, HDFS-10824.002.patch, HDFS-10824.003.patch, > HDFS-10824.004.patch, HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
[jira] [Commented] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'
[ https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530774#comment-15530774 ] Jitendra Nath Pandey commented on HDFS-10892: - I think it makes sense to track utf8 tests outside this jira. These tests look good to me. +1 > Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat' > --- > > Key: HDFS-10892 > URL: https://issues.apache.org/jira/browse/HDFS-10892 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs, shell, test >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-10892.000.patch, HDFS-10892.001.patch, > HDFS-10892.002.patch, HDFS-10892.003.patch, HDFS-10892.004.patch, > HDFS-10892.005.patch > > > I did not find unit test in {{trunk}} code for following cases: > - HDFS command {{dfs -tail}} > - HDFS command {{dfs -stat}} > I think it still merits to have one though the commands have served us for > years. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock
[ https://issues.apache.org/jira/browse/HDFS-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530725#comment-15530725 ] Hanisha Koneru commented on HDFS-10896: --- +1 for v3 patch. Thanks [~xkrogen] for including changes from HDFS-10713. > Move lock logging logic from FSNamesystem into FSNamesystemLock > --- > > Key: HDFS-10896 > URL: https://issues.apache.org/jira/browse/HDFS-10896 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Labels: logging, namenode > Attachments: HDFS-10896.000.patch, HDFS-10896.001.patch, > HDFS-10896.002.patch, HDFS-10896.003.patch > > > There are a number of tickets (HDFS-10742, HDFS-10817, HDFS-10713, this > subtask's story HDFS-10475) which are adding/improving logging/metrics around > the {{FSNamesystemLock}}. All of this is done in {{FSNamesystem}} right now, > which is polluting the namesystem with ThreadLocal variables, timing > counters, etc. which are only relevant to the lock itself and the number of > these increases as the logging/metrics become more sophisticated. It would be > best to move these all into FSNamesystemLock to keep the metrics/logging tied > directly to the item of interest. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10826) The result of fsck should be CRITICAL when there are unrecoverable ec block groups.
[ https://issues.apache.org/jira/browse/HDFS-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530710#comment-15530710 ] Jing Zhao commented on HDFS-10826: -- # The failure of TestLeaseRecoveryStriped seems related. Could you please take a look, [~tasanuma0829]? # {{countNodes}} has already been called in {{createLocatedBlock}}. We can reuse the result. {code} 1071final boolean isCorrupt; 1072if (blk.isStriped()) { 1073 BlockInfoStriped sblk = (BlockInfoStriped) blk; 1074 isCorrupt = numCorruptReplicas != 0 && 1075 countNodes(blk).liveReplicas() < sblk.getRealDataBlockNum(); 1076} else { 1077 isCorrupt = numCorruptReplicas != 0 && numCorruptReplicas == numNodes; 1078} {code} > The result of fsck should be CRITICAL when there are unrecoverable ec block > groups. > --- > > Key: HDFS-10826 > URL: https://issues.apache.org/jira/browse/HDFS-10826 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-10826.2.patch, HDFS-10826.3.patch, > HDFS-10826.WIP.1.patch > > > For RS-6-3, when there is one ec block group and > 1) 0~3 out of 9 internal blocks are missing, the result of fsck is HEALTY. > 2) 4~8 out of 9 internal blocks are missing, the result of fsck is HEALTY. > {noformat} > Erasure Coded Block Groups: > Total size:536870912 B > Total files: 1 > Total block groups (validated):1 (avg. 
block group size 536870912 B) > > UNRECOVERABLE BLOCK GROUPS: 1 (100.0 %) > > Minimally erasure-coded block groups: 0 (0.0 %) > Over-erasure-coded block groups: 0 (0.0 %) > Under-erasure-coded block groups: 1 (100.0 %) > Unsatisfactory placement block groups: 0 (0.0 %) > Default ecPolicy: RS-DEFAULT-6-3-64k > Average block group size: 5.0 > Missing block groups: 0 > Corrupt block groups: 0 > Missing internal blocks: 4 (44.43 %) > FSCK ended at Wed Aug 31 13:42:05 JST 2016 in 4 milliseconds > The filesystem under path '/' is HEALTHY > {noformat} > 3) 9 out of 9 internal blocks are missing, the result of fsck is CRITICAL. > (Because it is regarded as a missing block group.) > In case 2), the result should be CRITICAL since the ec block group is > unrecoverable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
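The recoverability rule behind cases 1) through 3) is simple arithmetic: an RS(d, p) block group can be reconstructed as long as at least d of its d + p internal blocks survive, so for RS-6-3 up to 3 missing internal blocks are tolerable and 4 or more are not. A minimal sketch of that rule (illustrative only, not the HDFS implementation):

```java
// Sketch of the RS(d, p) recoverability rule: the group is recoverable
// iff at least d of the d + p internal blocks survive. Illustrative code,
// not taken from the HDFS fsck implementation.
public class EcRecoverability {
    static boolean isRecoverable(int dataUnits, int parityUnits, int missing) {
        int surviving = dataUnits + parityUnits - missing;
        return surviving >= dataUnits;
    }

    public static void main(String[] args) {
        // RS-6-3: 3 missing internal blocks is still recoverable, 4 is not.
        System.out.println(isRecoverable(6, 3, 3)); // true
        System.out.println(isRecoverable(6, 3, 4)); // false
    }
}
```

Case 2) in the description is exactly the `missing >= 4` branch: the group is unrecoverable, so reporting HEALTHY understates the damage.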
[jira] [Commented] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock
[ https://issues.apache.org/jira/browse/HDFS-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530628#comment-15530628 ] Hadoop QA commented on HDFS-10896: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 585 unchanged - 4 fixed = 586 total (was 589) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 82m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.TestDistributedFileSystem | | | hadoop.hdfs.server.mover.TestStorageMover | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10896 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830754/HDFS-10896.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 4e136ba819a8 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e19b37e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16914/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16914/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16914/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16914/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT
[jira] [Commented] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530612#comment-15530612 ] Hadoop QA commented on HDFS-10913: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 74 unchanged - 1 fixed = 79 total (was 75) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 11s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 97m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10913 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830751/HDFS-10913.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e3bf6e53ccf7 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e19b37e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16913/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16913/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16913/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16913/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Refactor BlockReceiver by introducing faults injector to enhance testability > of detecting slow mirrors >
[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530601#comment-15530601 ] Hudson commented on HDFS-10824: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10508 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10508/]) HDFS-10824. MiniDFSCluster#storageCapacities has no effects on real (arp: rev c3b235e56597d55387b4003e376faee10b473d55) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, > HDFS-10824.002.patch, HDFS-10824.003.patch, HDFS-10824.004.patch, > HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
[jira] [Commented] (HDFS-10314) A new tool to sync current HDFS view to specified snapshot
[ https://issues.apache.org/jira/browse/HDFS-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530595#comment-15530595 ] Yongjun Zhang commented on HDFS-10314: -- Had a discussion with [~jingzhao], and we had the following agreement: 1. For now, he will be fine with option 2 stated in https://issues.apache.org/jira/browse/HDFS-10314?focusedCommentId=15524359=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15524359 as long as we document it well, even though it's not his favorite. In that case, we can continue to work on HDFS-9820. 2. When creating a new tool in the future (HDFS-10314), we need to do the following: * refactor the DistCp code to separate out the snapshot sync part (that handles rename/delete per snapshot diff) and copyList calculation part to its own class, e.g., DistCpPrepare. * let both DistCp and DistSync to call DistCpPrepare for the functionality they need * Modify DistCp to take an optional new argument copyListing. * Let DistSync call DistCpPrepare to do the snapshot sync part and copyListing creation part, and then pass the copyListing to DIstCp. Please feel free to correct/add if I'm inaccurate or missed anything. Thanks much Jing. > A new tool to sync current HDFS view to specified snapshot > -- > > Key: HDFS-10314 > URL: https://issues.apache.org/jira/browse/HDFS-10314 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HDFS-10314.001.patch > > > HDFS-9820 proposed adding -rdiff switch to distcp, as a reversed operation of > -diff switch. > Upon discussion with [~jingzhao], we will introduce a new tool that wraps > around distcp to achieve the same purpose. > I'm thinking about calling the new tool "rsync", similar to unix/linux > command "rsync". The "r" here means remote. > The syntax that simulate -rdiff behavior proposed in HDFS-9820 is > {code} > rsync > {code} > This command ensure is newer than . 
> I think, In the future, we can add another command to have the functionality > of -diff switch of distcp. > {code} > sync > {code} > that ensures is older than . > Thanks [~jingzhao]. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-10824: - Resolution: Fixed Fix Version/s: 3.0.0-alpha2 Status: Resolved (was: Patch Available) +1 I committed this to trunk. Thanks [~xiaobingo] and thanks [~anu] and [~cnauroth] for the reviews. Xiaobing, if you want to post a branch-2 patch I can commit that too. The conflict looks straightforward but I'd prefer Jenkins do a full unit test run to be safe. > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, > HDFS-10824.002.patch, HDFS-10824.003.patch, HDFS-10824.004.patch, > HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
[jira] [Commented] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI
[ https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530458#comment-15530458 ] Xiao Chen commented on HDFS-10918: -- Hi [~andrew.wang], Could you take a look and see if this makes sense to you? Thanks a lot. > Add a tool to get FileEncryptionInfo from CLI > - > > Key: HDFS-10918 > URL: https://issues.apache.org/jira/browse/HDFS-10918 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-10918.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10913) Refactor BlockReceiver by introducing a fault injector to enhance testability of detecting slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530416#comment-15530416 ] Xiaoyu Yao commented on HDFS-10913: --- Thanks [~xiaobingo] for reporting the issue and posting the patch for it. The patch looks good to me overall. I only have one question about the unit test: can you override the new delay methods in DataNodeFaultInjector to include a delay with verification, if possible? e.g. (with {{sleepTimeMs}} as a placeholder for the desired delay):
{code}
final DataNodeFaultInjector dnInjector = new DataNodeFaultInjector() {
  @Override
  public void delayWritingDataToDisk() throws IOException {
    try {
      Thread.sleep(sleepTimeMs);
    } catch (InterruptedException ie) {
      throw new IOException("Interrupted while sleeping. Bailing out.");
    }
  }
};
{code}
> Refactor BlockReceiver by introducing a fault injector to enhance testability of detecting slow mirrors
> --
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.8.0
> Reporter: Xiaobing Zhou
> Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as the threshold to detect slow mirrors. BlockReceiver only writes some warning logs. In order to better test the behavior of slow mirrors, fault injectors need to be introduced.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
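The swappable-injector pattern under discussion — a static hook whose delay can also be verified by the test — can be sketched in a self-contained form. All class and method names below are illustrative stand-ins for DataNodeFaultInjector and the DataNode write path, not the actual Hadoop APIs:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for DataNodeFaultInjector: a static hook whose
// default implementation is a no-op, so production code pays no cost.
class FaultInjector {
    private static FaultInjector instance = new FaultInjector();
    static FaultInjector get() { return instance; }
    static void set(FaultInjector injector) { instance = injector; }
    void delayWritingDataToDisk() {}
}

// Hypothetical stand-in for the code under test: it calls the hook at the
// point where a fault (here, a slow disk write) should be injectable.
class DiskWriter {
    void write() {
        FaultInjector.get().delayWritingDataToDisk();
        // ... actual write elided ...
    }
}

public class InjectorDemo {
    public static void main(String[] args) {
        AtomicInteger delays = new AtomicInteger();
        FaultInjector previous = FaultInjector.get();
        FaultInjector.set(new FaultInjector() {
            @Override
            void delayWritingDataToDisk() {
                delays.incrementAndGet();   // verification: count invocations
                try {
                    Thread.sleep(10);       // simulated slow disk
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        try {
            new DiskWriter().write();
        } finally {
            FaultInjector.set(previous);    // avoid side effects on later tests
        }
        System.out.println("delays=" + delays.get());
    }
}
```

Restoring the previous injector in the finally block mirrors the side-effect concern raised later in this thread.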
[jira] [Commented] (HDFS-10810) Setreplication removing block from under construction temporarily when batch IBR is enabled.
[ https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530405#comment-15530405 ] Hadoop QA commented on HDFS-10810: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 85m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestDFSShell | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10810 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830735/HDFS-10810-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c108c3519360 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e19b37e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16912/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16912/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16912/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Setreplication removing block from underconstrcution temporarily when batch > IBR is enabled. > > > Key: HDFS-10810 > URL: https://issues.apache.org/jira/browse/HDFS-10810 > Project: Hadoop HDFS >
[jira] [Updated] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock
[ https://issues.apache.org/jira/browse/HDFS-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-10896: --- Attachment: HDFS-10896.003.patch Attached v003 patch with checkstyle fix. Thanks for the review, [~eddyxu]. > Move lock logging logic from FSNamesystem into FSNamesystemLock > --- > > Key: HDFS-10896 > URL: https://issues.apache.org/jira/browse/HDFS-10896 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Labels: logging, namenode > Attachments: HDFS-10896.000.patch, HDFS-10896.001.patch, > HDFS-10896.002.patch, HDFS-10896.003.patch > > > There are a number of tickets (HDFS-10742, HDFS-10817, HDFS-10713, this > subtask's story HDFS-10475) which are adding/improving logging/metrics around > the {{FSNamesystemLock}}. All of this is done in {{FSNamesystem}} right now, > which is polluting the namesystem with ThreadLocal variables, timing > counters, etc. which are only relevant to the lock itself and the number of > these increases as the logging/metrics become more sophisticated. It would be > best to move these all into FSNamesystemLock to keep the metrics/logging tied > directly to the item of interest. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi
[ https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530382#comment-15530382 ] Manoj Govindassamy commented on HDFS-9850: -- Thanks [~anu], [~eddyxu] for the review and commit help. > DiskBalancer : Explore removing references to FsVolumeSpi > -- > > Key: HDFS-9850 > URL: https://issues.apache.org/jira/browse/HDFS-9850 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: 3.0.0-alpha2 >Reporter: Anu Engineer >Assignee: Manoj Govindassamy > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-9850.001.patch, HDFS-9850.002.patch, > HDFS-9850.003.patch, HDFS-9850.004.patch > > > In HDFS-9671, [~arpitagarwal] commented that we should explore the > possibility of removing references to FsVolumeSpi at any point and only deal > with storage ID. We are not sure if this is possible, this JIRA is to explore > if that can be done without issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10913) Refactor BlockReceiver by introducing a fault injector to enhance testability of detecting slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530342#comment-15530342 ] Xiaobing Zhou commented on HDFS-10913: -- The v001 patch adds a brand-new injector instance to avoid side effects from the existing one.
{code}
final DataNodeFaultInjector dnInjector = new DataNodeFaultInjector();
DataNodeFaultInjector.set(dnInjector);
{code}
> Refactor BlockReceiver by introducing a fault injector to enhance testability of detecting slow mirrors
> --
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.8.0
> Reporter: Xiaobing Zhou
> Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as the threshold to detect slow mirrors. BlockReceiver only writes some warning logs. In order to better test the behavior of slow mirrors, fault injectors need to be introduced.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock
[ https://issues.apache.org/jira/browse/HDFS-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530343#comment-15530343 ] Lei (Eddy) Xu commented on HDFS-10896: -- The patch is mostly moving code to the new classes. +1 pending the checkstyle fix. Thanks [~xkrogen] > Move lock logging logic from FSNamesystem into FSNamesystemLock > --- > > Key: HDFS-10896 > URL: https://issues.apache.org/jira/browse/HDFS-10896 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Erik Krogen >Assignee: Erik Krogen > Labels: logging, namenode > Attachments: HDFS-10896.000.patch, HDFS-10896.001.patch, > HDFS-10896.002.patch > > > There are a number of tickets (HDFS-10742, HDFS-10817, HDFS-10713, this > subtask's story HDFS-10475) which are adding/improving logging/metrics around > the {{FSNamesystemLock}}. All of this is done in {{FSNamesystem}} right now, > which is polluting the namesystem with ThreadLocal variables, timing > counters, etc. which are only relevant to the lock itself and the number of > these increases as the logging/metrics become more sophisticated. It would be > best to move these all into FSNamesystemLock to keep the metrics/logging tied > directly to the item of interest. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10913) Refactor BlockReceiver by introducing a fault injector to enhance testability of detecting slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10913: - Attachment: HDFS-10913.001.patch
> Refactor BlockReceiver by introducing a fault injector to enhance testability of detecting slow mirrors
> --
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.8.0
> Reporter: Xiaobing Zhou
> Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as the threshold to detect slow mirrors. BlockReceiver only writes some warning logs. In order to better test the behavior of slow mirrors, fault injectors need to be introduced.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10919) Provide admin/debug tool to dump out info of a given block
[ https://issues.apache.org/jira/browse/HDFS-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HDFS-10919: - Target Version/s: 3.0.0-alpha2
> Provide admin/debug tool to dump out info of a given block
> --
> Key: HDFS-10919
> URL: https://issues.apache.org/jira/browse/HDFS-10919
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: hdfs
> Reporter: Yongjun Zhang
>
> We have fsck to find out the blocks associated with a file, which is nice. Sometimes, when we see trouble with a specific block, we'd like to collect info about this block, such as:
> * what file this block belongs to,
> * where the replicas of this block are located,
> * whether the block is EC coded,
> * if a block is EC coded, whether it's a data block or a code block,
> * if a block is EC coded, what's the codec,
> * if a block is EC coded, what's the block group,
> * for the block group, what are the other blocks.
> Creating this jira to provide such a util, as dfsadmin or a debug tool. Thanks.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10919) Provide admin/debug tool to dump out info of a given block
[ https://issues.apache.org/jira/browse/HDFS-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530311#comment-15530311 ] Yongjun Zhang commented on HDFS-10919: -- Many thanks [~kihwal]! I was not aware of this fsck switch -blockId. Just searched and found it in https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fsck and https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fsck, but it does not show in https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html. So it seems to be a new feature added in 2.7.2. Sure, let's extend it to provide more info.
> Provide admin/debug tool to dump out info of a given block
> --
> Key: HDFS-10919
> URL: https://issues.apache.org/jira/browse/HDFS-10919
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: hdfs
> Reporter: Yongjun Zhang
>
> We have fsck to find out the blocks associated with a file, which is nice. Sometimes, when we see trouble with a specific block, we'd like to collect info about this block, such as:
> * what file this block belongs to,
> * where the replicas of this block are located,
> * whether the block is EC coded,
> * if a block is EC coded, whether it's a data block or a code block,
> * if a block is EC coded, what's the codec,
> * if a block is EC coded, what's the block group,
> * for the block group, what are the other blocks.
> Creating this jira to provide such a util, as dfsadmin or a debug tool. Thanks.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10906) Add unit tests for Trash with HDFS encryption zones
[ https://issues.apache.org/jira/browse/HDFS-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530300#comment-15530300 ] Xiaoyu Yao commented on HDFS-10906: --- Agree. Regarding adding Kerberos-enabled trash-related unit tests with KMS/EncryptionZone, we can refactor TestSecureEncryptionZoneWithKMS to share the basic setup for Kerberos (Kerby-based MiniKDC) + KMS (MiniKMS) + MiniDFSCluster.
> Add unit tests for Trash with HDFS encryption zones
> ---
> Key: HDFS-10906
> URL: https://issues.apache.org/jira/browse/HDFS-10906
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: encryption
> Affects Versions: 2.8.0
> Reporter: Xiaoyu Yao
>
> The goal is to improve unit test coverage for HDFS trash with encryption zones, especially under a Kerberos environment. The current unit test TestEncryptionZones#testEncryptionZonewithTrash() has limited coverage of the non-Kerberos case.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10906) Add unit tests for Trash with HDFS encryption zones
[ https://issues.apache.org/jira/browse/HDFS-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530249#comment-15530249 ] Mingliang Liu commented on HDFS-10906: -- Thanks for the to-do list, [~xyao]. +1 for the proposal. It is well worth covering these cases in unit tests.
> Add unit tests for Trash with HDFS encryption zones
> ---
> Key: HDFS-10906
> URL: https://issues.apache.org/jira/browse/HDFS-10906
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: encryption
> Affects Versions: 2.8.0
> Reporter: Xiaoyu Yao
>
> The goal is to improve unit test coverage for HDFS trash with encryption zones, especially under a Kerberos environment. The current unit test TestEncryptionZones#testEncryptionZonewithTrash() has limited coverage of the non-Kerberos case.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10810) Setreplication removing block from under construction temporarily when batch IBR is enabled.
[ https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530236#comment-15530236 ] Mingliang Liu commented on HDFS-10810: -- Thanks [~brahmareddy]. It's true that [HDFS-10666] has not addressed all the places that use a fixed-time sleep for waiting. It is still an in-progress JIRA and needs a lot of work. If you can file/work on new sub-tasks, I'm happy to review. The intermittently failing ones are of high priority. The change in the v3 patch is overall good. I'd like to confirm my thought is correct: the last {{getUnderReplicatedBlocksCount()}} and {{getMissingBlocksCount()}} assertions are actually the final consistent state. That is, the UnderReplicatedBlocksCount will always be 1 and the MissingBlocksCount will always be 0 after they reach their respective values. If only there were a way to test this.
> Setreplication removing block from under construction temporarily when batch IBR is enabled.
> --
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810-003.patch, HDFS-10810.patch
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1.
> 2) One block is written and the file is closed without waiting for the IBR.
> 3) Setreplication is called immediately on the file.
> So till the finalized IBR is received, the block will not be added to {{neededReconstruction}}, since the following check will be {{false}} as the block is not marked as complete.
{code}
if (isNeededReconstruction(block, repl.liveReplicas())) {
  neededReconstruction.update(block, repl.liveReplicas(),
      repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(),
      curExpectedReplicas, curReplicasDelta, expectedReplicasDelta);
}
{code}
> Hence the block will not be marked as under-replicated.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
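The fixed-sleep removal discussed in this thread usually means polling for the final consistent state instead of sleeping a fixed amount (in Hadoop tests this is typically done with GenericTestUtils.waitFor). A minimal, self-contained sketch of the pattern — names and timings below are illustrative, not the actual Hadoop utility:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BooleanSupplier;

public class WaitForDemo {
    // Poll the condition every checkEveryMs until it holds or timeoutMs elapses,
    // instead of sleeping a fixed amount and hoping the state has converged.
    static boolean waitFor(BooleanSupplier condition, long checkEveryMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(checkEveryMs);
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicLong underReplicated = new AtomicLong(0);
        // Simulate the IBR arriving asynchronously some time later.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            underReplicated.set(1);
        }).start();
        boolean ok = waitFor(() -> underReplicated.get() == 1, 10, 2000);
        System.out.println("converged=" + ok);
    }
}
```

Because the test asserts on the converged value rather than after a fixed delay, it is robust to slow machines while finishing as soon as the state is reached.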
[jira] [Commented] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530234#comment-15530234 ] Hadoop QA commented on HDFS-10921: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 83m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStream | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10921 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830719/HDFS-10921.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1127f0d6ea1a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e19b37e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16911/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16911/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16911/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >
[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java
[ https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530221#comment-15530221 ] Xiaoyu Yao commented on HDFS-10690: --- [~fenghua_hu]: the patch v06 looks good. I plan to commit it by EOD tomorrow.
> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Affects Versions: 3.0.0-alpha2
> Reporter: Fenghua Hu
> Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch, HDFS-10690.003.patch, HDFS-10690.004.patch, HDFS-10690.005.patch, HDFS-10690.006.patch, ShortCircuitCache_LinkedMap.patch
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the cached replicas:
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new TreeMap<>();
> TreeMap employs a red-black tree for sorting. This isn't an issue when using a traditional HDD, but when using high-performance SSD/PCIe flash, the cost of inserting/removing an entry becomes considerable.
> To mitigate it, we designed a new list-based structure for replica tracking.
> The list is a double-linked FIFO. FIFO is time-based, thus insertion is a very low-cost operation. On the other hand, a list is not lookup-friendly. To address this issue, we introduce two references into the ShortCircuitReplica object:
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, a lookup is not needed when removing a replica from the list. We only need to modify its predecessor's and successor's references in the list.
> Our tests showed up to 15-50% performance improvement when using PCIe flash as the storage media.
> The original patch is against 2.6.4; now I am porting it to Hadoop trunk, and the patch will be posted soon.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
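The doubly-linked FIFO described in the issue can be modeled in isolation. The sketch below is a simplified stand-in (the real ShortCircuitReplica carries far more state): because each entry carries its own prev/next references, removal just relinks the neighbors — an O(1) operation, versus O(log n) in a TreeMap.

```java
public class IntrusiveFifoDemo {
    // Stand-in for ShortCircuitReplica: the entry itself holds the list links.
    static class Replica {
        final long id;
        Replica prev, next;
        Replica(long id) { this.id = id; }
    }

    static class Fifo {
        Replica head, tail;

        void append(Replica r) {            // O(1): new entries go at the tail
            r.prev = tail;
            if (tail != null) tail.next = r; else head = r;
            tail = r;
        }

        void remove(Replica r) {            // O(1): relink neighbors, no search
            if (r.prev != null) r.prev.next = r.next; else head = r.next;
            if (r.next != null) r.next.prev = r.prev; else tail = r.prev;
            r.prev = r.next = null;
        }
    }

    public static void main(String[] args) {
        Fifo fifo = new Fifo();
        Replica a = new Replica(1), b = new Replica(2), c = new Replica(3);
        fifo.append(a); fifo.append(b); fifo.append(c);
        fifo.remove(b);                     // middle removal without traversal
        StringBuilder order = new StringBuilder();
        for (Replica r = fifo.head; r != null; r = r.next) order.append(r.id);
        System.out.println("order=" + order);
    }
}
```

The trade-off is exactly as the issue states: insertion order must be the eviction order (time-based FIFO), since the list cannot be searched cheaply by key.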
[jira] [Commented] (HDFS-10911) Change edit log OP_UPDATE_BLOCKS to store delta blocks only.
[ https://issues.apache.org/jira/browse/HDFS-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530215#comment-15530215 ] Kihwal Lee commented on HDFS-10911: --- IIRC, the namenode didn't use to log all blocks all the time, but it became like that with the HA feature. You might want to revisit the reason and make sure things don't break with the proposed change.
> Change edit log OP_UPDATE_BLOCKS to store delta blocks only.
> --
> Key: HDFS-10911
> URL: https://issues.apache.org/jira/browse/HDFS-10911
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Affects Versions: 2.7.3, 3.0.0-alpha1
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
>
> Every time an HDFS client calls {{close}} or {{hflush}} on an open file, the NameNode enumerates all the blocks and stores them into the edit log (OP_UPDATE_BLOCKS). It would cause problems when the client is appending to a large file frequently (i.e., a WAL).
> Because HDFS is append-only, we can store only the blocks that have been changed (delta blocks) in the edit log.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
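Under the proposal, since HDFS files are append-only, the delta is computable from the previously logged block count: only the last logged block can have changed (grown), and any blocks after it are new. A self-contained sketch under that assumption — the method and names are hypothetical, not the actual patch:

```java
import java.util.List;

public class DeltaBlocksDemo {
    // For an append-only file, everything before the last logged block is
    // immutable, so the delta is: the last logged block (it may have grown)
    // plus any blocks appended after it.
    static List<String> deltaBlocks(List<String> allBlocks, int lastLoggedCount) {
        int from = Math.max(lastLoggedCount - 1, 0);
        return allBlocks.subList(from, allBlocks.size());
    }

    public static void main(String[] args) {
        List<String> blocks = List.of("blk_1", "blk_2", "blk_3", "blk_4");
        // Two blocks were logged previously: blk_2 may have changed,
        // blk_3 and blk_4 are new since the last OP_UPDATE_BLOCKS.
        System.out.println(deltaBlocks(blocks, 2));
    }
}
```

Kihwal's caveat applies to any such scheme: the HA edit-log tailing path must still be able to reconstruct the full block list from deltas alone.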
[jira] [Commented] (HDFS-10919) Provide admin/debug tool to dump out info of a given block
[ https://issues.apache.org/jira/browse/HDFS-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530196#comment-15530196 ] Kihwal Lee commented on HDFS-10919: --- Given a block ID, you can use {{fsck -blockId}} to get the file it belongs to and other information. We could extend it to provide more information.
> Provide admin/debug tool to dump out info of a given block
> --
> Key: HDFS-10919
> URL: https://issues.apache.org/jira/browse/HDFS-10919
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: hdfs
> Reporter: Yongjun Zhang
>
> We have fsck to find out the blocks associated with a file, which is nice. Sometimes, when we see trouble with a specific block, we'd like to collect info about this block, such as:
> * what file this block belongs to,
> * where the replicas of this block are located,
> * whether the block is EC coded,
> * if a block is EC coded, whether it's a data block or a code block,
> * if a block is EC coded, what's the codec,
> * if a block is EC coded, what's the block group,
> * for the block group, what are the other blocks.
> Creating this jira to provide such a util, as dfsadmin or a debug tool. Thanks.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
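For reference, the switch mentioned above is invoked like this (the block ID below is a made-up example; a real ID would come from a file's block listing or from datanode/namenode logs):

```shell
# Map a block ID back to the file it belongs to, plus replica locations and health.
hdfs fsck -blockId blk_1073741825
```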
[jira] [Updated] (HDFS-10810) Setreplication removing block from under construction temporarily when batch IBR is enabled.
[ https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-10810: Attachment: HDFS-10810-003.patch Uploaded the patch to remove the fixed-time sleep. It seems HDFS-10666 has not addressed all the classes; I can still see some classes having fixed-time sleeps, like {{TestDFSFinalize}}. Maybe I can raise separate JIRAs under HDFS-10666 for the missed ones.
> Setreplication removing block from under construction temporarily when batch IBR is enabled.
> --
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810-003.patch, HDFS-10810.patch
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1.
> 2) One block is written and the file is closed without waiting for the IBR.
> 3) Setreplication is called immediately on the file.
> So till the finalized IBR is received, the block will not be added to {{neededReconstruction}}, since the following check will be {{false}} as the block is not marked as complete.
{code}
if (isNeededReconstruction(block, repl.liveReplicas())) {
  neededReconstruction.update(block, repl.liveReplicas(),
      repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(),
      curExpectedReplicas, curReplicasDelta, expectedReplicasDelta);
}
{code}
> Hence the block will not be marked as under-replicated.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530148#comment-15530148 ] Anu Engineer commented on HDFS-10824: - I am good with the changes. Thanks [~cnauroth] and [~arpitagarwal] for the code reviews and [~xiaobingo] for the patch. > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, > HDFS-10824.002.patch, HDFS-10824.003.patch, HDFS-10824.004.patch, > HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed MiniDFSCluster#storageCapacities has no effects on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then call ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following are storage report for one node with two > volumes after I set capacity as 300 * 1024. Apparently, the capacity is not > changed. 
> adminState|DatanodeInfo$AdminStates (id=6861) > |blockPoolUsed|215192| > |cacheCapacity|0| > |cacheUsed|0| > |capacity|998164971520| > |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)| > |dependentHostNames|LinkedList (id=6863)| > |dfsUsed|215192| > |hostName|"127.0.0.1" (id=6864)| > |infoPort|64222| > |infoSecurePort|0| > |ipAddr|"127.0.0.1" (id=6865)| > |ipcPort|64223| > |lastUpdate|1472682790948| > |lastUpdateMonotonic|209605640| > |level|0| > |location|"/default-rack" (id=6866)| > |maintenanceExpireTimeInMS|0| > |parent|null| > |peerHostName|null| > |remaining|20486512640| > |softwareVersion|null| > |upgradeDomain|null| > |xceiverCount|1| > |xferAddr|"127.0.0.1:64220" (id=6855)| > |xferPort|64220| > [0]StorageReport (id=6856) > |blockPoolUsed|4096| > |capacity|499082485760| > |dfsUsed|4096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6869)| > [1]StorageReport (id=6859) > |blockPoolUsed|211096| > |capacity|499082485760| > |dfsUsed|211096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6872)| -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10810) Setreplication removing block from underconstrcution temporarily when batch IBR is enabled.
[ https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530131#comment-15530131 ] Brahma Reddy Battula commented on HDFS-10810: - Thanks [~szetszwo]. > Setreplication removing block from underconstrcution temporarily when batch > IBR is enabled. > > > Key: HDFS-10810 > URL: https://issues.apache.org/jira/browse/HDFS-10810 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-10810-002.patch, HDFS-10810.patch > > > 1) Batch IBR is enabled with the number of committed blocks allowed = 1 > 2) One block is written and the file is closed without waiting for the IBR > 3) Setreplication is called immediately on the file. > So until the finalized IBR is received, the block will not be added to > {{neededReconstruction}}, since the following check will be {{false}} as the block is > not marked as complete. > {code} > if (isNeededReconstruction(block, repl.liveReplicas())) { > neededReconstruction.update(block, repl.liveReplicas(), > repl.readOnlyReplicas(), repl.decommissionedAndDecommissioning(), > curExpectedReplicas, curReplicasDelta, expectedReplicasDelta); > }{code} > Hence the block will not be marked as under-replicated.
[jira] [Commented] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530075#comment-15530075 ] Hadoop QA commented on HDFS-10921: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 95m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestBlockStoragePolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10921 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830709/HDFS-10921.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fd2cca2e2c03 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e19b37e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16910/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16910/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16910/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >
[jira] [Commented] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530014#comment-15530014 ] Eric Badger commented on HDFS-10921: [~rushabh.shah], [~daryn], [~kihwal], do you think it's reasonable to change {{restartNameNodes()}} to wait for active by default? Lots of tests call this function and it could add non-negligible runtime to the tests. However, the argument could be made that the NN hasn't finished restarting until it is back out of safemode. So I'm wondering if we should keep the patch as is or if we should special-case fix TestDiskspaceQuotaUpdate to call restartNameNode() with waitActive == true. > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10921.001.patch, HDFS-10921.002.patch > > > Test fails intermittently because the NN is still in safe mode.
[jira] [Commented] (HDFS-10804) Use separate lock for ReplicaMap
[ https://issues.apache.org/jira/browse/HDFS-10804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530010#comment-15530010 ] Hadoop QA commented on HDFS-10804: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 82 unchanged - 0 fixed = 83 total (was 82) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 37s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10804 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830708/HDFS-10804-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a410a6335e2e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9b0fd01 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16909/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16909/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16909/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Use separate lock for ReplicaMap > > > Key: HDFS-10804 > URL: https://issues.apache.org/jira/browse/HDFS-10804 > Project: Hadoop HDFS > Issue Type: Improvement > Components:
[jira] [Updated] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10921: --- Status: Patch Available (was: Open) > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10921.001.patch, HDFS-10921.002.patch > > > Test fails intermittently because the NN is still in safe mode.
[jira] [Updated] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10921: --- Attachment: HDFS-10921.002.patch Looked deeper into this and saw that {{cluster = new MiniDFSCluster.Builder(conf).numDataNodes(REPLICATION).build();}} actually calls cluster.waitClusterUp(). The actual problem is that some tests call {{cluster.restartNameNodes()}}. There is an option to wait for the namenodes to become active, but it is currently set to false. I'm attaching a patch that sets this value to true so that the tests wait for the NN to get all the way back up before moving on to the next test. > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10921.001.patch, HDFS-10921.002.patch > > > Test fails intermittently because the NN is still in safe mode.
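The effect of passing waitActive = true can be sketched with a generic polling helper (plain JDK code; this is NOT the MiniDFSCluster API, and the class and method names are hypothetical):

```java
import java.util.function.BooleanSupplier;

// Block until a condition (e.g. "the NN is out of safe mode") holds or a
// timeout expires, polling at a fixed interval. This is the pattern a
// wait-for-active restart buys the tests.
public class WaitFor {
    public static boolean waitFor(BooleanSupplier condition, long timeoutMs,
                                  long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;  // timed out before the condition held
            }
            try {
                Thread.sleep(pollMs);  // back off between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition flips to true after ~200 ms, standing in for safe-mode exit.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 200,
                             5000, 50);
        System.out.println(ok); // true
    }
}
```

Polling with a deadline, rather than a fixed-length sleep, is also what keeps the added runtime small in the common case: the wait returns as soon as the condition holds.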
[jira] [Updated] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10921: --- Status: Open (was: Patch Available) > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10921.001.patch > > > Test fails intermittently because the NN is still in safe mode.
[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529771#comment-15529771 ] Hadoop QA commented on HDFS-9668: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 21s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 31s{color} | {color:orange} root: The patch generated 1 new + 262 unchanged - 14 fixed = 263 total (was 276) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 32s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 40s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}129m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestFileCorruption | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-9668 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830683/HDFS-9668-15.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 4c26f63822c8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9b0fd01 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16908/artifact/patchprocess/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16908/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HDFS-9509) Add new metrics for measuring datanode storage statistics
[ https://issues.apache.org/jira/browse/HDFS-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529840#comment-15529840 ] Jagadesh Kiran N commented on HDFS-9509: [~szetszwo], currently, along with {code} sendDataPacketBlockedOnNetworkNanosQuantiles {code} and {code} sendDataPacketTransferNanosQuantiles {code}, some more metrics are already in trunk, for example {code} ramDiskBlocksEvictionWindowMsQuantiles, ramDiskBlocksLazyPersistWindowMsQuantiles, packetAckRoundTripTimeNanosQuantiles, flushNanosQuantiles, fsyncNanosQuantiles {code}. Can you please suggest whether any other metrics are required? > Add new metrics for measuring datanode storage statistics > - > > Key: HDFS-9509 > URL: https://issues.apache.org/jira/browse/HDFS-9509 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze > > We already have sendDataPacketBlockedOnNetworkNanos and > sendDataPacketTransferNanos for the transferTo case. We should add more > metrics for the other cases.
[jira] [Updated] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10921: --- Status: Patch Available (was: Open) > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10921.001.patch > > > Test fails intermittently because the NN is still in safe mode.
[jira] [Updated] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10921: --- Attachment: HDFS-10921.001.patch Attaching patch that makes the cluster wait until the NN is active > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10921.001.patch > > > Test fails intermittently because the NN is still in safe mode.
[jira] [Created] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
Eric Badger created HDFS-10921: -- Summary: TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode Key: HDFS-10921 URL: https://issues.apache.org/jira/browse/HDFS-10921 Project: Hadoop HDFS Issue Type: Bug Reporter: Eric Badger Assignee: Eric Badger Test fails intermittently because the NN is still in safe mode.
[jira] [Commented] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529791#comment-15529791 ] Eric Badger commented on HDFS-10921: Stack trace of test failure {noformat} Cannot create directory /TestQuotaUpdate/testAppendOverTypeQuota. Name node is in safe mode. The reported blocks 1749 needs additional 251 blocks to reach the threshold 0.9990 of total blocks 2003. The number of live datanodes 4 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1372) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1359) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3004) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1080) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:637) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:823) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:771) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1805) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2425) {noformat} > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug 
>Reporter: Eric Badger >Assignee: Eric Badger > > Test fails intermittently because the NN is still in safe mode.
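The figures in the safe-mode message above are easy to reproduce: the NN stays in safe mode until the reported block count reaches totalBlocks * threshold. A minimal sketch, assuming simple truncation of the product (an assumption, not the exact NameNode code):

```java
// Reproduce the arithmetic of the safe-mode message: with 2003 total blocks
// and a 0.9990 threshold, 2000 reported blocks are required, so 1749 reported
// blocks leaves 251 to go.
public class SafeModeMath {
    static long additionalBlocksNeeded(long reported, long total, double threshold) {
        long required = (long) (total * threshold);  // blocks needed to leave safe mode
        return Math.max(0, required - reported);
    }

    public static void main(String[] args) {
        // "The reported blocks 1749 needs additional 251 blocks to reach the
        //  threshold 0.9990 of total blocks 2003."
        System.out.println(additionalBlocksNeeded(1749, 2003, 0.9990)); // 251
    }
}
```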
[jira] [Updated] (HDFS-10804) Use separate lock for ReplicaMap
[ https://issues.apache.org/jira/browse/HDFS-10804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fenghua Hu updated HDFS-10804: -- Attachment: (was: HDFS-10804-003.patch) > Use separate lock for ReplicaMap > > > Key: HDFS-10804 > URL: https://issues.apache.org/jira/browse/HDFS-10804 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Fenghua Hu >Assignee: Fenghua Hu >Priority: Minor > Attachments: HDFS-10804-001.patch, HDFS-10804-002.patch > > > In the current implementation, ReplicaMap takes an external lock for > synchronization. > In FsDatasetImpl#FsDatasetImpl(), the object used for synchronization > is the same lock object used by FsDatasetImpl routines, > and in private FsDatasetImpl#addVolume() the same lock is used for > synchronization as well. > {code} > ReplicaMap tempVolumeMap = new ReplicaMap(datasetLock); > {code} > We can potentially eliminate the heavyweight lock for synchronizing > ReplicaMap instances. If it's not necessary, this could reduce lock > contention on the datasetLock object and improve performance. > Could you please give me some suggestions? Thanks a lot! > Fenghua
[jira] [Updated] (HDFS-10804) Use separate lock for ReplicaMap
[ https://issues.apache.org/jira/browse/HDFS-10804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fenghua Hu updated HDFS-10804: -- Attachment: HDFS-10804-003.patch Re-attaching to trigger a Jenkins build. > Use separate lock for ReplicaMap > > > Key: HDFS-10804 > URL: https://issues.apache.org/jira/browse/HDFS-10804 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Fenghua Hu >Assignee: Fenghua Hu >Priority: Minor > Attachments: HDFS-10804-001.patch, HDFS-10804-002.patch, > HDFS-10804-003.patch > > > In the current implementation, ReplicaMap takes an external lock for > synchronization. > In FsDatasetImpl#FsDatasetImpl(), the object used for synchronization > is the same lock object used by FsDatasetImpl routines, > and in private FsDatasetImpl#addVolume() the same lock is used for > synchronization as well. > {code} > ReplicaMap tempVolumeMap = new ReplicaMap(datasetLock); > {code} > We can potentially eliminate the heavyweight lock for synchronizing > ReplicaMap instances. If it's not necessary, this could reduce lock > contention on the datasetLock object and improve performance. > Could you please give me some suggestions? Thanks a lot! > Fenghua
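The idea in the description can be sketched with plain JDK code (NOT Hadoop code; the class and field names are hypothetical): guard the replica map with its own lock instead of the dataset-wide lock, so short map operations stop contending with unrelated FsDatasetImpl work.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// A map with its own fine-grained lock. Callers never need to hold a
// heavyweight dataset-wide lock just to look up or register a replica.
public class ReplicaMapSketch {
    private final ReentrantLock mapLock = new ReentrantLock();  // map-only lock
    private final Map<Long, String> replicas = new HashMap<>();

    public void add(long blockId, String replica) {
        mapLock.lock();
        try {
            replicas.put(blockId, replica);
        } finally {
            mapLock.unlock();
        }
    }

    public String get(long blockId) {
        mapLock.lock();  // short critical section; the heavyweight dataset
        try {            // lock is never taken here
            return replicas.get(blockId);
        } finally {
            mapLock.unlock();
        }
    }

    public static void main(String[] args) {
        ReplicaMapSketch map = new ReplicaMapSketch();
        map.add(1L, "replica-on-vol1");
        System.out.println(map.get(1L)); // replica-on-vol1
    }
}
```

The trade-off, as the comments on this jira discuss, is correctness: any invariant that spans the map and other dataset state still needs the outer lock, so only operations touching the map alone can benefit.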
[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-10824: - Hadoop Flags: Reviewed +1 for patch 006. Thank you, Xiaobing. [~anu] or [~arpitagarwal], do you have any further comments? > MiniDFSCluster#storageCapacities has no effects on real capacity > > > Key: HDFS-10824 > URL: https://issues.apache.org/jira/browse/HDFS-10824 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, > HDFS-10824.002.patch, HDFS-10824.003.patch, HDFS-10824.004.patch, > HDFS-10824.005.patch, HDFS-10824.006.patch > > > It has been noticed that MiniDFSCluster#storageCapacities has no effect on real > capacity. It can be reproduced by explicitly setting storageCapacities and > then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to > compare results. The following is the storage report for one node with two > volumes after I set the capacity to 300 * 1024. Apparently, the capacity is not > changed. 
> adminState|DatanodeInfo$AdminStates (id=6861) > |blockPoolUsed|215192| > |cacheCapacity|0| > |cacheUsed|0| > |capacity|998164971520| > |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)| > |dependentHostNames|LinkedList (id=6863)| > |dfsUsed|215192| > |hostName|"127.0.0.1" (id=6864)| > |infoPort|64222| > |infoSecurePort|0| > |ipAddr|"127.0.0.1" (id=6865)| > |ipcPort|64223| > |lastUpdate|1472682790948| > |lastUpdateMonotonic|209605640| > |level|0| > |location|"/default-rack" (id=6866)| > |maintenanceExpireTimeInMS|0| > |parent|null| > |peerHostName|null| > |remaining|20486512640| > |softwareVersion|null| > |upgradeDomain|null| > |xceiverCount|1| > |xferAddr|"127.0.0.1:64220" (id=6855)| > |xferPort|64220| > [0]StorageReport (id=6856) > |blockPoolUsed|4096| > |capacity|499082485760| > |dfsUsed|4096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6869)| > [1]StorageReport (id=6859) > |blockPoolUsed|211096| > |capacity|499082485760| > |dfsUsed|211096| > |failed|false| > |remaining|10243256320| > |storage|DatanodeStorage (id=6872)|
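For illustration, the reporter's comparison can be reduced to a small self-contained check: the per-volume capacity configured through MiniDFSCluster#storageCapacities (300 * 1024) against the values the storage report actually returned (499082485760 for each volume). The CapacityCheck class below is hypothetical scaffolding, not Hadoop code.

```java
// Hypothetical helper mirroring the reporter's comparison: the bug is that
// the configured per-volume capacities never show up in the storage report,
// which keeps reporting the disk's real size.
class CapacityCheck {
    // Returns true only if every reported capacity matches the configured one.
    static boolean capacitiesApplied(long[] configured, long[] reported) {
        if (configured.length != reported.length) {
            return false;
        }
        for (int i = 0; i < configured.length; i++) {
            if (configured[i] != reported[i]) {
                return false;
            }
        }
        return true;
    }
}
```

With the numbers from the report above, capacitiesApplied(new long[]{300 * 1024, 300 * 1024}, new long[]{499082485760L, 499082485760L}) is false, which is the mismatch this issue describes.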
[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529666#comment-15529666 ] Hadoop QA commented on HDFS-9668: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 15s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 32s{color} | {color:orange} root: The patch generated 2 new + 262 unchanged - 14 fixed = 264 total (was 276) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 44s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 3s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}108m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-9668 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830681/HDFS-9668-14.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 057501cebdc5 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9b0fd01 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16907/artifact/patchprocess/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16907/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16907/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Optimize the
[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529510#comment-15529510 ] Hadoop QA commented on HDFS-9668: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 7s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 31s{color} | {color:orange} root: The patch generated 2 new + 262 unchanged - 14 fixed = 264 total (was 276) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 44s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 62m 1s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}114m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.net.TestClusterTopology | | | hadoop.ha.TestZKFailoverController | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-9668 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830669/HDFS-9668-14.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 68d8e7d4eefb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9b0fd01 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16906/artifact/patchprocess/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16906/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16906/testReport/ | | modules | C:
[jira] [Commented] (HDFS-10920) TestStorageMover#testNoSpaceDisk is failing intermittently
[ https://issues.apache.org/jira/browse/HDFS-10920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529427#comment-15529427 ] Hadoop QA commented on HDFS-10920: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 62m 49s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 86m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10920 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830663/HDFS-10920-00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d3429af91faa 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9b0fd01 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16905/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16905/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestStorageMover#testNoSpaceDisk is failing intermittently > -- > > Key: HDFS-10920 > URL: https://issues.apache.org/jira/browse/HDFS-10920 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-10920-00.patch > > > TestStorageMover#testNoSpaceDisk test case is failing frequently in the build. > References: > [HDFS-Build_16890|https://builds.apache.org/job/PreCommit-HDFS-Build/16890], >
[jira] [Updated] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingcheng Du updated HDFS-9668: --- Attachment: HDFS-9668-15.patch Upload a new patch V15 to fix the newly added checkstyle warning. > Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, > HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, > HDFS-9668-2.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, HDFS-9668-5.patch, > HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, HDFS-9668-9.patch, > execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in > SSD/RAMDISK, and all other files are stored in HDD), we observe many > long-time BLOCKED threads on FsDatasetImpl in DataNode. 
The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread > t@93336 >java.lang.Thread.State: BLOCKED > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:) > - waiting to lock <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread > t@93335 >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createFileExclusively(Native Method) > at java.io.File.createNewFile(File.java:1012) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286) > at > 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140) > - locked <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > {noformat} > We measured the execution of some operations in FsDatasetImpl during the > test. The following is the result. > !execution_time.png! > The operations of finalizeBlock, addBlock and createRbw on HDD under heavy > load take a really long time. > This means one slow finalizeBlock, addBlock or createRbw operation on a > slow storage can block all other such operations in the same DataNode, > especially in HBase when many wal/flusher/compactor are configured. > We need a finer-grained lock mechanism in a new FsDatasetImpl implementation > and users can choose the implementation by configuring > "dfs.datanode.fsdataset.factory" in DataNode. > We can implement the lock at either the storage level or the block level. --
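The storage-level direction mentioned at the end of the description can be sketched with a per-volume lock map, so that a slow createRbw/finalizeBlock on a saturated HDD volume no longer blocks the same operation on an SSD/RAMDISK volume. This is an illustrative simplification under assumed names (PerVolumeLocks, withVolumeLock), not the structure of the attached patches.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only: one lock per storage volume, created lazily,
// so operations on different volumes never contend with each other.
class PerVolumeLocks {
    private final ConcurrentHashMap<String, ReentrantLock> locks =
        new ConcurrentHashMap<>();

    // Runs op while holding only the lock of the named volume.
    void withVolumeLock(String storageId, Runnable op) {
        ReentrantLock lock =
            locks.computeIfAbsent(storageId, id -> new ReentrantLock());
        lock.lock();
        try {
            op.run();
        } finally {
            lock.unlock();
        }
    }
}
```

The trade-off is that any invariant spanning multiple volumes (or the whole dataset) would still need a coarser lock on top, which is why the block-level versus storage-level choice matters.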
[jira] [Updated] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingcheng Du updated HDFS-9668: --- Attachment: HDFS-9668-14.patch The failure of unit tests should not be related to this patch. Reattach the patch V14 to run the Hadoop QA again. > Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, > HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-2.patch, HDFS-9668-3.patch, > HDFS-9668-4.patch, HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, > HDFS-9668-8.patch, HDFS-9668-9.patch, execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in > SSD/RAMDISK, and all other files are stored in HDD), we observe many > long-time BLOCKED threads on FsDatasetImpl in DataNode. 
The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread > t@93336 >java.lang.Thread.State: BLOCKED > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:) > - waiting to lock <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread > t@93335 >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createFileExclusively(Native Method) > at java.io.File.createNewFile(File.java:1012) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286) > at > 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140) > - locked <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > {noformat} > We measured the execution of some operations in FsDatasetImpl during the > test. The following is the result. > !execution_time.png! > The operations of finalizeBlock, addBlock and createRbw on HDD under heavy > load take a really long time. > This means one slow finalizeBlock, addBlock or createRbw operation on a > slow storage can block all other such operations in the same DataNode, > especially in HBase when many wal/flusher/compactor are configured. > We need a finer-grained lock mechanism in a new FsDatasetImpl implementation > and users can choose the implementation by configuring > "dfs.datanode.fsdataset.factory" in DataNode. > We can implement the lock at either the storage level or the block level.
[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529304#comment-15529304 ] Hadoop QA commented on HDFS-9668: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 17s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 32s{color} | {color:orange} root: The patch generated 2 new + 262 unchanged - 14 fixed = 264 total (was 276) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 59s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}110m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.net.TestClusterTopology | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS | | | hadoop.hdfs.server.namenode.ha.TestHASafeMode | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-9668 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12830647/HDFS-9668-14.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 77bb5c7e1cec 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 03f519a | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/16903/artifact/patchprocess/diff-checkstyle-root.txt | | unit |