[jira] [Updated] (HDFS-9766) TestDataNodeMetrics#testDataNodeTimeSpend fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HDFS-9766:
----------------------------
    Attachment: HDFS-9766.01.patch

> TestDataNodeMetrics#testDataNodeTimeSpend fails intermittently
> --------------------------------------------------------------
>
>                 Key: HDFS-9766
>                 URL: https://issues.apache.org/jira/browse/HDFS-9766
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.0.0
>            Reporter: Mingliang Liu
>         Attachments: HDFS-9766.01.patch
>
> *Stacktrace*
> {code}
> java.lang.AssertionError: null
> 	at org.junit.Assert.fail(Assert.java:86)
> 	at org.junit.Assert.assertTrue(Assert.java:41)
> 	at org.junit.Assert.assertTrue(Assert.java:52)
> 	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend(TestDataNodeMetrics.java:289)
> {code}
> See recent builds:
> * https://builds.apache.org/job/PreCommit-HDFS-Build/14393/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeMetrics/testDataNodeTimeSpend/
> * https://builds.apache.org/job/PreCommit-HDFS-Build/14317/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HDFS-9766) TestDataNodeMetrics#testDataNodeTimeSpend fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166855#comment-15166855 ]

Xiao Chen commented on HDFS-9766:
---------------------------------

Thanks [~liuml07] for creating this and analyzing the cause. I hit the same failure in [a precommit in HDFS-9804|https://issues.apache.org/jira/browse/HDFS-9804?focusedCommentId=15166671&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15166671]. I think we can use a {{waitFor}} to fix the flakiness caused by the fixed sleep. I also think we need a better way than {{for (int x = 0; x < 50; x++)}} to make sure the metrics have in fact increased. Attached patch 1 in this direction. Please see if it makes sense to you.

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
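The {{waitFor}} suggestion can be sketched as follows. This is a minimal, self-contained model of the polling idiom behind Hadoop's {{GenericTestUtils.waitFor}}, not the actual patch; all names are illustrative. The point: poll until the metric visibly increases or a deadline passes, instead of sleeping a fixed time and asserting.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Sketch of the waitFor idea (modeled on GenericTestUtils.waitFor, but
// self-contained; class and variable names here are illustrative only).
public class WaitForSketch {

    /** Poll check every intervalMs until it returns true or timeoutMs elapses. */
    static boolean waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.get()) {
                return true;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return check.get();  // one final check at (or after) the deadline
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a DataNode "time spent" metric bumped by another thread.
        AtomicLong totalReadTime = new AtomicLong(0);
        long before = totalReadTime.get();

        Thread io = new Thread(() -> totalReadTime.addAndGet(7));  // simulated I/O
        io.start();

        // The metric increase is asserted without a brittle fixed-length sleep.
        System.out.println(waitFor(() -> totalReadTime.get() > before, 10, 5000));
        io.join();
    }
}
```

Unlike a fixed {{Thread.sleep}}, the test finishes as soon as the condition holds, and only fails after the full timeout when the condition genuinely never becomes true.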
[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval
[ https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166823#comment-15166823 ]

Andrew Wang commented on HDFS-9782:
-----------------------------------

Based on numbers I've seen, the NN can do a few hundred files per second, so throwing a couple hundred or thousand at the NN all at once will result in a multi-second blip. A little fuzz goes a long way here, so if you're cool with 1min or even 30s, I think that's sufficient. Speaking from experience, even big cluster operators aren't necessarily more savvy about Hadoop config keys.

There is also a meta point about timeliness. There's always going to be inaccuracy in data collection (NTP fail, GC pause, dog chewed an Ethernet cable), and this needs to be accounted for when processing. This is like the famous "lambda architecture" from the streaming world; handle late data in a rollup.

> RollingFileSystemSink should have configurable roll interval
> ------------------------------------------------------------
>
>                 Key: HDFS-9782
>                 URL: https://issues.apache.org/jira/browse/HDFS-9782
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Daniel Templeton
>            Assignee: Daniel Templeton
>         Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, HDFS-9782.003.patch, HDFS-9782.004.patch
>
> Right now it defaults to rolling at the top of every hour. Instead that interval should be configurable. The interval should also allow for some play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS tests. If it turns out not to, I'll move it into common instead.

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
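The "little fuzz" idea can be sketched as below. This is a hypothetical helper, not the RollingFileSystemSink API: pick the next interval boundary, then add a random per-host offset within a configured fuzz window, so thousands of hosts don't flush at the NN simultaneously.

```java
import java.util.Random;

// Illustrative sketch of a fuzzed roll schedule (hypothetical names;
// not the RollingFileSystemSink implementation).
public class RollScheduleSketch {

    /**
     * Next roll time: the next interval boundary after nowMs,
     * plus a random offset in [0, fuzzMs).
     */
    static long nextRollTime(long nowMs, long intervalMs, long fuzzMs, Random rng) {
        long boundary = ((nowMs / intervalMs) + 1) * intervalMs;  // next top-of-interval
        long offset = fuzzMs > 0 ? (long) (rng.nextDouble() * fuzzMs) : 0;
        return boundary + offset;
    }

    public static void main(String[] args) {
        long hour = 3_600_000L;        // 1h roll interval (the current default)
        long fuzz = 60_000L;           // 1min of fuzz, as suggested above
        long now = 5 * hour + 1234;    // some time shortly after an hour boundary
        long next = nextRollTime(now, hour, fuzz, new Random());
        // The roll lands somewhere in [6h, 6h + 1min), different per host.
        System.out.println(next >= 6 * hour && next < 6 * hour + fuzz);
    }
}
```

Because each host draws its own offset, a fleet of hosts spreads its file creations across the fuzz window instead of producing one multi-second blip at the top of the hour.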
[jira] [Commented] (HDFS-9683) DiskBalancer : Add cancelPlan implementation
[ https://issues.apache.org/jira/browse/HDFS-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166793#comment-15166793 ]

Hadoop QA commented on HDFS-9683:
---------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 11s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 9m 50s | HDFS-1312 passed |
| +1 | compile | 0m 44s | HDFS-1312 passed with JDK v1.8.0_72 |
| +1 | compile | 0m 47s | HDFS-1312 passed with JDK v1.7.0_95 |
| +1 | checkstyle | 0m 28s | HDFS-1312 passed |
| +1 | mvnsite | 1m 1s | HDFS-1312 passed |
| +1 | mvneclipse | 0m 17s | HDFS-1312 passed |
| +1 | findbugs | 2m 17s | HDFS-1312 passed |
| +1 | javadoc | 1m 20s | HDFS-1312 passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 59s | HDFS-1312 passed with JDK v1.7.0_95 |
| +1 | mvninstall | 0m 54s | the patch passed |
| +1 | compile | 0m 48s | the patch passed with JDK v1.8.0_72 |
| +1 | javac | 0m 48s | the patch passed |
| +1 | compile | 0m 46s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 0m 46s | the patch passed |
| -1 | checkstyle | 0m 23s | hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 181 unchanged - 0 fixed = 182 total (was 181) |
| +1 | mvnsite | 0m 57s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| +1 | findbugs | 2m 25s | the patch passed |
| +1 | javadoc | 1m 13s | the patch passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 59s | the patch passed with JDK v1.7.0_95 |
| -1 | unit | 66m 30s | hadoop-hdfs in the patch failed with JDK v1.8.0_72. |
| -1 | unit | 54m 38s | hadoop-hdfs in the patch failed with JDK v1.7.0_95. |
| -1 | asflicense | 0m 24s | Patch generated 2 ASF License warnings. |
|    |  | 152m 26s |  |

|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|  | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|  | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness |
|  | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|  | hadoop.hdfs.TestFileAppend |
|  | hadoop.hdfs.server.balancer.TestBalancer |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness |
[jira] [Commented] (HDFS-9804) Allow long-running Balancer to login with keytab
[ https://issues.apache.org/jira/browse/HDFS-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166788#comment-15166788 ]

Xiao Chen commented on HDFS-9804:
---------------------------------

The checkstyle is {{DFSConfigKeys}} over 80 chars, as before. The failed tests are unrelated and pass locally; {{TestDataNodeMetrics}} failed on both JDKs with the same error, and HDFS-9766 tracks that intermittent test.

> Allow long-running Balancer to login with keytab
> ------------------------------------------------
>
>                 Key: HDFS-9804
>                 URL: https://issues.apache.org/jira/browse/HDFS-9804
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>              Labels: supportability
>         Attachments: HDFS-9804.01.patch, HDFS-9804.02.patch, HDFS-9804.03.patch
>
> From the discussion of HDFS-9698, it might be nice to allow the balancer to run as a daemon and login from a keytab.

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HDFS-9710) Change DN to send block receipt IBRs in batches
[ https://issues.apache.org/jira/browse/HDFS-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166776#comment-15166776 ]

Hadoop QA commented on HDFS-9710:
---------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 11s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. |
| +1 | mvninstall | 6m 33s | trunk passed |
| +1 | compile | 0m 40s | trunk passed with JDK v1.8.0_72 |
| +1 | compile | 0m 40s | trunk passed with JDK v1.7.0_95 |
| +1 | checkstyle | 0m 31s | trunk passed |
| +1 | mvnsite | 0m 53s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 52s | trunk passed |
| +1 | javadoc | 1m 6s | trunk passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 47s | trunk passed with JDK v1.7.0_95 |
| +1 | mvninstall | 0m 44s | the patch passed |
| +1 | compile | 0m 38s | the patch passed with JDK v1.8.0_72 |
| +1 | javac | 0m 38s | the patch passed |
| +1 | compile | 0m 37s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 0m 37s | the patch passed |
| -1 | checkstyle | 0m 28s | hadoop-hdfs-project/hadoop-hdfs: patch generated 2 new + 907 unchanged - 2 fixed = 909 total (was 909) |
| +1 | mvnsite | 0m 49s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 8 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| +1 | findbugs | 2m 8s | the patch passed |
| +1 | javadoc | 0m 59s | the patch passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 42s | the patch passed with JDK v1.7.0_95 |
| +1 | unit | 54m 56s | hadoop-hdfs in the patch passed with JDK v1.8.0_72. |
| -1 | unit | 57m 34s | hadoop-hdfs in the patch failed with JDK v1.7.0_95. |
| +1 | asflicense | 0m 21s | Patch does not generate ASF License warnings. |
|    |  | 137m 34s |  |

|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|  | hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs |
|  | hadoop.hdfs.TestRollingUpgrade |
| JDK v1.7.0_95 Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12789852/h9710_20160224.patch |
| JIRA Issue | HDFS-9710 |
| Optional Tests | asflicense compile javac javadoc
[jira] [Created] (HDFS-9857) Erasure Coding: Rename replication-based names in BlockManager to more generic
Rakesh R created HDFS-9857:
-------------------------------

             Summary: Erasure Coding: Rename replication-based names in BlockManager to more generic
                 Key: HDFS-9857
                 URL: https://issues.apache.org/jira/browse/HDFS-9857
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Rakesh R
            Assignee: Rakesh R

The idea of this jira is to rename the following entities in BlockManager:
- {{UnderReplicatedBlocks}} to {{LowRedundancyBlocks}}
- {{PendingReplicationBlocks}} to {{PendingReconstructionBlocks}}
- {{neededReplications}} to {{neededReconstruction}}
- {{excessReplicateMap}} to {{extraRedundancyMap}}

Thanks [~zhz], [~andrew.wang] for the useful [discussions|https://issues.apache.org/jira/browse/HDFS-7955?focusedCommentId=15149406&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15149406]

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166764#comment-15166764 ]

Hadoop QA commented on HDFS-9427:
---------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 13s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|  0 | mvndep | 0m 29s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 28s | trunk passed |
| +1 | compile | 6m 19s | trunk passed with JDK v1.8.0_72 |
| +1 | compile | 6m 48s | trunk passed with JDK v1.7.0_95 |
| +1 | checkstyle | 1m 13s | trunk passed |
| +1 | mvnsite | 4m 24s | trunk passed |
| +1 | mvneclipse | 1m 42s | trunk passed |
| +1 | findbugs | 8m 16s | trunk passed |
| +1 | javadoc | 3m 26s | trunk passed with JDK v1.8.0_72 |
| +1 | javadoc | 4m 33s | trunk passed with JDK v1.7.0_95 |
|  0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 29s | the patch passed |
| +1 | compile | 5m 46s | the patch passed with JDK v1.8.0_72 |
| +1 | javac | 5m 46s | the patch passed |
| +1 | compile | 6m 44s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 6m 44s | the patch passed |
| -1 | checkstyle | 1m 10s | root: patch generated 5 new + 576 unchanged - 5 fixed = 581 total (was 581) |
| +1 | mvnsite | 4m 26s | the patch passed |
| +1 | mvneclipse | 1m 45s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) with tabs. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | findbugs | 9m 51s | the patch passed |
| +1 | javadoc | 3m 31s | the patch passed with JDK v1.8.0_72 |
| +1 | javadoc | 4m 36s | the patch passed with JDK v1.7.0_95 |
| +1 | unit | 7m 31s | hadoop-common in the patch passed with JDK v1.8.0_72. |
| +1 | unit | 0m 50s | hadoop-hdfs-client in the patch passed with JDK v1.8.0_72. |
| -1 | unit | 53m 25s | hadoop-hdfs in the patch failed with JDK v1.8.0_72. |
| +1 | unit | 0m 55s | hadoop-yarn-registry in the patch passed with JDK v1.8.0_72. |
| +1 | unit | 1m 54s | hadoop-mapreduce-client-core in
[jira] [Commented] (HDFS-7298) HDFS may honor socket timeout configuration
[ https://issues.apache.org/jira/browse/HDFS-7298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166758#comment-15166758 ]

Hadoop QA commented on HDFS-7298:
---------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 13s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 6m 47s | trunk passed |
| +1 | compile | 0m 37s | trunk passed with JDK v1.8.0_72 |
| +1 | compile | 0m 40s | trunk passed with JDK v1.7.0_95 |
| +1 | checkstyle | 0m 23s | trunk passed |
| +1 | mvnsite | 0m 51s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 54s | trunk passed |
| +1 | javadoc | 1m 5s | trunk passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 51s | trunk passed with JDK v1.7.0_95 |
| +1 | mvninstall | 0m 55s | the patch passed |
| +1 | compile | 0m 37s | the patch passed with JDK v1.8.0_72 |
| +1 | javac | 0m 37s | the patch passed |
| +1 | compile | 0m 39s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 0m 39s | the patch passed |
| -1 | checkstyle | 0m 21s | hadoop-hdfs-project/hadoop-hdfs: patch generated 4 new + 237 unchanged - 1 fixed = 241 total (was 238) |
| +1 | mvnsite | 0m 48s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 2m 3s | the patch passed |
| +1 | javadoc | 1m 10s | the patch passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 45s | the patch passed with JDK v1.7.0_95 |
| -1 | unit | 70m 47s | hadoop-hdfs in the patch failed with JDK v1.8.0_72. |
| -1 | unit | 61m 35s | hadoop-hdfs in the patch failed with JDK v1.7.0_95. |
| +1 | asflicense | 0m 21s | Patch does not generate ASF License warnings. |
|    |  | 157m 51s |  |

|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|  | hadoop.hdfs.TestFileAppend |
|  | hadoop.hdfs.server.balancer.TestBalancer |
| JDK v1.8.0_72 Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAFsck |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL |
[jira] [Commented] (HDFS-9838) Refactor the excessReplicateMap to a class
[ https://issues.apache.org/jira/browse/HDFS-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166706#comment-15166706 ]

Hudson commented on HDFS-9838:
------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #9367 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9367/])
HDFS-9838. Refactor the excessReplicateMap to a class. (szetszwo: rev 6979cbfc1f4c28440816b56f5624765872b0be49)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNodeCount.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ExcessReplicaMap.java

> Refactor the excessReplicateMap to a class
> ------------------------------------------
>
>                 Key: HDFS-9838
>                 URL: https://issues.apache.org/jira/browse/HDFS-9838
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>             Fix For: 3.0.0
>
>         Attachments: h9838_20160219.patch, h9838_20160222.patch, h9838_20160222b.patch, h9838_20160224.patch
>
> There is a lot of code duplication for accessing the excessReplicateMap in BlockManager. Let's refactor the related code into a class.

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
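The shape of this refactoring can be illustrated with a small sketch. This is illustrative only, not the committed ExcessReplicaMap.java: the raw datanode-UUID-to-blocks map and its size bookkeeping move behind one class, so BlockManager call sites stop duplicating the add/remove/cleanup logic.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the refactoring idea (not the committed class):
// encapsulate Map<datanode UUID, Set<block>> plus its bookkeeping.
public class ExcessReplicaMapSketch {
    private final Map<String, Set<Long>> map = new HashMap<>(); // UUID -> block IDs
    private long size = 0;  // total excess replicas, maintained in one place

    public synchronized boolean add(String dnUuid, long blockId) {
        Set<Long> set = map.computeIfAbsent(dnUuid, k -> new HashSet<>());
        boolean added = set.add(blockId);
        if (added) size++;
        return added;
    }

    public synchronized boolean remove(String dnUuid, long blockId) {
        Set<Long> set = map.get(dnUuid);
        if (set == null || !set.remove(blockId)) return false;
        if (set.isEmpty()) map.remove(dnUuid);  // don't leak empty sets
        size--;
        return true;
    }

    public synchronized boolean contains(String dnUuid, long blockId) {
        Set<Long> set = map.get(dnUuid);
        return set != null && set.contains(blockId);
    }

    public synchronized long size() { return size; }
}
```

With this shape, the empty-set cleanup and the size counter can no longer drift apart across call sites, which is the kind of duplication the jira description mentions.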
[jira] [Commented] (HDFS-9831) Document webhdfs retry configuration keys introduced by HDFS-5219/HDFS-5122
[ https://issues.apache.org/jira/browse/HDFS-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166699#comment-15166699 ]

Hadoop QA commented on HDFS-9831:
---------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 15s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 7m 25s | trunk passed |
| +1 | compile | 0m 47s | trunk passed with JDK v1.8.0_72 |
| +1 | compile | 0m 43s | trunk passed with JDK v1.7.0_95 |
| +1 | mvnsite | 0m 54s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | javadoc | 1m 8s | trunk passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 50s | trunk passed with JDK v1.7.0_95 |
| +1 | mvninstall | 0m 44s | the patch passed |
| +1 | compile | 0m 40s | the patch passed with JDK v1.8.0_72 |
| +1 | javac | 0m 40s | the patch passed |
| +1 | compile | 0m 38s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 0m 38s | the patch passed |
| +1 | mvnsite | 0m 50s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | xml | 0m 0s | The patch has no ill-formed XML file. |
| +1 | javadoc | 1m 8s | the patch passed with JDK v1.8.0_72 |
| +1 | javadoc | 1m 43s | the patch passed with JDK v1.7.0_95 |
| -1 | unit | 53m 26s | hadoop-hdfs in the patch failed with JDK v1.8.0_72. |
| -1 | unit | 50m 50s | hadoop-hdfs in the patch failed with JDK v1.7.0_95. |
| +1 | asflicense | 0m 21s | Patch does not generate ASF License warnings. |
|    |  | 125m 39s |  |

|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.hdfs.TestFileCreationDelete |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12789834/HDFS-9831.001.patch |
| JIRA Issue | HDFS-9831 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
| uname | Linux 4d2e4c0ef57b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dbbfc58 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_72 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| unit |
[jira] [Updated] (HDFS-9838) Refactor the excessReplicateMap to a class
[ https://issues.apache.org/jira/browse/HDFS-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-9838: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) The failed test is not related. Thanks Jing for reviewing the patch. I have committed this. > Refactor the excessReplicateMap to a class > -- > > Key: HDFS-9838 > URL: https://issues.apache.org/jira/browse/HDFS-9838 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Fix For: 3.0.0 > > Attachments: h9838_20160219.patch, h9838_20160222.patch, > h9838_20160222b.patch, h9838_20160224.patch > > > There is a lot of code duplication for accessing the excessReplicateMap in > BlockManager. Let's refactor the related code into a class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval
[ https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166672#comment-15166672 ] Daniel Templeton commented on HDFS-9782: Thanks, [~andrew.wang]! HADOOP-8608 appears to be exactly what we want. Fair point about needing to deal with a GC pause, but having the offset on by default still strikes me as a potentially nasty surprise. I'm trying to think about this in terms of customer experience. We know that there's no noticeable performance impact at 200 nodes. We're just assuming that we'll run into issues at larger scale, but we don't actually know for sure. It just seems wrong to me to add this little bit of unexpected uncertainty into the mix for all users when we suspect that a handful of users might run into the issue. Also consider that an admin running a 1000-node cluster is going to be a bit more careful when changing configuration settings than someone with a 10-node cluster. The big cluster's admin is less likely to be surprised by needing to turn on the offset than the little cluster's admin will be about it being on by default. The above begs the question about what the default should be. If we think it's 1 second or even 10 seconds, I'll stop arguing now and turn it on by default. I assumed we'd want something more like 1 minute. At a minute, that's long enough that some user will trip over it and be confused. I don't think more than 1 minute is a reasonable default for several reasons, one of which is that it could interact badly with a short roll interval. (I don't think it makes sense to set the offset as a percentage of the roll interval, because the need for the offset is independent of the length of the roll interval.) bq. What kind of timeliness do we really require? Would it be acceptable if we did not synchronize rolling, but rolled more frequently? 
The use case behind this JIRA requires log rolls at the top of every hour with a known time by which the logs are guaranteed to be available. Having a 1 minute offset as the default is fine for this use case. The discussion we're having here is about all the use cases we haven't seen yet. Sorry to suck up time on such a trivial detail, but I think it's worth getting right. > RollingFileSystemSink should have configurable roll interval > > > Key: HDFS-9782 > URL: https://issues.apache.org/jira/browse/HDFS-9782 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, > HDFS-9782.003.patch, HDFS-9782.004.patch > > > Right now it defaults to rolling at the top of every hour. Instead, that > interval should be configurable. The interval should also allow for some > play so that all hosts don't try to flush their files simultaneously. > I'm filing this in HDFS because I suspect it will involve touching the HDFS > tests. If it turns out not to, I'll move it into common instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
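To make the trade-off being discussed concrete, here is a minimal sketch of how a configurable roll interval with a bounded random offset might be computed. The class, method, and parameter names are hypothetical illustrations, not the actual RollingFileSystemSink code.

```java
import java.util.Random;

/** Hypothetical sketch of roll scheduling with a bounded random offset. */
class RollSchedulerSketch {

    /**
     * Next roll time: the next multiple of the roll interval after 'now',
     * plus a per-host random offset in [0, maxOffsetMillis). With a zero
     * offset every host rolls at exactly the same instant (e.g. top of
     * the hour), which is the behavior debated above.
     */
    static long nextRollMillis(long nowMillis, long intervalMillis,
                               long maxOffsetMillis, Random rng) {
        long nextBoundary = ((nowMillis / intervalMillis) + 1) * intervalMillis;
        long offset = maxOffsetMillis > 0
                ? (long) (rng.nextDouble() * maxOffsetMillis)
                : 0L;
        return nextBoundary + offset;
    }
}
```

With maxOffsetMillis set to 0 (the surprise-free default argued for above), a 1-hour interval rolls exactly on the hour boundary; a 1-minute maximum offset would delay each host's roll by up to 60 seconds.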
[jira] [Commented] (HDFS-9804) Allow long-running Balancer to login with keytab
[ https://issues.apache.org/jira/browse/HDFS-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166671#comment-15166671 ] Hadoop QA commented on HDFS-9804: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 9s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 2m 48s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 46s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 9s {color} | {color:red} root: patch generated 3 new + 527 unchanged - 0 fixed = 530 total (was 527) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 43s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 36s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 49s {color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 51s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
[jira] [Commented] (HDFS-9838) Refactor the excessReplicateMap to a class
[ https://issues.apache.org/jira/browse/HDFS-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1513#comment-1513 ] Hadoop QA commented on HDFS-9838: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 9s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 251 unchanged - 2 fixed = 252 total (was 253) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 9s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 37s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 145m 50s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_72 Failed junit tests | hadoop.hdfs.TestRenameWhileOpen | | | hadoop.hdfs.TestDFSUpgradeFromImage | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12789817/h9838_20160224.patch | | JIRA Issue | HDFS-9838 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 94d0a90673bb
[jira] [Updated] (HDFS-9683) DiskBalancer : Add cancelPlan implementation
[ https://issues.apache.org/jira/browse/HDFS-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9683: --- Attachment: (was: HDFS-9683-HDFS-1312.001.patch) > DiskBalancer : Add cancelPlan implementation > > > Key: HDFS-9683 > URL: https://issues.apache.org/jira/browse/HDFS-9683 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9683-HDFS-1312.001.patch > > > Add datanode side code for Cancel Plan -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9683) DiskBalancer : Add cancelPlan implementation
[ https://issues.apache.org/jira/browse/HDFS-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9683: --- Status: Patch Available (was: Open) > DiskBalancer : Add cancelPlan implementation > > > Key: HDFS-9683 > URL: https://issues.apache.org/jira/browse/HDFS-9683 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9683-HDFS-1312.001.patch > > > Add datanode side code for Cancel Plan -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9683) DiskBalancer : Add cancelPlan implementation
[ https://issues.apache.org/jira/browse/HDFS-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9683: --- Attachment: HDFS-9683-HDFS-1312.001.patch > DiskBalancer : Add cancelPlan implementation > > > Key: HDFS-9683 > URL: https://issues.apache.org/jira/browse/HDFS-9683 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9683-HDFS-1312.001.patch > > > Add datanode side code for Cancel Plan -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9710) Change DN to send block receipt IBRs in batches
[ https://issues.apache.org/jira/browse/HDFS-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-9710: -- Attachment: h9710_20160224.patch h9710_20160224.patch: adds a test. The new test shows that the number of IBRs greatly decreased, from ~550 calls to 41 calls. - Batch mode IBR {code} 2016-02-24 18:46:04,197 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:runIbrTest(134)) - batchModeIbr=true, duration=6.487ms, createFileTime=68.188ms, verifyFileTime=35.348ms 2016-02-24 18:46:04,351 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:logIbrCounts(154)) - 127.0.0.1:57961: IncrementalBlockReportsNumOps=41 2016-02-24 18:46:04,362 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:logIbrCounts(154)) - 127.0.0.1:57965: IncrementalBlockReportsNumOps=41 2016-02-24 18:46:04,374 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:logIbrCounts(154)) - 127.0.0.1:57970: IncrementalBlockReportsNumOps=41 2016-02-24 18:46:04,386 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:logIbrCounts(154)) - 127.0.0.1:57974: IncrementalBlockReportsNumOps=41 {code} - Immediate IBR {code} 2016-02-24 18:46:10,748 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:runIbrTest(134)) - batchModeIbr=false, duration=4.678ms, createFileTime=67.169ms, verifyFileTime=7.548ms 2016-02-24 18:46:10,756 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:logIbrCounts(154)) - 127.0.0.1:61066: IncrementalBlockReportsNumOps=553 2016-02-24 18:46:10,764 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:logIbrCounts(154)) - 127.0.0.1:61071: IncrementalBlockReportsNumOps=561 2016-02-24 18:46:10,772 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:logIbrCounts(154)) - 127.0.0.1:61075: IncrementalBlockReportsNumOps=558 2016-02-24 18:46:10,780 [main] INFO datanode.TestBatchIbr (TestBatchIbr.java:logIbrCounts(154)) - 127.0.0.1:61079: IncrementalBlockReportsNumOps=558 {code} > Change DN to send block receipt IBRs in batches > --- > > Key: HDFS-9710 > 
URL: https://issues.apache.org/jira/browse/HDFS-9710 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: h9710_20160201.patch, h9710_20160205.patch, > h9710_20160216.patch, h9710_20160216b.patch, h9710_20160217.patch, > h9710_20160219.patch, h9710_20160224.patch > > > When a DN has received a block, it immediately sends a block receipt IBR RPC > to NN for reporting the block. Even if a DN has received multiple blocks at > the same time, it still sends multiple RPCs. It does not scale well since NN > has to process a huge number of RPCs when many DNs receiving many blocks at > the same time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
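The effect measured above can be illustrated with a minimal, self-contained sketch of the batching idea (hypothetical names; not the actual DataNode implementation): queue receipts as blocks arrive and send them in one RPC per reporting cycle instead of one RPC per block.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: batch block-receipt IBRs instead of one RPC per block. */
class IbrBatcherSketch {
    private final List<String> pending = new ArrayList<>();
    private int rpcCount = 0;

    /** A block was received: queue the receipt rather than reporting immediately. */
    void blockReceived(String blockId) {
        pending.add(blockId);
    }

    /** Reporting cycle (e.g. at heartbeat): one RPC carries all queued receipts. */
    void flush() {
        if (!pending.isEmpty()) {
            rpcCount++; // stands in for a single batched report RPC to the NN
            pending.clear();
        }
    }

    int rpcCount() {
        return rpcCount;
    }
}
```

Queuing hundreds of receipts and flushing once per cycle yields far fewer RPCs for the NameNode to process, which is the scalability win described in the issue.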
[jira] [Commented] (HDFS-7661) Erasure coding: support hflush and hsync
[ https://issues.apache.org/jira/browse/HDFS-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166613#comment-15166613 ] Mingliang Liu commented on HDFS-7661: - Thanks for your comments, [~drankye]. 1. Augmenting the crc file, i.e. the meta file, is possible. However, it becomes too complicated if we interleave the checksum and BG length records. If we place them in two segments of the .meta file as | header | crc | bglen records |, the CRC section should be preserved, which leads to holes in the file. Meanwhile, the {{.bglen}} file is treated as a redo/undo log whose records are to: * indicate the state of the parity block data file (i.e. the last cell): complete or incomplete. Incomplete means a partial parity cell. * roll back the last cell to the previous healthy data if the state is incomplete. If the last cell is being overwritten, we need to roll back to the state before the overwrite happens; otherwise, the last cell is simply abandoned. We don't need these records for the original data block. I'll update the design doc in detail to show how we can roll back safely using the {{bglen}} records. 2. I totally agree we should document the {{offsetInBlock, packetLen, blockGroupLen}} definitions and why we need them in the first place. Based on an offline discussion with [~demongaorui] yesterday, we're refining the design doc with more detailed design motivations, which will show the challenging scenarios and why we need advanced techniques to address them. [~demongaorui] and I will share the design doc later this week. I appreciate your further review and comments. 3. The intention of the example was that we should not make any assumption about the packet size and cell size, not that they're naturally different. The fact is that they could be different and not aligned. Actually, the current default sizes are not aligned, i.e. the packet data size is 63 KB and the cell size is 64 KB (just as the example showed). 
The cell size is EC policy dependent, while we have different constraints on the packet data size; refer to [HDFS-7308]. The best we can do is to forcefully make them aligned, in which case we still need to deal with scenarios where one cell may need multiple transmission packets or one packet contains multiple cells. Ping [~demongaorui] for discussion. > Erasure coding: support hflush and hsync > > > Key: HDFS-7661 > URL: https://issues.apache.org/jira/browse/HDFS-7661 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo Nicholas Sze >Assignee: GAO Rui > Attachments: EC-file-flush-and-sync-steps-plan-2015-12-01.png, > HDFS-7661-unitTest-wip-trunk.patch, HDFS-7661-wip.01.patch, > HDFS-EC-file-flush-sync-design-version1.1.pdf, > HDFS-EC-file-flush-sync-design-version2.0.pdf > > > We also need to support hflush/hsync and visible length. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
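The misalignment described in point 3 is easy to see with a little arithmetic. The sketch below uses the default sizes mentioned above (63 KB packet data, 64 KB cell); the constants and method names are illustrative, not from the HDFS source.

```java
/** Sketch of the packet/cell misalignment: 63 KB packets vs. 64 KB cells. */
class CellPacketAlignmentSketch {
    static final long PACKET_DATA_BYTES = 63L * 1024; // default packet data size
    static final long CELL_BYTES = 64L * 1024;        // default EC cell size

    /** 0-based index of the first packet containing any byte of cell i. */
    static long firstPacketOfCell(long cellIndex) {
        return (cellIndex * CELL_BYTES) / PACKET_DATA_BYTES;
    }

    /** 0-based index of the last packet containing any byte of cell i. */
    static long lastPacketOfCell(long cellIndex) {
        return (cellIndex * CELL_BYTES + CELL_BYTES - 1) / PACKET_DATA_BYTES;
    }
}
```

Even cell 0 already straddles packets 0 and 1, so the writer must handle both "one cell needs multiple packets" and "one packet carries bytes of multiple cells" regardless of the configured sizes.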
[jira] [Commented] (HDFS-9847) HDFS configuration without time unit name should accept friendly time units
[ https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166598#comment-15166598 ] Lin Yiqun commented on HDFS-9847: - Hi, [~chris.douglas], there is a case that can use the time unit week: {code} dfs.datanode.scan.period.hours 504 If this is positive, the DataNode will not scan any individual block more than once in the specified scan period. If this is negative, the block scanner is disabled. If this is set to zero, then the default value of 504 hours or 3 weeks is used. Prior versions of HDFS incorrectly documented that setting this key to zero will disable the block scanner. {code} Adding week and year in hadoop-common is meant to support more different units for downstream projects (HBase, Hive, etc.) rather than just HDFS. There are some places that use a time-in-seconds interval variable of {{int}} type, so I defined {{getIntTimeSeconds}} in Configuration; it avoids casting from long to int each time. {code} int interval = conf.getIntTimeSeconds( DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_KEY, DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_DEFAULT); {code} > HDFS configuration without time unit name should accept friendly time units > --- > > Key: HDFS-9847 > URL: https://issues.apache.org/jira/browse/HDFS-9847 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9847.001.patch, HDFS-9847.002.patch, > timeduration-w-y.patch > > > HDFS-9821 talks about letting existing keys use friendly > units, e.g. 60s, 5m, 1d, 6w, etc. But some configuration key names > contain a time unit name, like {{dfs.blockreport.intervalMsec}}, so we can make > other configurations whose names do not contain a time unit accept friendly > time units. The time unit {{seconds}} is frequently used in HDFS. We can > update these configurations first. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
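As a standalone illustration of the "friendly units" idea, a suffix-based duration parser might look like the sketch below. This is an assumption-laden sketch, not Hadoop's actual Configuration time-duration implementation.

```java
import java.util.concurrent.TimeUnit;

/** Sketch: parse durations like "60s", "5m", "1d", "6w" into seconds. */
class FriendlyDurationSketch {
    static long toSeconds(String value) {
        String v = value.trim().toLowerCase();
        char unit = v.charAt(v.length() - 1);
        if (Character.isDigit(unit)) {
            return Long.parseLong(v); // bare number: assume seconds
        }
        long n = Long.parseLong(v.substring(0, v.length() - 1));
        switch (unit) {
            case 's': return n;
            case 'm': return TimeUnit.MINUTES.toSeconds(n);
            case 'h': return TimeUnit.HOURS.toSeconds(n);
            case 'd': return TimeUnit.DAYS.toSeconds(n);
            case 'w': return TimeUnit.DAYS.toSeconds(7 * n); // the proposed week unit
            default:  throw new IllegalArgumentException("unknown unit: " + unit);
        }
    }
}
```

With a week unit, the 504-hour scan period quoted above could be written as 3w: both parse to the same 1,814,400 seconds.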
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166590#comment-15166590 ] Hadoop QA commented on HDFS-9427: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc 
{color} | {color:green} 1m 25s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 32s {color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 47s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s {color} | {color:red} hadoop-hdfs-project: patch generated 4 new + 414 unchanged - 4 fixed = 418 total (was 418) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 53s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_95. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 15s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95.
[jira] [Commented] (HDFS-7298) HDFS may honor socket timeout configuration
[ https://issues.apache.org/jira/browse/HDFS-7298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166580#comment-15166580 ] Sameer Abhyankar commented on HDFS-7298: [~bimaltandel] [~yzhangal] I have uploaded a patch for this. > HDFS may honor socket timeout configuration > --- > > Key: HDFS-7298 > URL: https://issues.apache.org/jira/browse/HDFS-7298 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Guo Ruijing >Assignee: Sameer Abhyankar > Attachments: HDFS-7298.patch > > > DFS_CLIENT_SOCKET_TIMEOUT_KEY: HDFS socket read timeout > DFS_DATANODE_SOCKET_WRITE_TIMEOUT_KEY: HDFS socket write timeout > HDFS may honor socket timeout configuration: > 1. DataXceiver.java: > 1) existing code (not expected) >int timeoutValue = dnConf.socketTimeout > + (HdfsServerConstants.READ_TIMEOUT_EXTENSION * targets.length); > int writeTimeout = dnConf.socketWriteTimeout + > (HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * > targets.length); > 2) proposed code: >int timeoutValue = dnConf.socketTimeout > 0 ? (dnConf.socketTimeout > + (HdfsServerConstants.READ_TIMEOUT_EXTENSION * targets.length)) > : 0; > int writeTimeout = dnConf.socketWriteTimeout > 0 ? > (dnConf.socketWriteTimeout + > (HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * > targets.length)) : 0; > 2) DFSClient.java > existing code is expected: > int getDatanodeWriteTimeout(int numNodes) { > return (dfsClientConf.confTime > 0) ? > (dfsClientConf.confTime + HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * > numNodes) : 0; > } > int getDatanodeReadTimeout(int numNodes) { > return dfsClientConf.socketTimeout > 0 ? > (HdfsServerConstants.READ_TIMEOUT_EXTENSION * numNodes + > dfsClientConf.socketTimeout) : 0; > } > 3) DataNode.java: > existing code is not expected: > long writeTimeout = dnConf.socketWriteTimeout + > HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * > (targets.length-1); > proposed code: > long writeTimeout = dnConf.socketWriteTimeout > 0 ? 
> (dnConf.socketWriteTimeout + > HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * > (targets.length-1)) : 0; -- This message was sent by Atlassian JIRA (v6.3.4#6332)
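The guarded computation proposed above follows the same pattern as the existing DFSClient#getDatanodeWriteTimeout: a non-positive base timeout means "no timeout", so the per-target extension must not be added to it. A minimal standalone sketch of the guard — the extension constants here are illustrative stand-ins, not the real HdfsServerConstants values:

```java
// Sketch of the guarded timeout computation. A base timeout <= 0 disables
// the socket timeout entirely, so adding the per-target extension to it
// would silently turn "disabled" into a small bogus timeout.
public class TimeoutCalc {
    // Illustrative stand-ins for HdfsServerConstants.*_TIMEOUT_EXTENSION.
    static final int READ_TIMEOUT_EXTENSION = 5 * 1000;
    static final int WRITE_TIMEOUT_EXTENSION = 5 * 1000;

    static int readTimeout(int socketTimeout, int numTargets) {
        return socketTimeout > 0
                ? socketTimeout + READ_TIMEOUT_EXTENSION * numTargets
                : 0; // 0 keeps the timeout disabled
    }

    static int writeTimeout(int socketWriteTimeout, int numTargets) {
        return socketWriteTimeout > 0
                ? socketWriteTimeout + WRITE_TIMEOUT_EXTENSION * numTargets
                : 0;
    }

    public static void main(String[] args) {
        System.out.println(readTimeout(60000, 2)); // base 60s + 2 targets * 5s
        System.out.println(readTimeout(0, 2));     // disabled stays disabled
    }
}
```

The unguarded form in the "existing code" is exactly `readTimeout` without the `> 0` check; with these constants it would return 10000 instead of 0 for a disabled timeout.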
[jira] [Updated] (HDFS-7298) HDFS may honor socket timeout configuration
[ https://issues.apache.org/jira/browse/HDFS-7298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sameer Abhyankar updated HDFS-7298: --- Attachment: HDFS-7298.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7298) HDFS may honor socket timeout configuration
[ https://issues.apache.org/jira/browse/HDFS-7298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sameer Abhyankar updated HDFS-7298: --- Assignee: Sameer Abhyankar (was: bimal tandel) Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9710) Change DN to send block receipt IBRs in batches
[ https://issues.apache.org/jira/browse/HDFS-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166565#comment-15166565 ] Tsz Wo Nicholas Sze commented on HDFS-9710: --- > 1. In IncrementalBlockReportManager#sendIBRs, the "isDebugEnabled" check can > be skipped since we're using slf4j.Logger. The log message uses Arrays.toString(..), so {{log("{}", Arrays.toString(reports))}} does not work well: the string is still created even when the log level is disabled. > Change DN to send block receipt IBRs in batches > --- > > Key: HDFS-9710 > URL: https://issues.apache.org/jira/browse/HDFS-9710 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: h9710_20160201.patch, h9710_20160205.patch, > h9710_20160216.patch, h9710_20160216b.patch, h9710_20160217.patch, > h9710_20160219.patch > > > When a DN has received a block, it immediately sends a block receipt IBR RPC > to NN for reporting the block. Even if a DN has received multiple blocks at > the same time, it still sends multiple RPCs. This does not scale well since the NN > has to process a huge number of RPCs when many DNs receive many blocks at > the same time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
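The point above is easy to miss: slf4j's {} placeholders defer string *formatting*, but argument *expressions* such as Arrays.toString(reports) are still evaluated eagerly at the call site. A standalone illustration in plain Java — a sketch, not the actual slf4j or Hadoop code:

```java
import java.util.Arrays;

// Demonstrates why sendIBRs keeps the isDebugEnabled guard: even when the
// logger is disabled, the argument expression runs before the (no-op) call.
public class EagerArgDemo {
    static int toStringCalls = 0;

    // Stand-in for an expensive argument such as Arrays.toString(reports).
    static String expensiveToString(int[] reports) {
        toStringCalls++;
        return Arrays.toString(reports);
    }

    // Stand-in for a *disabled* slf4j debug(): the format string is never
    // expanded, but the argument has already been computed by the caller.
    static void debugDisabled(String fmt, Object arg) { /* no-op */ }

    public static void main(String[] args) {
        int[] reports = {1, 2, 3};

        // Without a guard: expensiveToString runs even though logging is off.
        debugDisabled("sending IBRs {}", expensiveToString(reports));

        // With a guard: the argument expression is never evaluated.
        boolean debugEnabled = false;
        if (debugEnabled) {
            debugDisabled("sending IBRs {}", expensiveToString(reports));
        }

        // Only the unguarded call paid the cost.
        System.out.println(toStringCalls);
    }
}
```

The {} form only avoids the cost when the expensive work happens inside the formatter (e.g. passing the array itself), not when the caller pre-builds the string.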
[jira] [Commented] (HDFS-9838) Refactor the excessReplicateMap to a class
[ https://issues.apache.org/jira/browse/HDFS-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166539#comment-15166539 ] Jing Zhao commented on HDFS-9838: - Thanks for updating the patch, Nicholas! The new patch looks pretty good to me. +1 pending jenkins. > Refactor the excessReplicateMap to a class > -- > > Key: HDFS-9838 > URL: https://issues.apache.org/jira/browse/HDFS-9838 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: h9838_20160219.patch, h9838_20160222.patch, > h9838_20160222b.patch, h9838_20160224.patch > > > There are a lot of code duplication for accessing the excessReplicateMap in > BlockManger. Let's refactor the related code to a class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-9856) Suppress Jenkins warning for sample JSON file
[ https://issues.apache.org/jira/browse/HDFS-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou reassigned HDFS-9856: --- Assignee: Xiaobing Zhou > Suppress Jenkins warning for sample JSON file > - > > Key: HDFS-9856 > URL: https://issues.apache.org/jira/browse/HDFS-9856 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Affects Versions: HDFS-1312 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou > > Jenkins runs generate a warning for the sample JSON plan as follows: > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/diskBalancer/data-cluster-3node-3disk.json > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9831) Document webhdfs retry configuration keys introduced by HDFS-5219/HDFS-5122
[ https://issues.apache.org/jira/browse/HDFS-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9831: Attachment: HDFS-9831.001.patch Patch 001 adds the use cases; thanks [~xyao] for the review. > Document webhdfs retry configuration keys introduced by HDFS-5219/HDFS-5122 > > > Key: HDFS-9831 > URL: https://issues.apache.org/jira/browse/HDFS-9831 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation, webhdfs >Affects Versions: 2.6.0 >Reporter: Xiaoyu Yao >Assignee: Xiaobing Zhou > Attachments: HDFS-9831.000.patch, HDFS-9831.001.patch > > > This ticket is opened to document the configuration keys introduced by > HDFS-5219/HDFS-5122 for WebHdfs retry. Both hdfs-default.xml and webhdfs.md > should be updated with the usage of these keys. > {code} > // WebHDFS retry policy >   public static final String  DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_KEY = > "dfs.http.client.retry.policy.enabled"; >   public static final boolean DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_DEFAULT = > false; >   public static final String  DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_KEY = > "dfs.http.client.retry.policy.spec"; >   public static final String  DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_DEFAULT = > "1,6,6,10"; //t1,n1,t2,n2,... 
>   public static final String  DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_KEY = > "dfs.http.client.failover.max.attempts"; >   public static final int    DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_DEFAULT = > 15; >   public static final String  DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_KEY = > "dfs.http.client.retry.max.attempts"; >   public static final int    DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_DEFAULT = 10; >   public static final String  DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_KEY = > "dfs.http.client.failover.sleep.base.millis"; >   public static final int    DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_DEFAULT > = 500; >   public static final String  DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_KEY = > "dfs.http.client.failover.sleep.max.millis"; >   public static final int    DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_DEFAULT = > 15000; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
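For context on the {{dfs.http.client.retry.policy.spec}} format ("t1,n1,t2,n2,..."): each pair is a sleep time followed by a retry count for one leg of the policy. A minimal illustrative parser — an assumption-level sketch of the format, not Hadoop's actual retry-policy code:

```java
import java.util.ArrayList;
import java.util.List;

// Parses a "t1,n1,t2,n2,..." retry spec into (sleepTime, retries) legs.
public class RetrySpec {
    static final class Leg {
        final long sleepTime;  // how long to wait between attempts in this leg
        final int retries;     // how many attempts this leg allows
        Leg(long t, int n) { sleepTime = t; retries = n; }
    }

    // Returns null on malformed input (the values must come in pairs).
    static List<Leg> parse(String spec) {
        String[] parts = spec.split(",");
        if (parts.length % 2 != 0) return null;
        List<Leg> legs = new ArrayList<>();
        for (int i = 0; i < parts.length; i += 2) {
            legs.add(new Leg(Long.parseLong(parts[i].trim()),
                             Integer.parseInt(parts[i + 1].trim())));
        }
        return legs;
    }

    public static void main(String[] args) {
        List<Leg> legs = parse("1,6,6,10"); // the documented default
        int total = 0;
        for (Leg l : legs) total += l.retries;
        // Two legs: sleep 1 for up to 6 retries, then sleep 6 for up to 10.
        System.out.println(legs.size() + " legs, total retries " + total);
    }
}
```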
[jira] [Commented] (HDFS-9856) Suppress Jenkins warning for sample JSON file
[ https://issues.apache.org/jira/browse/HDFS-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166515#comment-15166515 ] Arpit Agarwal commented on HDFS-9856: - Thank you for the pointer [~cnauroth]. That looks like it would do the job. > Suppress Jenkins warning for sample JSON file > - > > Key: HDFS-9856 > URL: https://issues.apache.org/jira/browse/HDFS-9856 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Affects Versions: HDFS-1312 >Reporter: Arpit Agarwal > > Jenkins runs generate a warning for the sample JSON plan as follows: > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/diskBalancer/data-cluster-3node-3disk.json > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9856) Suppress Jenkins warning for sample JSON file
[ https://issues.apache.org/jira/browse/HDFS-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166509#comment-15166509 ] Chris Nauroth commented on HDFS-9856: - Hi [~arpitagarwal]. It's possible to configure exceptions to the license check in the apache-rat-plugin. Here is an excerpt from hadoop-hdfs-project/hadoop-hdfs/pom.xml: {code}
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>CHANGES.txt</exclude>
      <exclude>CHANGES.HDFS-1623.txt</exclude>
      <exclude>CHANGES.HDFS-347.txt</exclude>
      <exclude>.gitattributes</exclude>
      <exclude>.idea/**</exclude>
      <exclude>src/main/conf/*</exclude>
      ...
    </excludes>
  </configuration>
</plugin>
{code} > Suppress Jenkins warning for sample JSON file > - > > Key: HDFS-9856 > URL: https://issues.apache.org/jira/browse/HDFS-9856 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Affects Versions: HDFS-1312 >Reporter: Arpit Agarwal > > Jenkins runs generate a warning for the sample JSON plan as follows: > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/diskBalancer/data-cluster-3node-3disk.json > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9734) Refactoring of checksum failure report related codes
[ https://issues.apache.org/jira/browse/HDFS-9734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166511#comment-15166511 ] Hadoop QA commented on HDFS-9734: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 2m 10s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s {color} | {color:red} hadoop-hdfs-project: patch generated 2 new + 330 unchanged - 4 fixed = 332 total (was 334) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 20s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_95. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 37s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 141m 59s {color} | {color:black} {color} | \\ \\
[jira] [Updated] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9427: Attachment: HDFS-9427.001.patch Patch 001 updates the comments, docs, and sources that were inconsistent. > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch, HDFS-9427.001.patch > > > HDFS defaults to ephemeral ports for some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9856) Suppress Jenkins warning for sample JSON file
[ https://issues.apache.org/jira/browse/HDFS-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166468#comment-15166468 ] Arpit Agarwal commented on HDFS-9856: - Not sure if there is a way to add ASF license exceptions for specific files. It [doesn't look like JSON files support comments|https://stackoverflow.com/a/4183018]. > Suppress Jenkins warning for sample JSON file > - > > Key: HDFS-9856 > URL: https://issues.apache.org/jira/browse/HDFS-9856 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Affects Versions: HDFS-1312 >Reporter: Arpit Agarwal > > Jenkins runs generate a warning for the sample JSON plan as follows: > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? > /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/diskBalancer/data-cluster-3node-3disk.json > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9856) Suppress Jenkins warning for sample JSON file
Arpit Agarwal created HDFS-9856: --- Summary: Suppress Jenkins warning for sample JSON file Key: HDFS-9856 URL: https://issues.apache.org/jira/browse/HDFS-9856 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: HDFS-1312 Reporter: Arpit Agarwal Jenkins runs generate a warning for the sample JSON plan as follows: {code} Lines that start with ? in the ASF License report indicate files that do not have an Apache license header: !? /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/diskBalancer/data-cluster-3node-3disk.json {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9681) DiskBalancer : Add QueryPlan implementation
[ https://issues.apache.org/jira/browse/HDFS-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-9681: Resolution: Fixed Hadoop Flags: Reviewed Target Version/s: (was: HDFS-1312) Status: Resolved (was: Patch Available) Pushed to the feature branch. Thanks [~anu]. checkstyle is being obnoxious; the setter methods are well written. > DiskBalancer : Add QueryPlan implementation > --- > > Key: HDFS-9681 > URL: https://issues.apache.org/jira/browse/HDFS-9681 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9681-HDFS-1312.001.patch > > > Add the datanode logic for QueryPlan -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9681) DiskBalancer : Add QueryPlan implementation
[ https://issues.apache.org/jira/browse/HDFS-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166443#comment-15166443 ] Arpit Agarwal commented on HDFS-9681: - +1 for the patch. I will commit it shortly. Using JSON to serialize the {{DiskBalancerWorkEntry}} structure sent via protobuf messages is unusual for Hadoop but I can see how it simplifies the code here. Protobuf would yield a more compact representation but for {{DiskBalancerWorkEntry}} the difference would be nominal and this message will be sent rarely. > DiskBalancer : Add QueryPlan implementation > --- > > Key: HDFS-9681 > URL: https://issues.apache.org/jira/browse/HDFS-9681 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9681-HDFS-1312.001.patch > > > Add the datanode logic for QueryPlan -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7661) Erasure coding: support hflush and hsync
[ https://issues.apache.org/jira/browse/HDFS-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166441#comment-15166441 ] Kai Zheng commented on HDFS-7661: - I did a quick reading of the v2 design doc. Some comments and questions: * Overall, I'm not sure why introducing a new meta file {{.bgLen}} for a striped parity block is better than augmenting the existing block meta file. With a separate meta file, it has to travel with the block whenever the block moves, replicates, or reconstructs. Also, why keep it only for parity blocks? It might make sense for all blocks in the block group. * We should document {{offsetInBlock}}, {{packetLen}}, and {{blockGroupLen}} well, including why we need them. The names may also be refined. Otherwise someone may wonder why such intermediate variables need to be persisted as part of the metadata. * bq. Consider the default EC policy whose cell size 65536B (64KB), and the DFSPacket data size is 64512B(63KB) This assumption isn't good: it's a bad idea to have a cell size and a packet data size where neither is a multiple of the other. It's hard to align the buffer addresses for erasure encoding and checksum computing ({{both are performance critical}}) without copying buffer data. We should ensure that one of cell size and packet size divides the other, or, for simplicity, that they're equal. I may have more comments in the following days; thanks for addressing or clarifying these. > Erasure coding: support hflush and hsync > > > Key: HDFS-7661 > URL: https://issues.apache.org/jira/browse/HDFS-7661 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo Nicholas Sze >Assignee: GAO Rui > Attachments: EC-file-flush-and-sync-steps-plan-2015-12-01.png, > HDFS-7661-unitTest-wip-trunk.patch, HDFS-7661-wip.01.patch, > HDFS-EC-file-flush-sync-design-version1.1.pdf, > HDFS-EC-file-flush-sync-design-version2.0.pdf > > > We also need to support hflush/hsync and visible length. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
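The alignment concern can be made concrete with a little arithmetic. With a 64 KB cell (65536 B) and a 63 KB packet payload (64512 B), a packet boundary lands on a cell boundary only every lcm(65536, 64512) bytes, i.e. once per 64 packets, so almost every encode/checksum buffer starts mid-cell and must be copied to align. A small sketch (illustrative counting only, not HDFS code):

```java
// Counts how many of the first `packets` packet boundaries do NOT coincide
// with a cell boundary. Misaligned boundaries are where buffer copies are
// needed to align data for erasure encoding and checksum computation.
public class AlignmentDemo {
    static int misalignedBoundaries(int cellSize, int packetSize, int packets) {
        int misaligned = 0;
        for (int p = 1; p <= packets; p++) {
            long offset = (long) p * packetSize; // stream offset after packet p
            if (offset % cellSize != 0) misaligned++;
        }
        return misaligned;
    }

    public static void main(String[] args) {
        // 64 KB cell vs 63 KB packet: only 1 in 64 boundaries aligns.
        System.out.println(misalignedBoundaries(65536, 64512, 64));
        // 64 KB cell vs 32 KB packet: every second boundary aligns.
        System.out.println(misalignedBoundaries(65536, 32768, 64));
    }
}
```

With equal sizes (or one dividing the other) the misaligned count drops to a regular, small fraction or zero, which is the point of the comment above.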
[jira] [Commented] (HDFS-9853) Ozone: Add container definitions
[ https://issues.apache.org/jira/browse/HDFS-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166431#comment-15166431 ] Hadoop QA commented on HDFS-9853: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 13s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 9s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 17s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 10 new + 1 unchanged - 0 fixed = 11 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 55s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 43s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 133m 7s {color} | {color:black} {color} | \\ \\ || Reason ||
[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval
[ https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166420#comment-15166420 ] Andrew Wang commented on HDFS-9782: --- bq. My concern is that the offset interval alters when the metrics are reliably available. I think it violates the principle of least astonishment to have the metrics randomly (literally) show up late by default. I would rather it not be on unless it's needed, and the user turns it on explicitly. Is it that weird? You just need to poll {{offset}} after the flush. You also always need to be able to deal with late data, since the flush could pause or be delayed for other reasons too (e.g. GC pause). I'm still not entirely clear on the requirements, since I can't think of other windowed metrics that we try to synchronize cluster-wide. What kind of timeliness do we really require? Would it be acceptable if we did not synchronize rolling, but rolled more frequently? bq. What's the alternative? I don't think millis is an acceptable unit for something that will likely be hours or days. I did some JIRA searching, and found HADOOP-8608 which I didn't realize was available. Is this what we want? > RollingFileSystemSink should have configurable roll interval > > > Key: HDFS-9782 > URL: https://issues.apache.org/jira/browse/HDFS-9782 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, > HDFS-9782.003.patch, HDFS-9782.004.patch > > > Right now it defaults to rolling at the top of every hour. Instead that > interval should be configurable. The interval should also allow for some > play so that all hosts don't try to flush their files simultaneously. > I'm filing this in HDFS because I suspect it will involve touching the HDFS > tests. If it turns out not to, I'll move it into common instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
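HADOOP-8608 is about human-readable duration suffixes for configuration values, so an interval "likely hours or days" can be written as "1h" or "1d" rather than raw millis. A standalone sketch of that style of suffix parsing — illustrative only, not Hadoop's actual implementation:

```java
import java.util.concurrent.TimeUnit;

// Parses duration strings like "500ms", "90s", "10m", "1h", "1d" into
// milliseconds; a bare number is treated as millis. A sketch of the
// HADOOP-8608-style convention, not the real Configuration#getTimeDuration.
public class DurationParse {
    static long toMillis(String v) {
        String s = v.trim().toLowerCase();
        TimeUnit unit = TimeUnit.MILLISECONDS;
        int cut = s.length();
        if (s.endsWith("ms")) { cut -= 2; }
        else if (s.endsWith("s")) { unit = TimeUnit.SECONDS; cut -= 1; }
        else if (s.endsWith("m")) { unit = TimeUnit.MINUTES; cut -= 1; }
        else if (s.endsWith("h")) { unit = TimeUnit.HOURS;   cut -= 1; }
        else if (s.endsWith("d")) { unit = TimeUnit.DAYS;    cut -= 1; }
        return unit.toMillis(Long.parseLong(s.substring(0, cut).trim()));
    }

    public static void main(String[] args) {
        System.out.println(toMillis("1h"));  // one hour in millis
        System.out.println(toMillis("90s")); // ninety seconds in millis
    }
}
```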
[jira] [Commented] (HDFS-9804) Allow long-running Balancer to login with keytab
[ https://issues.apache.org/jira/browse/HDFS-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166417#comment-15166417 ] Xiao Chen commented on HDFS-9804: - Thanks Zhe! I also updated the title of the jira to be more specific. > Allow long-running Balancer to login with keytab > > > Key: HDFS-9804 > URL: https://issues.apache.org/jira/browse/HDFS-9804 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9804.01.patch, HDFS-9804.02.patch, > HDFS-9804.03.patch > > > From the discussion of HDFS-9698, it might be nice to allow the balancer to > run as a daemon and login from a keytab. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9852) hdfs dfs -setfacl error message is misleading
[ https://issues.apache.org/jira/browse/HDFS-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166408#comment-15166408 ] Wei-Chiu Chuang commented on HDFS-9852: --- Looks like I need to change more error messages, because the following {noformat} hdfs dfs -setfacl /data {noformat} seems to be a valid no-op. [~cnauroth] could you comment? > hdfs dfs -setfacl error message is misleading > - > > Key: HDFS-9852 > URL: https://issues.apache.org/jira/browse/HDFS-9852 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Labels: supportability > Attachments: HDFS-9852.001.patch > > > When I type > {noformat}hdfs dfs -setfacl -m default:user::rwx{noformat} > It prints error message: > {noformat} > -setfacl: is missing > Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x } > ]|[--set ] > {noformat} > But actually, it's the path that I missed. A correct command should be > {noformat} > hdfs dfs -setfacl -m default:user::rwx /data > {noformat} > In fact, > {noformat}-setfacl -x | -m | --set{noformat} expects two parameters. > We should print error message like this if it misses one: > {noformat} > -setfacl: Missing either or > {noformat} > and print the following if it misses two: > {noformat} > -setfacl: Missing arguments: > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
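The proposed messages distinguish "one operand missing" from "both missing". A hedged sketch of that check — the <aclSpec> and <path> operand names here are assumptions, since the mail rendering stripped the original angle-bracket tokens from the quoted messages:

```java
// Sketch of the argument check proposed above: -setfacl with -m/-x/--set
// needs both an ACL spec and a path, and the error should say which is
// missing. Operand names are illustrative, not the actual Hadoop strings.
public class SetfaclArgCheck {
    // args = the operands following "-setfacl -m" (or -x/--set).
    // Returns an error message, or null if the arguments are acceptable.
    static String validate(String[] args) {
        if (args.length == 0) {
            return "-setfacl: Missing arguments: <aclSpec> <path>";
        }
        if (args.length == 1) {
            return "-setfacl: Missing either <aclSpec> or <path>";
        }
        return null;
    }

    public static void main(String[] a) {
        // Reproduces the reported case: ACL spec given, path forgotten.
        System.out.println(validate(new String[]{"default:user::rwx"}));
    }
}
```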
[jira] [Updated] (HDFS-9804) Allow long-running Balancer to login with keytab
[ https://issues.apache.org/jira/browse/HDFS-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9804: Summary: Allow long-running Balancer to login with keytab (was: Allow long-running Balancer in Kerberized Environments) > Allow long-running Balancer to login with keytab > > > Key: HDFS-9804 > URL: https://issues.apache.org/jira/browse/HDFS-9804 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9804.01.patch, HDFS-9804.02.patch, > HDFS-9804.03.patch > > > From the discussion of HDFS-9698, it might be nice to allow the balancer to > run as a daemon and login from a keytab. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166393#comment-15166393 ] Allen Wittenauer commented on HDFS-9427: bq. How about changing into something like below? Just replace 50 with 9 No. Let's actually move the ports to be contiguous instead of all over the place. It's extremely sloppy, and having these giant gaps all over the place highlights the community's inability to plan. > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch > > > HDFS defaults to ephemeral ports for some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9838) Refactor the excessReplicateMap to a class
[ https://issues.apache.org/jira/browse/HDFS-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-9838: -- Attachment: h9838_20160224.patch h9838_20160224.patch: updates with trunk. > Refactor the excessReplicateMap to a class > -- > > Key: HDFS-9838 > URL: https://issues.apache.org/jira/browse/HDFS-9838 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: h9838_20160219.patch, h9838_20160222.patch, > h9838_20160222b.patch, h9838_20160224.patch > > > There are a lot of code duplication for accessing the excessReplicateMap in > BlockManger. Let's refactor the related code to a class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9804) Allow long-running Balancer in Kerberized Environments
[ https://issues.apache.org/jira/browse/HDFS-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166391#comment-15166391 ] Zhe Zhang commented on HDFS-9804: - Thanks Xiao for the update and clarification! +1 on the v3 patch pending Jenkins. > Allow long-running Balancer in Kerberized Environments > -- > > Key: HDFS-9804 > URL: https://issues.apache.org/jira/browse/HDFS-9804 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9804.01.patch, HDFS-9804.02.patch, > HDFS-9804.03.patch > > > From the discussion of HDFS-9698, it might be nice to allow the balancer to > run as a daemon and login from a keytab. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery
[ https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166377#comment-15166377 ] Zhe Zhang commented on HDFS-7955: - [~rakeshr] Yes I think it's a good idea to handle BlockManager code around the public API. > Improve naming of classes, methods, and variables related to block > replication and recovery > --- > > Key: HDFS-7955 > URL: https://issues.apache.org/jira/browse/HDFS-7955 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: Zhe Zhang >Assignee: Rakesh R > Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch, > HDFS-7955-003.patch, HDFS-7955-004.patch, HDFS-7955-5.patch > > > Many existing names should be revised to avoid confusion when blocks can be > both replicated and erasure coded. This JIRA aims to solicit opinions on > making those names more consistent and intuitive. > # In current HDFS _block recovery_ refers to the process of finalizing the > last block of a file, triggered by _lease recovery_. It is different from the > intuitive meaning of _recovering a lost block_. To avoid confusion, I can > think of 2 options: > #* Rename this process as _block finalization_ or _block completion_. I > prefer this option because this is literally not a recovery. > #* If we want to keep existing terms unchanged we can name all EC recovery > and re-replication logics as _reconstruction_. > # As Kai [suggested | > https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131] > under HDFS-7369, several replication-based names should be made more generic: > #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use > {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and > {{neededRecovery}}/{{neededReconstruction}}. > #* {{PendingReplicationBlocks}} > #* {{ReplicationMonitor}} > I'm sure the above list is incomplete; discussions and comments are very > welcome. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9843) Document distcp options required for copying between encrypted locations
[ https://issues.apache.org/jira/browse/HDFS-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166376#comment-15166376 ] Hudson commented on HDFS-9843: -- FAILURE: Integrated in Hadoop-trunk-Commit #9365 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9365/]) HDFS-9843. Document distcp options required for copying between (cnauroth: rev dbbfc58c33fd1d2f7abae1784c2d78b7438825e2) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md > Document distcp options required for copying between encrypted locations > > > Key: HDFS-9843 > URL: https://issues.apache.org/jira/browse/HDFS-9843 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp, documentation, encryption >Affects Versions: 2.6.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Fix For: 2.8.0 > > Attachments: HDFS-9843.00.patch, HDFS-9843.01.patch, > HDFS-9843.02.patch > > > In TransparentEncryption.md#Distcp_considerations document section, we have > "Copying_between_encrypted_and_unencrypted_locations" which requires > -skipcrccheck and -update. > These options should be documented as required for "Copying between encrypted > locations" use cases as well because this involves decrypting source file and > encrypting destination file with a different EDEK, resulting in different > checksum at the destination. Distcp will fail at crc check if -skipcrccheck > is not specified. > This ticket is opened to document the required options for "Copying between > encrypted locations" use cases when using distcp with HDFS encryption. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9843) Document distcp options required for copying between encrypted locations
[ https://issues.apache.org/jira/browse/HDFS-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166339#comment-15166339 ] Xiaoyu Yao commented on HDFS-9843: -- Thank you, [~cnauroth] for reviewing and committing the patch! > Document distcp options required for copying between encrypted locations > > > Key: HDFS-9843 > URL: https://issues.apache.org/jira/browse/HDFS-9843 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp, documentation, encryption >Affects Versions: 2.6.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Fix For: 2.8.0 > > Attachments: HDFS-9843.00.patch, HDFS-9843.01.patch, > HDFS-9843.02.patch > > > In TransparentEncryption.md#Distcp_considerations document section, we have > "Copying_between_encrypted_and_unencrypted_locations" which requires > -skipcrccheck and -update. > These options should be documented as required for "Copying between encrypted > locations" use cases as well because this involves decrypting source file and > encrypting destination file with a different EDEK, resulting in different > checksum at the destination. Distcp will fail at crc check if -skipcrccheck > if not specified. > This ticket is opened to document the required options for "Copying between > encrypted locations" use cases when using distcp with HDFS encryption. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9831) Document webhdfs retry configuration keys introduced by HDFS-5219/HDFS-5122
[ https://issues.apache.org/jira/browse/HDFS-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166336#comment-15166336 ] Xiaoyu Yao commented on HDFS-9831: -- Thanks [~xiaobingo] for working on this. The patch looks good to me. One suggestion: can you add some description of the use cases that need to enable the WebHDFS retry policy in hdfs-site.xml? For example: If "true", enable the retry policy of WebHDFS client. This can be useful when using WebHDFS to - copy large files between clusters that could timeout, or - copy files between HA clusters that could failover during the copy.
{code}
<property>
  <name>dfs.http.client.retry.policy.enabled</name>
  <value>false</value>
  <description>
    If "true", enable the retry policy of WebHDFS client.
    If "false", retry policy is turned off.
  </description>
</property>
{code}
> Document webhdfs retry configuration keys introduced by HDFS-5219/HDFS-5122 > > > Key: HDFS-9831 > URL: https://issues.apache.org/jira/browse/HDFS-9831 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation, webhdfs >Affects Versions: 2.6.0 >Reporter: Xiaoyu Yao >Assignee: Xiaobing Zhou > Attachments: HDFS-9831.000.patch > > > This ticket is opened to document the configuration keys introduced by > HDFS-5219/HDFS-5122 for WebHdfs Retry. Both hdfs-default.xml and webhdfs.md > should be updated with the usage of these keys. > {code} > // WebHDFS retry policy > public static final String DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_KEY = > "dfs.http.client.retry.policy.enabled"; > public static final boolean DFS_HTTP_CLIENT_RETRY_POLICY_ENABLED_DEFAULT = > false; > public static final String DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_KEY = > "dfs.http.client.retry.policy.spec"; > public static final String DFS_HTTP_CLIENT_RETRY_POLICY_SPEC_DEFAULT = > "1,6,6,10"; //t1,n1,t2,n2,... 
>   public static final String  DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_KEY = > "dfs.http.client.failover.max.attempts"; >   public static final int    DFS_HTTP_CLIENT_FAILOVER_MAX_ATTEMPTS_DEFAULT = > 15; >   public static final String  DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_KEY = > "dfs.http.client.retry.max.attempts"; >   public static final int    DFS_HTTP_CLIENT_RETRY_MAX_ATTEMPTS_DEFAULT = 10; >   public static final String  DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_KEY = > "dfs.http.client.failover.sleep.base.millis"; >   public static final int    DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_BASE_DEFAULT > = 500; >   public static final String  DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_KEY = > "dfs.http.client.failover.sleep.max.millis"; >   public static final int    DFS_HTTP_CLIENT_FAILOVER_SLEEPTIME_MAX_DEFAULT = > 15000; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9395) Make HDFS audit logging consistent
[ https://issues.apache.org/jira/browse/HDFS-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kuhu Shukla updated HDFS-9395: -- Attachment: HDFS-9395-branch-2.7.001.patch Adding a patch for branch-2.7. I have added @Ignore to testSetQuota, which is failing exactly as in branch-2 per HDFS-9855. Seeking comments on that. Thanks a lot! > Make HDFS audit logging consistent > -- > > Key: HDFS-9395 > URL: https://issues.apache.org/jira/browse/HDFS-9395 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kuhu Shukla > Fix For: 2.8.0 > > Attachments: HDFS-9395-branch-2.7.001.patch, HDFS-9395.001.patch, > HDFS-9395.002.patch, HDFS-9395.003.patch, HDFS-9395.004.patch, > HDFS-9395.005.patch, HDFS-9395.006.patch, HDFS-9395.007.patch > > > So, the big question here is what should go in the audit log? All failures, > or just "permission denied" failures? Or, to put it a different way, if > someone attempts to do something and it fails because a file doesn't exist, > is that worth an audit log entry? > We are currently inconsistent on this point. For example, concat, > getContentSummary, addCacheDirective, and setErasureEncodingPolicy create an > audit log entry for all failures, but setOwner, delete, and setAclEntries > attempt to only create an entry for AccessControlException-based failures. > There are a few operations, like allowSnapshot, disallowSnapshot, and > startRollingUpgrade, that never create audit log failure entries at all. They > simply log nothing for any failure, and log success for a successful > operation. > So to summarize, different HDFS operations currently fall into 3 categories: > 1. audit-log all failures > 2. audit-log only AccessControlException failures > 3. never audit-log failures > Which category is right? And how can we fix the inconsistency? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
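The three categories in the description can be sketched as policies around a single helper. The class and method names below are illustrative stand-ins, not the actual FSNamesystem audit-log code:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the three audit-log failure categories described
// in HDFS-9395. AccessControlException is stubbed locally so the example
// is self-contained (the real one lives in org.apache.hadoop.security).
public class AuditPolicies {
  public static class AccessControlException extends IOException {}

  public enum Policy { ALL_FAILURES, ONLY_ACCESS_CONTROL, NO_FAILURES }

  public final List<String> auditLog = new ArrayList<>();

  // Runs an operation and writes audit entries according to the policy.
  // Checked exceptions are carried as the cause of a RuntimeException
  // so the op fits a plain Runnable.
  public void run(String cmd, Policy policy, Runnable op) {
    try {
      op.run();
      auditLog.add(cmd + " success");
    } catch (RuntimeException e) {
      boolean isAce = e.getCause() instanceof AccessControlException;
      if (policy == Policy.ALL_FAILURES
          || (policy == Policy.ONLY_ACCESS_CONTROL && isAce)) {
        auditLog.add(cmd + " failure");
      }
      // Policy.NO_FAILURES: failures are never audited.
    }
  }

  public static void main(String[] args) {
    AuditPolicies audit = new AuditPolicies();
    // Category 1 (e.g. concat): any failure is audited, even FileNotFound.
    audit.run("concat", Policy.ALL_FAILURES,
        () -> { throw new RuntimeException(new java.io.FileNotFoundException("/no/such/file")); });
    // Category 2 (e.g. delete): the same failure is NOT audited...
    audit.run("delete", Policy.ONLY_ACCESS_CONTROL,
        () -> { throw new RuntimeException(new java.io.FileNotFoundException("/no/such/file")); });
    // ...but a permission failure is.
    audit.run("setAcl", Policy.ONLY_ACCESS_CONTROL,
        () -> { throw new RuntimeException(new AccessControlException()); });
    System.out.println(audit.auditLog); // [concat failure, setAcl failure]
  }
}
```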
[jira] [Updated] (HDFS-9804) Allow long-running Balancer in Kerberized Environments
[ https://issues.apache.org/jira/browse/HDFS-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9804: Attachment: HDFS-9804.03.patch Thanks [~zhz] for the review! Patch 3 is attached. bq. It's better to add some error handling in checkKerberosAndInit: what if DFS_BALANCER_KERBEROS_PRINCIPAL_KEY or the keytab file key is not set. That's handled in {{SecurityUtil#login}} :) It'll throw IOE with details if keytab is given incorrectly, and will use system username if principal is not provided bq. Looks like getAddress can be folded into the checkKerberosAndInit method? Sure. bq. Maybe checkKeytabAndInit is a better name? Agreed. bq. assertTrue(ugi.isLoginKeytabBased()) should be UserGroupInformation.isLoginKeytabBased() since the method is static Done bq. This can be a follow-on: ideally we can verify the behavior when used with the hdfs --daemon option. Makes sense, I assume --daemon would be the same for all commands though. bq. Another follow-on idea is to verify the relogin after TGT "max renew time" expires. It could be hard to control KDC TGT config though. I manually verified this, but I guess I could borrow your patch from HADOOP-12559 to do it programmatically. > Allow long-running Balancer in Kerberized Environments > -- > > Key: HDFS-9804 > URL: https://issues.apache.org/jira/browse/HDFS-9804 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9804.01.patch, HDFS-9804.02.patch, > HDFS-9804.03.patch > > > From the discussion of HDFS-9698, it might be nice to allow the balancer to > run as a daemon and login from a keytab. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9837) BlockManager#countNodes should be able to detect duplicated internal blocks
[ https://issues.apache.org/jira/browse/HDFS-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166326#comment-15166326 ] Hudson commented on HDFS-9837: -- FAILURE: Integrated in Hadoop-trunk-Commit #9364 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9364/]) HDFS-9837. BlockManager#countNodes should be able to detect duplicated (jing9: rev 47b92f2b6f2dafc129a41b247f35e77c8e47ffba) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddOverReplicatedStripedBlocks.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReconstructStripedBlocks.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > BlockManager#countNodes should be able to detect duplicated internal blocks > --- > > Key: HDFS-9837 > URL: https://issues.apache.org/jira/browse/HDFS-9837 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Jing Zhao >Assignee: Jing Zhao > Fix For: 3.0.0 > > Attachments: HDFS-9837.000.patch, HDFS-9837.001.patch, > HDFS-9837.002.patch, HDFS-9837.003.patch, HDFS-9837.004.patch > > > Currently {{BlockManager#countNodes}} only counts the number of > replicas/internal blocks thus it cannot detect the under-replicated scenario > where a striped EC block has 9 internal blocks but contains duplicated > data/parity blocks. 
E.g., b8 is missing while 2 b0 exist: > b0, b1, b2, b3, b4, b5, b6, b7, b0 > If the NameNode keeps running, NN is able to detect the duplication of b0 and > will put the block into the excess map. {{countNodes}} excludes internal > blocks captured in the excess map thus can return the correct number of live > replicas. However, if NN restarts before sending out the reconstruction > command, the missing internal block cannot be detected anymore. The following > steps can reproduce the issue: > # create an EC file > # kill DN1 and wait for the reconstruction to happen > # start DN1 again > # kill DN2 and restart NN immediately -- This message was sent by Atlassian JIRA (v6.3.4#6332)
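The scenario above reduces to counting internal block indices of the striped group. A minimal self-contained sketch, with plain int indices standing in for the reported internal blocks of an RS-6-3 block group (indices 0..8), not the actual BlockManager code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of duplicate/missing detection over the internal block indices
// of a striped block group. totalBlocks would be dataBlocks + parityBlocks
// (9 for RS-6-3); "reported" are the indices seen in storage reports.
public class StripedIndexCheck {
  public static List<Integer> duplicated(int[] reported, int totalBlocks) {
    int[] counts = new int[totalBlocks];
    for (int idx : reported) counts[idx]++;
    List<Integer> dup = new ArrayList<>();
    for (int i = 0; i < totalBlocks; i++) if (counts[i] > 1) dup.add(i);
    return dup;
  }

  public static List<Integer> missing(int[] reported, int totalBlocks) {
    boolean[] seen = new boolean[totalBlocks];
    for (int idx : reported) seen[idx] = true;
    List<Integer> miss = new ArrayList<>();
    for (int i = 0; i < totalBlocks; i++) if (!seen[i]) miss.add(i);
    return miss;
  }

  public static void main(String[] args) {
    // The example from the description: b8 missing, b0 reported twice,
    // yet 9 internal blocks are present in total.
    int[] reported = {0, 1, 2, 3, 4, 5, 6, 7, 0};
    System.out.println("duplicated=" + duplicated(reported, 9)); // duplicated=[0]
    System.out.println("missing=" + missing(reported, 9));       // missing=[8]
  }
}
```

A plain count like this is why a raw replica count of 9 looks healthy even though the group is actually under-replicated.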
[jira] [Updated] (HDFS-9843) Document distcp options required for copying between encrypted locations
[ https://issues.apache.org/jira/browse/HDFS-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-9843: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) +1 for patch v02. I have committed this to trunk, branch-2 and branch-2.8. [~xyao], thank you for the patch. > Document distcp options required for copying between encrypted locations > > > Key: HDFS-9843 > URL: https://issues.apache.org/jira/browse/HDFS-9843 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp, documentation, encryption >Affects Versions: 2.6.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Fix For: 2.8.0 > > Attachments: HDFS-9843.00.patch, HDFS-9843.01.patch, > HDFS-9843.02.patch > > > In TransparentEncryption.md#Distcp_considerations document section, we have > "Copying_between_encrypted_and_unencrypted_locations" which requires > -skipcrccheck and -update. > These options should be documented as required for "Copying between encrypted > locations" use cases as well because this involves decrypting source file and > encrypting destination file with a different EDEK, resulting in different > checksum at the destination. Distcp will fail at crc check if -skipcrccheck > if not specified. > This ticket is opened to document the required options for "Copying between > encrypted locations" use cases when using distcp with HDFS encryption. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166323#comment-15166323 ] Jonathan Hsieh commented on HDFS-9427: -- Here's another to consider: the hadoop-kms port. It was brought up on HBASE-10123 that it uses 16000 as default (and clashes with hbase currently). This line indicates that 16000 is the default; a search for that ENV variable will likely find where it is set by default in the code or confs. https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh#L25 > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch > > > HDFS defaults to ephemeral ports for some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9855) TestAuditLoggerWithCommand#testSetQuota fails with an unexpected AccessControlException
[ https://issues.apache.org/jira/browse/HDFS-9855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kuhu Shukla updated HDFS-9855: -- Affects Version/s: 2.7.2 > TestAuditLoggerWithCommand#testSetQuota fails with an unexpected > AccessControlException > --- > > Key: HDFS-9855 > URL: https://issues.apache.org/jira/browse/HDFS-9855 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 2.8.0, 2.7.2 >Reporter: Kuhu Shukla >Assignee: Kuhu Shukla > > The addition of setQuota audit log testing throws an AccessControlException > instead of the expected FileSystemClosed IOException even when the filesystem > has been explicitly closed; other calls behave as expected during a trial > test. This is seen on branch-2 and not on trunk, requiring investigation for a > possible bug/discrepancy. > CC:[~kihwal]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9824) inconsistent message while running rename command if target exists
[ https://issues.apache.org/jira/browse/HDFS-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9824: Description: In the following case, the message is not friendly, it's better to show . Source dir: {noformat} -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:23 /tmp/src/1.log {noformat} Dest dir: {noformat} -rw-r--r-- 3 root hdfs 8526 2016-02-17 22:00 /tmp/dest/1.log -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:17 /tmp/dest/2.log -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/3.log -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/4.log {noformat} Running displays inconsistent message that complains input/output error, while will show . The behavior of the two should be similar. was: In the following case, the message is not friendly, it's better to show . Source dir: {noformat} -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:23 /tmp/src/1.log {noformat} Dest dir: {noformat} -rw-r--r-- 3 root hdfs 8526 2016-02-17 22:00 /tmp/dest/1.log -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:17 /tmp/dest/2.log -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/3.log -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/4.log {noformat} Running displays unfriendly message that complains input/output error, while will show . The behavior of the two should be similar. > inconsistent message while running rename command if target exists > -- > > Key: HDFS-9824 > URL: https://issues.apache.org/jira/browse/HDFS-9824 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > In the following case, the message > is not friendly, it's better to show . 
> Source dir: > {noformat} > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:23 /tmp/src/1.log > {noformat} > Dest dir: > {noformat} > -rw-r--r-- 3 root hdfs 8526 2016-02-17 22:00 /tmp/dest/1.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:17 /tmp/dest/2.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/3.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/4.log > {noformat} > Running displays inconsistent > message that complains input/output error, while /tmp/src/1.log /tmp/dest/1.log> will show exists>. The behavior of the two should be similar. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9824) inconsistent message while running rename command if target exists
[ https://issues.apache.org/jira/browse/HDFS-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9824: Summary: inconsistent message while running rename command if target exists (was: Unfriendly message while running rename command if target exists) > inconsistent message while running rename command if target exists > -- > > Key: HDFS-9824 > URL: https://issues.apache.org/jira/browse/HDFS-9824 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > In the following case, the message > is not friendly, it's better to show . > Source dir: > {noformat} > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:23 /tmp/src/1.log > {noformat} > Dest dir: > {noformat} > -rw-r--r-- 3 root hdfs 8526 2016-02-17 22:00 /tmp/dest/1.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:17 /tmp/dest/2.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/3.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/4.log > {noformat} > Running displays unfriendly message > that complains input/output error, while /tmp/dest/1.log> will show . The behavior > of the two should be similar. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9837) BlockManager#countNodes should be able to detect duplicated internal blocks
[ https://issues.apache.org/jira/browse/HDFS-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-9837: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) The failed tests all passed in my local run. I've committed the patch into trunk. Thanks for the review, [~szetszwo] and [~rakeshr]! > BlockManager#countNodes should be able to detect duplicated internal blocks > --- > > Key: HDFS-9837 > URL: https://issues.apache.org/jira/browse/HDFS-9837 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Jing Zhao >Assignee: Jing Zhao > Fix For: 3.0.0 > > Attachments: HDFS-9837.000.patch, HDFS-9837.001.patch, > HDFS-9837.002.patch, HDFS-9837.003.patch, HDFS-9837.004.patch > > > Currently {{BlockManager#countNodes}} only counts the number of > replicas/internal blocks thus it cannot detect the under-replicated scenario > where a striped EC block has 9 internal blocks but contains duplicated > data/parity blocks. E.g., b8 is missing while 2 b0 exist: > b0, b1, b2, b3, b4, b5, b6, b7, b0 > If the NameNode keeps running, NN is able to detect the duplication of b0 and > will put the block into the excess map. {{countNodes}} excludes internal > blocks captured in the excess map thus can return the correct number of live > replicas. However, if NN restarts before sending out the reconstruction > command, the missing internal block cannot be detected anymore. The following > steps can reproduce the issue: > # create an EC file > # kill DN1 and wait for the reconstruction to happen > # start DN1 again > # kill DN2 and restart NN immediately -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9843) Document distcp options required for copying between encrypted locations
[ https://issues.apache.org/jira/browse/HDFS-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166304#comment-15166304 ] Hadoop QA commented on HDFS-9843: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 38s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12789334/HDFS-9843.02.patch | | JIRA Issue | HDFS-9843 | | Optional Tests | asflicense mvnsite | | uname | Linux 51bcd33a156c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 954dd57 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14602/console | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Document distcp options required for copying between encrypted locations > > > Key: HDFS-9843 > URL: https://issues.apache.org/jira/browse/HDFS-9843 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp, documentation, encryption >Affects Versions: 2.6.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-9843.00.patch, HDFS-9843.01.patch, > HDFS-9843.02.patch > > > In TransparentEncryption.md#Distcp_considerations document section, we have > "Copying_between_encrypted_and_unencrypted_locations" which requires > -skipcrccheck and -update. > These options should be documented as required for "Copying between encrypted > locations" use cases as well because this involves decrypting source file and > encrypting destination file with a different EDEK, resulting in different > checksum at the destination. Distcp will fail at crc check if -skipcrccheck > if not specified. > This ticket is opened to document the required options for "Copying between > encrypted locations" use cases when using distcp with HDFS encryption. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9734) Refactoring of checksum failure report related codes
[ https://issues.apache.org/jira/browse/HDFS-9734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166039#comment-15166039 ] Kai Zheng commented on HDFS-9734: - bq. One nit is that getCorruptedMap should be getCorruptionMap Right exactly. Thanks for the update, Zhe! > Refactoring of checksum failure report related codes > > > Key: HDFS-9734 > URL: https://issues.apache.org/jira/browse/HDFS-9734 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-12744-v1.patch, HADOOP-12744-v2.patch, > HDFS-9734-v3.patch, HDFS-9734-v4.patch, HDFS-9734-v5.patch, > HDFS-9734-v6.patch, HDFS-9734-v7.patch, HDFS-9734-v8.patch > > > This was from discussion with [~jingzhao] in HDFS-9646. There is some > duplicate code between the client and datanode sides:
> {code}
> private void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node,
>     Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap) {
>   Set<DatanodeInfo> dnSet = corruptionMap.get(blk);
>   if (dnSet == null) {
>     dnSet = new HashSet<>();
>     corruptionMap.put(blk, dnSet);
>   }
>   if (!dnSet.contains(node)) {
>     dnSet.add(node);
>   }
> }
> {code}
> This would resolve the duplication and also simplify the code a bit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
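On Java 8+, the null-check boilerplate in the duplicated snippet collapses to one line with Map.computeIfAbsent. The sketch below uses String/Integer stand-ins for ExtendedBlock/DatanodeInfo so it is self-contained; it illustrates the shape of the shared helper, not the code in any HDFS-9734 patch:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the shared helper the refactoring asks for, with stand-in
// key/value types (the real ones are ExtendedBlock and DatanodeInfo).
public class CorruptionMap {
  private final Map<String, Set<Integer>> corruptionMap = new HashMap<>();

  // Set.add is already a no-op for duplicates, so the explicit
  // contains() check from the original snippet can be dropped too.
  public void addCorruptedBlock(String blk, Integer node) {
    corruptionMap.computeIfAbsent(blk, k -> new HashSet<>()).add(node);
  }

  public Set<Integer> nodesFor(String blk) {
    return corruptionMap.getOrDefault(blk, Collections.emptySet());
  }

  public static void main(String[] args) {
    CorruptionMap m = new CorruptionMap();
    m.addCorruptedBlock("blk_1", 1);
    m.addCorruptedBlock("blk_1", 1); // duplicate report, still one entry
    m.addCorruptedBlock("blk_1", 2);
    System.out.println(m.nodesFor("blk_1").size()); // 2
  }
}
```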
[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval
[ https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15165693#comment-15165693 ] Daniel Templeton commented on HDFS-9782: Wow, checkstyle really doesn't like case statements to be indented... Thanks for jumping in, [~andrew.wang]! bq. If your concern is the linking between the interval and the offset, we could make the offset configuration a percent of the interval. My concern is that the offset interval alters when the metrics are reliably available. I think it violates the principle of least astonishment to have the metrics randomly (literally) show up late by default. I would rather it not be on unless it's needed, and the user turns it on explicitly. bq. I also agree with Robert and would prefer that we didn't add this unit parsing code at all, but that's not a blocker. What's the alternative? I don't think millis is an acceptable unit for something that will likely be hours or days. bq. Also, if you look at BPServiceActor#Scheduler, this is an example of how we can unit test a scheduler like this without sleeps. Food for thought. Now I get what you meant in HDFS-9637 about testing using a clock that can be set by the tests. That seems pretty reasonable. I clearly need to get better acquainted with Mockito. I'll take another pass at it. > RollingFileSystemSink should have configurable roll interval > > > Key: HDFS-9782 > URL: https://issues.apache.org/jira/browse/HDFS-9782 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, > HDFS-9782.003.patch, HDFS-9782.004.patch > > > Right now it defaults to rolling at the top of every hour. Instead that > interval should be configurable. The interval should also allow for some > play so that all hosts don't try to flush their files simultaneously. > I'm filing this in HDFS because I suspect it will involve touching the HDFS > tests. 
If it turns out not to, I'll move it into common instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
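The unit-parsing question raised in the thread ("I don't think millis is an acceptable unit for something that will likely be hours or days") could be addressed along these lines. This is a hedged sketch, not the actual HDFS-9782 patch; the accepted suffixes, the fallback unit, and the method name are all assumptions.

```java
import java.util.concurrent.TimeUnit;

// Sketch of human-friendly interval parsing: "30s", "10m", "1h", "2d".
// Suffix set and minutes-by-default fallback are illustrative choices.
class IntervalParser {
    static long parseToMillis(String value) {
        String v = value.trim().toLowerCase();
        char unit = v.charAt(v.length() - 1);
        switch (unit) {
            case 's': return TimeUnit.SECONDS.toMillis(num(v));
            case 'm': return TimeUnit.MINUTES.toMillis(num(v));
            case 'h': return TimeUnit.HOURS.toMillis(num(v));
            case 'd': return TimeUnit.DAYS.toMillis(num(v));
            default:
                // Bare number: treat as minutes rather than millis, since
                // raw milliseconds are unfriendly for hour-scale intervals.
                return TimeUnit.MINUTES.toMillis(Long.parseLong(v));
        }
    }

    // Strip the one-character unit suffix and parse the remainder.
    private static long num(String v) {
        return Long.parseLong(v.substring(0, v.length() - 1));
    }
}
```

TimeUnit.convert, as suggested in the thread, handles the conversions; only the suffix recognition is custom.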
[jira] [Commented] (HDFS-9837) BlockManager#countNodes should be able to detect duplicated internal blocks
[ https://issues.apache.org/jira/browse/HDFS-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15165690#comment-15165690 ] Hadoop QA commented on HDFS-9837: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 36s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 5 new + 145 unchanged - 13 fixed = 150 total (was 158) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 20s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 5s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 42s {color} | {color:red} Patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 222m 38s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_72 Failed junit tests | hadoop.hdfs.qjournal.TestSecureNNWithQJM | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.TestDFSUpgradeFromImage | | JDK v1.8.0_72 Timed out junit tests | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | org.apache.hadoop.hdfs.TestWriteReadStripedFile | | | org.apache.hadoop.hdfs.TestDFSRemove | | | org.apache.hadoop.hdfs.TestHFlush | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | | |
[jira] [Work started] (HDFS-9824) Unfriendly message while running rename command if target exists
[ https://issues.apache.org/jira/browse/HDFS-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-9824 started by Xiaobing Zhou. --- > Unfriendly message while running rename command if target exists > > > Key: HDFS-9824 > URL: https://issues.apache.org/jira/browse/HDFS-9824 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > In the following case, the message > is not friendly, it's better to show . > Source dir: > {noformat} > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:23 /tmp/src/1.log > {noformat} > Dest dir: > {noformat} > -rw-r--r-- 3 root hdfs 8526 2016-02-17 22:00 /tmp/dest/1.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:17 /tmp/dest/2.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/3.log > -rw-r--r-- 3 root hdfs 8526 2016-02-18 00:18 /tmp/dest/4.log > {noformat} > Running displays unfriendly message > that complains input/output error, while /tmp/dest/1.log> will show . The behavior > of the two should be similar. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9427: Component/s: hdfs-client > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch > > > HDFS defaults to ephemeral ports for the some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9427: Status: Patch Available (was: Open) > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch > > > HDFS defaults to ephemeral ports for the some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-9427: Attachment: HDFS-9427.000.patch Posted initial patch for review, thanks. Will make changes to keep comments/docs consistent in upcoming patches. > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > Attachments: HDFS-9427.000.patch > > > HDFS defaults to ephemeral ports for some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9710) Change DN to send block receipt IBRs in batches
[ https://issues.apache.org/jira/browse/HDFS-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15165669#comment-15165669 ] Jing Zhao commented on HDFS-9710: - Thanks for working on this, Nicholas! The patch looks good to me. Some comments: # In IncrementalBlockReportManager#sendIBRs, the "isDebugEnabled" check can be skipped since we're using slf4j.Logger. # We can add some new unit tests for the new feature/configuration. > Change DN to send block receipt IBRs in batches > --- > > Key: HDFS-9710 > URL: https://issues.apache.org/jira/browse/HDFS-9710 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: h9710_20160201.patch, h9710_20160205.patch, > h9710_20160216.patch, h9710_20160216b.patch, h9710_20160217.patch, > h9710_20160219.patch > > > When a DN has received a block, it immediately sends a block receipt IBR RPC > to NN for reporting the block. Even if a DN has received multiple blocks at > the same time, it still sends multiple RPCs. It does not scale well since NN > has to process a huge number of RPCs when many DNs receiving many blocks at > the same time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
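The batching idea in HDFS-9710 can be sketched as follows. Class and method names are illustrative, not the real IncrementalBlockReportManager API; the point is only that receipts are queued and flushed in one RPC rather than sent individually.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: queue block-receipt IBRs and send them together, so the NN sees
// one RPC per flush instead of one RPC per received block.
class IbrBatcher {
    private final List<String> pendingReceipts = new ArrayList<>();
    private int rpcCount = 0;   // stands in for actual RPCs to the NN

    // Called as each block is received: queue, do not send yet.
    synchronized void blockReceived(String blockId) {
        pendingReceipts.add(blockId);
    }

    // Called periodically (e.g. alongside the heartbeat): one RPC covers
    // every receipt accumulated since the last flush.
    synchronized int sendIBRs() {
        if (!pendingReceipts.isEmpty()) {
            rpcCount++;
            pendingReceipts.clear();
        }
        return rpcCount;
    }
}
```

With many DNs receiving many blocks simultaneously, this bounds NN RPC load by the flush frequency rather than by the block arrival rate.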
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15165663#comment-15165663 ] Xiaobing Zhou commented on HDFS-9427: - There are two more {code} final String DFS_NAMENODE_BACKUP_ADDRESS_DEFAULT = "localhost:50100"; final String DFS_NAMENODE_BACKUP_HTTP_ADDRESS_DEFAULT = "0.0.0.0:50105"; {code} > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > > HDFS defaults to ephemeral ports for the some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval
[ https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15164008#comment-15164008 ] Hadoop QA commented on HDFS-9782: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 9s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 36s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 2m 49s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 22s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s {color} | {color:red} root: patch generated 25 new + 0 unchanged - 0 fixed = 25 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 56s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 49s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 1s {color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 38s {color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 51m 11s {color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 180m 12s {color} | {color:black} {color} | \\ \\ || Reason || Tests || |
[jira] [Commented] (HDFS-9427) HDFS should not default to ephemeral ports
[ https://issues.apache.org/jira/browse/HDFS-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163930#comment-15163930 ] Xiaobing Zhou commented on HDFS-9427: - Thanks [~iwasakims] for the pointer. The numbers started with 9000 suggested by [~vinayrpet] are safe on various OS according to [~jmhsieh] [HBASE-10123 comment|https://issues.apache.org/jira/browse/HBASE-10123?focusedCommentId=13869893=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13869893]. > HDFS should not default to ephemeral ports > -- > > Key: HDFS-9427 > URL: https://issues.apache.org/jira/browse/HDFS-9427 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, namenode >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou >Priority: Critical > Labels: Incompatible > > HDFS defaults to ephemeral ports for the some HTTP/RPC endpoints. This can > cause bind exceptions on service startup if the port is in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
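Why the old defaults collide: legacy ports such as 50070 sit inside the typical Linux ephemeral range, so any outbound connection may have already grabbed them before the daemon binds, while the suggested 9000-range ports sit well below it. A minimal check, assuming the common Linux bounds (exact values vary by OS and are configurable via ip_local_port_range):

```java
// Illustrative check against the typical Linux ephemeral port range.
// The bounds are assumptions; other OSes use different defaults.
class PortCheck {
    static final int EPHEMERAL_LO = 32768;
    static final int EPHEMERAL_HI = 60999;

    // True if a listening daemon on this port risks a bind exception
    // because the kernel may hand the port out for outbound connections.
    static boolean inEphemeralRange(int port) {
        return port >= EPHEMERAL_LO && port <= EPHEMERAL_HI;
    }
}
```

Under these bounds, 50070/50010-style defaults are flagged while 9000-range defaults are not, which matches the reasoning in the HBASE-10123 discussion linked above.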
[jira] [Updated] (HDFS-9734) Refactoring of checksum failure report related codes
[ https://issues.apache.org/jira/browse/HDFS-9734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-9734: Attachment: HDFS-9734-v8.patch Thanks Kai for the update. The v7 patch LGTM. One nit is that {{getCorruptedMap}} should be {{getCorruptionMap}} to match the variable name? I'm attaching a v8 patch with this fix. Please LMK if you agree. Waiting for a fresh Jenkins run in the meantime. > Refactoring of checksum failure report related codes > > > Key: HDFS-9734 > URL: https://issues.apache.org/jira/browse/HDFS-9734 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-12744-v1.patch, HADOOP-12744-v2.patch, > HDFS-9734-v3.patch, HDFS-9734-v4.patch, HDFS-9734-v5.patch, > HDFS-9734-v6.patch, HDFS-9734-v7.patch, HDFS-9734-v8.patch > > > This was from discussion with [~jingzhao] in HDFS-9646. There is some > duplicated code between the client and datanode sides: > {code} > private void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node, > Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap) { > Set<DatanodeInfo> dnSet = corruptionMap.get(blk); > if (dnSet == null) { > dnSet = new HashSet<>(); > corruptionMap.put(blk, dnSet); > } > if (!dnSet.contains(node)) { > dnSet.add(node); > } > } > {code} > This would resolve the duplication and also simplify the code a bit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9843) Document distcp options required for copying between encrypted locations
[ https://issues.apache.org/jira/browse/HDFS-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163919#comment-15163919 ] Chris Nauroth commented on HDFS-9843: - I think this patch missed out on a pre-commit run due to some Jenkins infrastructure problems yesterday. I just resubmitted a pre-commit run manually. > Document distcp options required for copying between encrypted locations > > > Key: HDFS-9843 > URL: https://issues.apache.org/jira/browse/HDFS-9843 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp, documentation, encryption >Affects Versions: 2.6.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-9843.00.patch, HDFS-9843.01.patch, > HDFS-9843.02.patch > > > In TransparentEncryption.md#Distcp_considerations document section, we have > "Copying_between_encrypted_and_unencrypted_locations" which requires > -skipcrccheck and -update. > These options should be documented as required for "Copying between encrypted > locations" use cases as well because this involves decrypting the source file and > encrypting the destination file with a different EDEK, resulting in a different > checksum at the destination. Distcp will fail at the crc check if -skipcrccheck is > not specified. > This ticket is opened to document the required options for "Copying between > encrypted locations" use cases when using distcp with HDFS encryption. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9853) Ozone: Add container definitions
[ https://issues.apache.org/jira/browse/HDFS-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163868#comment-15163868 ] Anu Engineer commented on HDFS-9853: I was not sure that Htrace IDs are used across the board. So even though the Ozone front-end uses UUIDs and hence 128 bit trace IDs, I did not want to bake that assumption into the transport / protocol layer. Once we enable HTrace in Ozone I would think we can easily switch over. As for why the responses have a traceID, it allows logging at the transport layer and easy correlation of these packets instead of relying on a temporal ordering. It is much easier to write log parsing tools with explicit trace IDs, especially when we are in the development phase. An auxiliary benefit (which we don't intend to leverage) is the ability to return multiple responses for a request, as well as easier multi-threading code on the client side. > Ozone: Add container definitions > > > Key: HDFS-9853 > URL: https://issues.apache.org/jira/browse/HDFS-9853 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9853-HDFS-7240.001.patch, > HDFS-9853-HDFS-7240.002.patch > > > This patch introduces protoc definitions that operate against the container > on a datanode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8356) Document missing properties in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163856#comment-15163856 ] Hadoop QA commented on HDFS-8356: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 40s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 59s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 21s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 167m 10s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_72 Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12789611/HDFS-8356.005.patch | | JIRA Issue | HDFS-8356 | | Optional Tests | asflicense compile javac javadoc mvninstall
[jira] [Commented] (HDFS-9853) Ozone: Add container definitions
[ https://issues.apache.org/jira/browse/HDFS-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163843#comment-15163843 ] Colin Patrick McCabe commented on HDFS-9853: Thanks, Anu. {code} 96// A string that identifies this command, we generate Trace ID in Ozone 97// frontend and this allows us to trace that command all over ozone. 98optional string traceID = 2; {code} HTrace trace IDs are 128-bit integers. I think we should avoid using a string here since it is not as efficient as simply using two 64-bit numbers. I also don't see why the response needs a trace ID, since the client knows the ID of the request it sent. > Ozone: Add container definitions > > > Key: HDFS-9853 > URL: https://issues.apache.org/jira/browse/HDFS-9853 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9853-HDFS-7240.001.patch, > HDFS-9853-HDFS-7240.002.patch > > > This patch introduces protoc definitions that operate against the container > on a datanode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
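Colin's suggestion of carrying the 128-bit ID as two 64-bit numbers, with a string form only at logging boundaries, can be sketched as follows. The class name and the hex encoding are assumptions for illustration, not anything committed to the Ozone protocol.

```java
// Sketch: a 128-bit trace ID held as two longs on the wire, rendered as a
// 32-character hex string only when logging. Names/encoding are assumed.
class TraceId {
    final long high;
    final long low;

    TraceId(long high, long low) { this.high = high; this.low = low; }

    // Fixed-width hex so the string form round-trips unambiguously.
    @Override public String toString() {
        return String.format("%016x%016x", high, low);
    }

    // Parse the two 64-bit halves back out of the 32-char hex form.
    static TraceId fromString(String s) {
        return new TraceId(Long.parseUnsignedLong(s.substring(0, 16), 16),
                           Long.parseUnsignedLong(s.substring(16, 32), 16));
    }
}
```

This keeps the protobuf message at two fixed-width integer fields while still giving log-parsing tools a single printable token per request.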
[jira] [Updated] (HDFS-9853) Ozone: Add container definitions
[ https://issues.apache.org/jira/browse/HDFS-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9853: --- Attachment: HDFS-9853-HDFS-7240.002.patch [~cmccabe] Good catch. I have fixed the list to take keys instead of cursor locations in the updated patch. * Find bugs error is in generated code. So ignoring them. * Test failures are not related to this patch. > Ozone: Add container definitions > > > Key: HDFS-9853 > URL: https://issues.apache.org/jira/browse/HDFS-9853 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9853-HDFS-7240.001.patch, > HDFS-9853-HDFS-7240.002.patch > > > This patch introduces protoc definitions that operate against the container > on a datanode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9854) Log cipher suite negotiation more verbosely
[ https://issues.apache.org/jira/browse/HDFS-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163806#comment-15163806 ] Hudson commented on HDFS-9854: -- FAILURE: Integrated in Hadoop-trunk-Commit #9362 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9362/]) HDFS-9854. Log cipher suite negotiation more verbosely. Contributed by (cnauroth: rev d1dd248b756e5a323ac885eefd3f81a639d6b86f) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java > Log cipher suite negotiation more verbosely > --- > > Key: HDFS-9854 > URL: https://issues.apache.org/jira/browse/HDFS-9854 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Labels: encryption, supportability > Fix For: 2.8.0 > > Attachments: HADOOP-12816.001.patch > > > We've had difficulty probing the root cause of performance slowdown with > in-transit encryption using AES-NI. We finally found the root cause was the > Hadoop client did not configure encryption properties correctly, so they did > not negotiate AES cipher suite when creating an encrypted stream pair, > despite the server (a data node) supports it. Existing debug message did not > help. We saw debug message "Server using cipher suite AES/CTR/NoPadding" on > the same data node, but that refers to the communication with other data > nodes. > It would be really helpful to log a debug message if a SASL server configures > AES cipher suite, but the SASL client doesn't, or vice versa. This debug > message should also log the client address to differentiate it from other > stream pairs. > More over, the debug message "Server using cipher suite AES/CTR/NoPadding" > should also be extended to include the client's address. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
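The improvement described above, tagging the cipher-suite debug message with the peer address so concurrent stream pairs on one datanode can be told apart, can be sketched roughly as follows. The class and method names here are invented for illustration; this is not the actual SaslDataTransferServer code.

```java
import java.net.InetAddress;

public class CipherSuiteLogMessage {
    // Illustrative only: extend "Server using cipher suite ..." with the
    // peer address, as the issue description requests.
    static String negotiated(String cipherSuite, InetAddress client) {
        return "Server using cipher suite " + cipherSuite
            + " with client " + client.getHostAddress();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(negotiated("AES/CTR/NoPadding",
            InetAddress.getByName("127.0.0.1")));
    }
}
```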
[jira] [Commented] (HDFS-9853) Ozone: Add container definitions
[ https://issues.apache.org/jira/browse/HDFS-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163801#comment-15163801 ] Colin Patrick McCabe commented on HDFS-9853: Thanks, Anu. If that's the case, it sounds like it would be more appropriate to replace "uint64 start" with "string prevKey" so that the list operation can pick up where it left off. I also question whether count needs to be 64 bits... I have trouble imagining a single list RPC requesting more than a billion elements. > Ozone: Add container definitions > > > Key: HDFS-9853 > URL: https://issues.apache.org/jira/browse/HDFS-9853 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9853-HDFS-7240.001.patch > > > This patch introduces protoc definitions that operate against the container > on a datanode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
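The "string prevKey" suggestion above is the classic cursor-style pagination pattern. A minimal sketch of the idea, with hypothetical names rather than the actual Ozone container code, using a sorted map to stand in for a range-partitioned key space:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class ContainerListing {
    // Keys kept in sorted order, standing in for range-partitioned storage.
    private final TreeMap<String, String> containers = new TreeMap<>();

    void put(String key, String value) {
        containers.put(key, value);
    }

    // Cursor-style listing: resume strictly after the last key the client
    // saw. Unlike a positional "start" index, the cursor stays valid even if
    // keys are added or removed between calls, so pages neither duplicate
    // nor skip entries.
    List<String> list(String prevKey, int count) {
        SortedMap<String, String> tail =
            (prevKey == null) ? containers : containers.tailMap(prevKey, false);
        List<String> page = new ArrayList<>();
        for (String key : tail.keySet()) {
            if (page.size() >= count) {
                break;
            }
            page.add(key);
        }
        return page;
    }
}
```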
[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval
[ https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163795#comment-15163795 ] Andrew Wang commented on HDFS-9782: --- bq. In most clusters, this is not needed. It's only the large (1000-ish node) clusters that will need to worry about staggering the rolls. And then how much staggering is required depends heavily on the cluster. I think 0 is a reasonable default. Is there a downside to having it non-zero for small clusters? It's better to have defaults that work for all cluster sizes. If your concern is the linking between the interval and the offset, we could make the offset configuration a percent of the interval. One nit, we can use TimeUnit.convert rather than using the new constants. I also agree with Robert and would prefer that we didn't add this unit parsing code at all, but that's not a blocker. Also, if you look at BPServiceActor#Scheduler, this is an example of how we can unit test a scheduler like this without sleeps. Food for thought. > RollingFileSystemSink should have configurable roll interval > > > Key: HDFS-9782 > URL: https://issues.apache.org/jira/browse/HDFS-9782 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, > HDFS-9782.003.patch, HDFS-9782.004.patch > > > Right now it defaults to rolling at the top of every hour. Instead that > interval should be configurable. The interval should also allow for some > play so that all hosts don't try to flush their files simultaneously. > I'm filing this in HDFS because I suspect it will involve touching the HDFS > tests. If it turns out not to, I'll move it into common instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
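The TimeUnit.convert suggestion and the offset-as-a-percent-of-interval idea look like this in practice. This is a hedged sketch of the arithmetic, not the actual RollingFileSystemSink code:

```java
import java.util.concurrent.TimeUnit;

public class RollIntervalMath {
    // Derive milliseconds with TimeUnit.convert instead of hand-rolled
    // MILLIS_PER_HOUR-style constants.
    static long intervalMillis(long amount, TimeUnit unit) {
        return TimeUnit.MILLISECONDS.convert(amount, unit);
    }

    // Express the per-host stagger as a percentage of the roll interval,
    // keeping the two settings linked.
    static long offsetMillis(long intervalMillis, int percent) {
        return intervalMillis * percent / 100;
    }

    public static void main(String[] args) {
        long hour = intervalMillis(1, TimeUnit.HOURS);
        System.out.println(hour);                   // 3600000
        System.out.println(offsetMillis(hour, 10)); // 360000
    }
}
```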
[jira] [Commented] (HDFS-9821) HDFS configuration should accept friendly time units
[ https://issues.apache.org/jira/browse/HDFS-9821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163793#comment-15163793 ] Colin Patrick McCabe commented on HDFS-9821: It seems odd to me to have a configuration key that explicitly says "millis" and configure it in terms of hours (or some other time unit). On the other hand, if we have to introduce new unit-less keys for every affected key, that's quite a few new keys. I did a quick survey and found all these: {code} dfs.client.write.byte-array-manager.count-reset-time-period-ms dfs.client.file-block-storage-locations.timeout.millis dfs.client.socketcache.expiryMsec dfs.client.write.exclude.nodes.cache.expiry.interval.millis dfs.datanode.lazywriter.interval.sec dfs.namenode.storageinfo.defragment.interval.ms dfs.namenode.storageinfo.defragment.timeout.ms dfs.namenode.edit.log.autoroll.check.interval.ms dfs.namenode.lazypersist.file.scrub.interval.sec dfs.content-summary.sleep-microsec dfs.datanode.oob.timeout-ms dfs.datanode.cache.revocation.timeout.ms dfs.datanode.cache.revocation.polling.ms dfs.namenode.path.based.cache.refresh.interval.ms dfs.namenode.startup.delay.block.deletion.sec dfs.datanode.scan.period.hours dfs.namenode.path.based.cache.retry.interval.ms dfs.blockreport.intervalMsec dfs.namenode.full.block.report.lease.length.ms dfs.cachereport.intervalMsec dfs.client.read.shortcircuit.streams.cache.expiry.ms dfs.client.mmap.cache.timeout.ms dfs.client.mmap.retry.timeout.ms dfs.client.short.circuit.replica.stale.threshold.ms ... {code} at that point I stopped counting since there were too many. So yes, I guess re-using the existing key names is more pragmatic. 
> HDFS configuration should accept friendly time units > > > Key: HDFS-9821 > URL: https://issues.apache.org/jira/browse/HDFS-9821 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, namenode >Affects Versions: 2.8.0 >Reporter: Arpit Agarwal >Assignee: Xiaobing Zhou > > HDFS configuration keys that define time intervals use units inconsistently > (Hours, seconds, milliseconds). > Not all keys have the unit as part of their name. Related keys may use > different units e.g. {{dfs.blockreport.intervalMsec}} accepts msec while > {{dfs.blockreport.initialDelay}} accepts seconds. Milliseconds is rarely > useful as a time unit which makes these values hard to parse when reading > config files. > We can either > # Let existing keys use friendly units e.g. 100ms, 60s, 5m, 1d, 6w etc. This > can be done compatibly since there will be no conflict with existing valid > configuration. If no suffix is specified just default to the current time > unit. > # Just deprecate the existing keys and define new ones that accept friendly > units. > We continue to use fine-grained time units (usually ms) internally in code > and also accept "ms" option for tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
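Option 1 in the description, accepting a suffixed value and defaulting to the current unit when no suffix is given, might be parsed along these lines. This is a sketch, not Hadoop's actual Configuration.getTimeDuration implementation; weeks ("6w") and years are omitted for brevity.

```java
import java.util.concurrent.TimeUnit;

public class FriendlyDuration {
    // Parse a suffixed value such as "100ms", "60s", "5m", or "1d", falling
    // back to the caller's default unit when no suffix is present. The "ms"
    // check must come before "s" so "100ms" is not read as seconds.
    static long parseToMillis(String value, TimeUnit defaultUnit) {
        String v = value.trim();
        TimeUnit unit = defaultUnit;
        String number = v;
        if (v.endsWith("ms")) {
            unit = TimeUnit.MILLISECONDS;
            number = v.substring(0, v.length() - 2);
        } else if (v.endsWith("s")) {
            unit = TimeUnit.SECONDS;
            number = v.substring(0, v.length() - 1);
        } else if (v.endsWith("m")) {
            unit = TimeUnit.MINUTES;
            number = v.substring(0, v.length() - 1);
        } else if (v.endsWith("h")) {
            unit = TimeUnit.HOURS;
            number = v.substring(0, v.length() - 1);
        } else if (v.endsWith("d")) {
            unit = TimeUnit.DAYS;
            number = v.substring(0, v.length() - 1);
        }
        return TimeUnit.MILLISECONDS.convert(Long.parseLong(number.trim()), unit);
    }

    public static void main(String[] args) {
        System.out.println(parseToMillis("60s", TimeUnit.MILLISECONDS)); // 60000
        System.out.println(parseToMillis("100", TimeUnit.SECONDS));      // 100000
    }
}
```

Values with no suffix keep their current meaning, which is what makes option 1 backward compatible.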
[jira] [Commented] (HDFS-9853) Ozone: Add container definitions
[ https://issues.apache.org/jira/browse/HDFS-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163786#comment-15163786 ] Anu Engineer commented on HDFS-9853: [~cmccabe] You are absolutely right. I concur that we will need range partitioning to do this effectively or we will need to support secondary indices. AFAIK, based on all the community feedback (including yours) I think we are currently favoring range partitioning. I am not working on that part explicitly, but I will let someone like [~cnauroth] or [~jnp] comment on it authoritatively. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9854) Log cipher suite negotiation more verbosely
[ https://issues.apache.org/jira/browse/HDFS-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163776#comment-15163776 ] Wei-Chiu Chuang commented on HDFS-9854: --- Thank you very much for reviewing and committing it! [~cnauroth] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9855) TestAuditLoggerWithCommand#testSetQuota fails with an unexpected AccessControlException
Kuhu Shukla created HDFS-9855: - Summary: TestAuditLoggerWithCommand#testSetQuota fails with an unexpected AccessControlException Key: HDFS-9855 URL: https://issues.apache.org/jira/browse/HDFS-9855 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.8.0 Reporter: Kuhu Shukla Assignee: Kuhu Shukla The newly added setQuota audit log test throws an AccessControlException instead of the expected "FileSystem closed" IOException even though the filesystem has been explicitly closed; other calls behave as expected during a trial test. This is seen on branch-2 but not on trunk, requiring investigation for a possible bug/discrepancy. CC:[~kihwal]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9853) Ozone: Add container definitions
[ https://issues.apache.org/jira/browse/HDFS-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163740#comment-15163740 ] Colin Patrick McCabe commented on HDFS-9853: Hi [~anu], Last time we talked, we hadn't decided on whether Ozone would use range partitioning or hash partitioning. I see here that there is a ListContainerRequestProto: {code} message ListContainerRequestProto { required Pipeline pipeline = 1; required uint64 start = 2 [default = 0]; // Start Index required uint64 count = 3; // Max Results to return } {code} It seems to me that with hash partitioning, it will be impossible to correctly implement this. If the node membership of the cluster changes between one {{ListContainerRequestProto}} request and the next, what is referred to by "start" will change, and we will end up with duplicated or missing results. With range partitioning, it is unclear what "start" refers to. If it is the position of the key inside the total ordering of all keys, it seems that adding another key could cause this to refer to something different. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9854) Log cipher suite negotiation more verbosely
[ https://issues.apache.org/jira/browse/HDFS-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-9854: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) +1 for the patch. I have committed this to trunk, branch-2 and branch-2.8. [~jojochuang], thank you for contributing the patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9395) Make HDFS audit logging consistant
[ https://issues.apache.org/jira/browse/HDFS-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-9395: - Target Version/s: 2.7.3 (was: 2.8.0) > Make HDFS audit logging consistant > -- > > Key: HDFS-9395 > URL: https://issues.apache.org/jira/browse/HDFS-9395 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kuhu Shukla > Fix For: 2.8.0 > > Attachments: HDFS-9395.001.patch, HDFS-9395.002.patch, > HDFS-9395.003.patch, HDFS-9395.004.patch, HDFS-9395.005.patch, > HDFS-9395.006.patch, HDFS-9395.007.patch > > > So, the big question here is what should go in the audit log? All failures, > or just "permission denied" failures? Or, to put it a different way, if > someone attempts to do something and it fails because a file doesn't exist, > is that worth an audit log entry? > We are currently inconsistent on this point. For example, concat, > getContentSummary, addCacheDirective, and setErasureEncodingPolicy create an > audit log entry for all failures, but setOwner, delete, and setAclEntries > attempt to only create an entry for AccessControlException-based failures. > There are a few operations, like allowSnapshot, disallowSnapshot, and > startRollingUpgrade that never create audit log failure entries at all. They > simply log nothing for any failure, and log success for a successful > operation. > So to summarize, different HDFS operations currently fall into 3 categories: > 1. audit-log all failures > 2. audit-log only AccessControlException failures > 3. never audit-log failures > Which category is right? And how can we fix the inconsistency -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9395) Make HDFS audit logging consistant
[ https://issues.apache.org/jira/browse/HDFS-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-9395: - Target Version/s: 2.8.0 (was: 2.7.3) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Moved] (HDFS-9854) Log cipher suite negotiation more verbosely
[ https://issues.apache.org/jira/browse/HDFS-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth moved HADOOP-12816 to HDFS-9854: -- Key: HDFS-9854 (was: HADOOP-12816) Project: Hadoop HDFS (was: Hadoop Common) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9805) TCP_NODELAY not set before SASL handshake in data transfer pipeline
[ https://issues.apache.org/jira/browse/HDFS-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Helmling updated HDFS-9805: Target Version/s: 2.8.0 Ping. Any takers for this change? It's pretty straightforward, though I can add a separate config for it if necessary. > TCP_NODELAY not set before SASL handshake in data transfer pipeline > --- > > Key: HDFS-9805 > URL: https://issues.apache.org/jira/browse/HDFS-9805 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Gary Helmling >Assignee: Gary Helmling > Attachments: HDFS-9805.001.patch > > > There are a few places in the DN -> DN block transfer pipeline where > TCP_NODELAY is not set before doing a SASL handshake: > * in {{DataNode.DataTransfer::run()}} > * in {{DataXceiver::replaceBlock()}} > * in {{DataXceiver::writeBlock()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
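The fix being requested is to set the socket option before the handshake's small request/response messages go over the wire. A minimal illustration with java.net.Socket; the helper name is invented, and the real change would touch the three DataNode/DataXceiver sites listed in the description:

```java
import java.net.Socket;
import java.net.SocketException;

public class HandshakeSocketSetup {
    // Enable TCP_NODELAY (disable Nagle's algorithm) *before* the SASL
    // handshake, so each small handshake message is written out immediately
    // instead of being held back waiting for more data to coalesce.
    static Socket prepareForSaslHandshake(Socket socket) throws SocketException {
        socket.setTcpNoDelay(true);
        // ... SASL negotiation and block transfer would follow here.
        return socket;
    }

    public static void main(String[] args) throws Exception {
        Socket socket = prepareForSaslHandshake(new Socket());
        System.out.println(socket.getTcpNoDelay()); // true
        socket.close();
    }
}
```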
[jira] [Commented] (HDFS-9851) Name node throws NPE when setPermission is called on a path that does not exist
[ https://issues.apache.org/jira/browse/HDFS-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163575#comment-15163575 ] Mingliang Liu commented on HDFS-9851: - Guarding in {{checkOwner}} seems ok to me. {code} try { userfs.setPermission(CHILD_FILE3, new FsPermission((short) 0777)); assertTrue(false); } catch (java.io.FileNotFoundException e) { LOG.info("GOOD: got " + e); } {code} It may be cleaner as follows: {code} try { userfs.setPermission(CHILD_FILE3, new FsPermission((short) 0777)); fail("some error message as the file is not found..."); } catch (java.io.FileNotFoundException ignored) { } {code} > Name node throws NPE when setPermission is called on a path that does not > exist > --- > > Key: HDFS-9851 > URL: https://issues.apache.org/jira/browse/HDFS-9851 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.7.1, 2.7.2 >Reporter: David Yan >Assignee: Brahma Reddy Battula >Priority: Critical > Attachments: HDFS-9851.patch > > > Tried it on both Hadoop 2.7.1 and 2.7.2, and I'm getting the same error when > setPermission is called on a path that does not exist: > {code} > 16/02/23 16:37:03.888 DEBUG > security.UserGroupInformation:FSPermissionChecker.ja > va:164 - ACCESS CHECK: > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker@299b19af, > doCheckOwner=true, ancestorAccess=null, parentAccess=null, access=null, > subAccess=null, ignoreEmptyDir=false > 16/02/23 16:37:03.889 DEBUG ipc.Server:ProtobufRpcEngine.java:631 - Served: > setPermission queueTime= 3 procesingTime= 3 exception= NullPointerException > 16/02/23 16:37:03.890 WARN ipc.Server:Server.java:2068 - IPC Server handler 2 > on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission > from 127.0.0.1:36190 Call#21 Retry#0 > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:247) > at > 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:227) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1704) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1673) > at > org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:61) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1653) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:695) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:453) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) > {code} > I don't see this problem with Hadoop 2.6.x. > The client that issues the setPermission call was compiled with Hadoop 2.2.0 > libraries. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9846) Make DelegationTokenFetcher a Tool
[ https://issues.apache.org/jira/browse/HDFS-9846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163534#comment-15163534 ] Mingliang Liu commented on HDFS-9846: - Thanks for pointing out [HADOOP-12563], which I missed. > Make DelegationTokenFetcher a Tool > -- > > Key: HDFS-9846 > URL: https://issues.apache.org/jira/browse/HDFS-9846 > Project: Hadoop HDFS > Issue Type: Improvement > Components: tools >Reporter: Mingliang Liu >Assignee: Mingliang Liu > > Currently {{org.apache.hadoop.hdfs.tools.DelegationTokenFetcher}} does not > implement the {{Tool}} interface, although it should. > This jira is to track the effort of refactoring the code to implement the > {{Tool}} interface. The main benefits are unified generic option parsing, > modifying the configurations, and integration with {{ToolRunner}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-9846) Make DelegationTokenFetcher a Tool
[ https://issues.apache.org/jira/browse/HDFS-9846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu resolved HDFS-9846. - Resolution: Won't Fix -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9852) hdfs dfs -setfacl error message is misleading
[ https://issues.apache.org/jira/browse/HDFS-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163523#comment-15163523 ] Wei-Chiu Chuang commented on HDFS-9852: --- No test cases are attached, because it's a message-only patch. > hdfs dfs -setfacl error message is misleading > - > > Key: HDFS-9852 > URL: https://issues.apache.org/jira/browse/HDFS-9852 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Labels: supportability > Attachments: HDFS-9852.001.patch > > > When I type > {noformat}hdfs dfs -setfacl -m default:user::rwx{noformat} > It prints error message: > {noformat} > -setfacl: <path> is missing > Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} > <path>]|[--set <acl_spec> <path>] > {noformat} > But actually, it's the path that I missed. A correct command should be > {noformat} > hdfs dfs -setfacl -m default:user::rwx /data > {noformat} > In fact, > {noformat}-setfacl -x | -m | --set{noformat} expects two parameters. > We should print error message like this if it misses one: > {noformat} > -setfacl: Missing either <acl_spec> or <path> > {noformat} > and print the following if it misses two: > {noformat} > -setfacl: Missing arguments: <acl_spec> <path> > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
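The two messages proposed in the description map naturally onto an argument-count check after the flag is parsed. A hypothetical sketch of that distinction, not the actual AclCommands code:

```java
public class SetfaclArgCheck {
    // -setfacl -m/-x/--set needs both an ACL spec and a path, so report
    // which arguments are actually absent instead of a generic message.
    // argsAfterFlag is the number of arguments remaining after the flag.
    static String missingArgsMessage(int argsAfterFlag) {
        if (argsAfterFlag == 0) {
            return "-setfacl: Missing arguments: <acl_spec> <path>";
        }
        if (argsAfterFlag == 1) {
            return "-setfacl: Missing either <acl_spec> or <path>";
        }
        return null; // both arguments supplied
    }

    public static void main(String[] args) {
        // "hdfs dfs -setfacl -m default:user::rwx" supplies one argument.
        System.out.println(missingArgsMessage(1));
    }
}
```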
[jira] [Commented] (HDFS-9852) hdfs dfs -setfacl error message is misleading
[ https://issues.apache.org/jira/browse/HDFS-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163518#comment-15163518 ] Hadoop QA commented on HDFS-9852: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 43s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc 
{color} | {color:green} 1m 15s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 52s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 2s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 29s {color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 19s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12789606/HDFS-9852.001.patch | | JIRA Issue | HDFS-9852 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 41a98c119ee3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git
[jira] [Commented] (HDFS-9395) Make HDFS audit logging consistant
[ https://issues.apache.org/jira/browse/HDFS-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163489#comment-15163489 ] Hudson commented on HDFS-9395: -- FAILURE: Integrated in Hadoop-trunk-Commit #9361 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9361/]) HDFS-9395. Make HDFS audit logging consistant. Contributed by Kuhu (kihwal: rev d27d7fc72e279614212c1eae52a84675073e89fb) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLoggerWithCommands.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Make HDFS audit logging consistant > -- > > Key: HDFS-9395 > URL: https://issues.apache.org/jira/browse/HDFS-9395 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kuhu Shukla > Attachments: HDFS-9395.001.patch, HDFS-9395.002.patch, > HDFS-9395.003.patch, HDFS-9395.004.patch, HDFS-9395.005.patch, > HDFS-9395.006.patch, HDFS-9395.007.patch > > > So, the big question here is what should go in the audit log? All failures, > or just "permission denied" failures? Or, to put it a different way, if > someone attempts to do something and it fails because a file doesn't exist, > is that worth an audit log entry? > We are currently inconsistent on this point. For example, concat, > getContentSummary, addCacheDirective, and setErasureEncodingPolicy create an > audit log entry for all failures, but setOwner, delete, and setAclEntries > attempt to only create an entry for AccessControlException-based failures. > There are a few operations, like allowSnapshot, disallowSnapshot, and > startRollingUpgrade that never create audit log failure entries at all. They > simply log nothing for any failure, and log success for a successful > operation. > So to summarize, different HDFS operations currently fall into 3 categories: > 1. audit-log all failures > 2. 
audit-log only AccessControlException failures > 3. never audit-log failures > Which category is right? And how can we fix the inconsistency? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
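The three audit-logging categories described in the HDFS-9395 discussion above can be sketched as follows. This is a simplified illustration, not the actual FSNamesystem code: the method names are hypothetical, `SecurityException` stands in for Hadoop's `AccessControlException`, and the real `logAuditEvent` has a different signature.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the three audit-logging patterns; names are illustrative.
class AuditPatterns {
    static final List<String> log = new ArrayList<>();

    static void logAuditEvent(boolean success, String cmd) {
        log.add((success ? "allowed=" : "denied=") + cmd);
    }

    // Category 1: audit-log all failures (in the spirit of concat, getContentSummary).
    static void categoryOne(String cmd, Runnable op) {
        boolean success = false;
        try {
            op.run();
            success = true;
        } finally {
            logAuditEvent(success, cmd);  // logged on every outcome
        }
    }

    // Category 2: audit-log only access-control failures (in the spirit of
    // setOwner, delete). SecurityException stands in for AccessControlException.
    static void categoryTwo(String cmd, Runnable op) {
        try {
            op.run();
        } catch (SecurityException ace) {
            logAuditEvent(false, cmd);    // only this failure type is logged
            throw ace;
        }
        logAuditEvent(true, cmd);
    }

    // Category 3: never audit-log failures (in the spirit of allowSnapshot).
    static void categoryThree(String cmd, Runnable op) {
        op.run();                          // any exception skips logging entirely
        logAuditEvent(true, cmd);
    }
}
```

Category 1 means an attempt on a nonexistent file still leaves a "denied" entry; category 3 leaves no trace of any failure, which is the inconsistency the issue calls out.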
[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval
[ https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-9782: --- Attachment: HDFS-9782.004.patch [~rkanter], here's a patch fixing the typo. I'm holding off on other changes until we see eye to eye. :) > RollingFileSystemSink should have configurable roll interval > > > Key: HDFS-9782 > URL: https://issues.apache.org/jira/browse/HDFS-9782 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, > HDFS-9782.003.patch, HDFS-9782.004.patch > > > Right now it defaults to rolling at the top of every hour. Instead that > interval should be configurable. The interval should also allow for some > play so that all hosts don't try to flush their files simultaneously. > I'm filing this in HDFS because I suspect it will involve touching the HDFS > tests. If it turns out not to, I'll move it into common instead.
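The "play" the HDFS-9782 description asks for, so that all hosts don't flush simultaneously, can be sketched as a randomized offset added to each roll boundary. This is only an illustration of the idea, not the actual patch: the class name and the way the offset bound is configured are assumptions.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: next roll time = next interval boundary + random jitter,
// so hosts sharing the same interval don't all flush at the same instant.
class RollScheduler {
    private final long rollIntervalMillis;
    private final long maxOffsetMillis;

    RollScheduler(long intervalMinutes, long maxOffsetMillis) {
        this.rollIntervalMillis = TimeUnit.MINUTES.toMillis(intervalMinutes);
        this.maxOffsetMillis = maxOffsetMillis;
    }

    /** Returns the absolute time (ms) of the next roll after {@code nowMillis}. */
    long nextRollMillis(long nowMillis) {
        // Start of the next interval boundary (e.g. top of the next hour).
        long nextBoundary = (nowMillis / rollIntervalMillis + 1) * rollIntervalMillis;
        // Per-host random "play" within [0, maxOffsetMillis).
        long jitter = maxOffsetMillis == 0
            ? 0 : ThreadLocalRandom.current().nextLong(maxOffsetMillis);
        return nextBoundary + jitter;
    }
}
```

With a 60-minute interval and zero offset this reproduces the current top-of-the-hour behavior; a nonzero offset spreads the flushes across the start of each interval.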
[jira] [Commented] (HDFS-9847) HDFS configuration without time unit name should accept friendly time units
[ https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163459#comment-15163459 ] Chris Douglas commented on HDFS-9847: - I'd hesitate to admit years as a unit. We won't handle leap years, though any timer configured at that granularity doesn't require high precision. Weeks will also be accurate, but do we have any use cases? Isn't it sufficient for those config values to be expressed in days? The convenience methods (e.g., {{getIntTimeSeconds}}) added to {{Configuration}} in 002 add to its API unnecessarily. > HDFS configuration without time unit name should accept friendly time units > --- > > Key: HDFS-9847 > URL: https://issues.apache.org/jira/browse/HDFS-9847 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9847.001.patch, HDFS-9847.002.patch, > timeduration-w-y.patch > > > In HDFS-9821, it talks about letting existing keys use friendly > units, e.g. 60s, 5m, 1d, 6w, etc. But some configuration key names > contain a time unit name, like {{dfs.blockreport.intervalMsec}}, so we can make > the other configurations, those without a time unit in the name, accept friendly > time units. The time unit {{seconds}} is frequently used in HDFS. We can > update these configurations first.
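The friendly-unit parsing discussed in HDFS-9847 (60s, 5m, 1d, ...) can be sketched roughly as below, in the spirit of {{Configuration.getTimeDuration}}. The exact suffix table and fallback behavior here are assumptions for illustration, not the contents of either patch.

```java
import java.util.concurrent.TimeUnit;

// Illustrative sketch of suffixed-duration parsing; the suffix set and the
// bare-number fallback to a default unit are assumptions, not Hadoop's API.
class TimeSuffix {
    static long parse(String value, TimeUnit defaultUnit, TimeUnit targetUnit) {
        String v = value.trim().toLowerCase();
        TimeUnit unit = defaultUnit;            // bare numbers use the default unit
        int cut = v.length();
        if (v.endsWith("ms"))     { unit = TimeUnit.MILLISECONDS; cut -= 2; }
        else if (v.endsWith("s")) { unit = TimeUnit.SECONDS;      cut -= 1; }
        else if (v.endsWith("m")) { unit = TimeUnit.MINUTES;      cut -= 1; }
        else if (v.endsWith("h")) { unit = TimeUnit.HOURS;        cut -= 1; }
        else if (v.endsWith("d")) { unit = TimeUnit.DAYS;         cut -= 1; }
        long raw = Long.parseLong(v.substring(0, cut));
        return targetUnit.convert(raw, unit);   // e.g. "5m" -> 300 seconds
    }
}
```

Note that weeks and years are deliberately absent here, matching the hesitation in the comment above: days compose cleanly with {{TimeUnit}}, while years would silently ignore leap years.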
[jira] [Commented] (HDFS-9395) Make HDFS audit logging consistant
[ https://issues.apache.org/jira/browse/HDFS-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15163454#comment-15163454 ] Kihwal Lee commented on HDFS-9395: -- Thanks for the clarification and reviews. I will commit it soon. > Make HDFS audit logging consistant > -- > > Key: HDFS-9395 > URL: https://issues.apache.org/jira/browse/HDFS-9395 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kuhu Shukla > Attachments: HDFS-9395.001.patch, HDFS-9395.002.patch, > HDFS-9395.003.patch, HDFS-9395.004.patch, HDFS-9395.005.patch, > HDFS-9395.006.patch, HDFS-9395.007.patch > > > So, the big question here is what should go in the audit log? All failures, > or just "permission denied" failures? Or, to put it a different way, if > someone attempts to do something and it fails because a file doesn't exist, > is that worth an audit log entry? > We are currently inconsistent on this point. For example, concat, > getContentSummary, addCacheDirective, and setErasureEncodingPolicy create an > audit log entry for all failures, but setOwner, delete, and setAclEntries > attempt to only create an entry for AccessControlException-based failures. > There are a few operations, like allowSnapshot, disallowSnapshot, and > startRollingUpgrade that never create audit log failure entries at all. They > simply log nothing for any failure, and log success for a successful > operation. > So to summarize, different HDFS operations currently fall into 3 categories: > 1. audit-log all failures > 2. audit-log only AccessControlException failures > 3. never audit-log failures > Which category is right? And how can we fix the inconsistency?
[jira] [Updated] (HDFS-9853) Ozone: Add container definitions
[ https://issues.apache.org/jira/browse/HDFS-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9853: --- Attachment: HDFS-9853-HDFS-7240.001.patch > Ozone: Add container definitions > > > Key: HDFS-9853 > URL: https://issues.apache.org/jira/browse/HDFS-9853 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9853-HDFS-7240.001.patch > > > This patch introduces protoc definitions that operate against the container > on a datanode.