[jira] [Commented] (HDFS-10567) Improve plan command help message
[ https://issues.apache.org/jira/browse/HDFS-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15349001#comment-15349001 ] Hadoop QA commented on HDFS-10567:
--
| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 26s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 7m 30s | HDFS-1312 passed |
| +1 | compile | 0m 51s | HDFS-1312 passed |
| +1 | checkstyle | 0m 27s | HDFS-1312 passed |
| +1 | mvnsite | 1m 0s | HDFS-1312 passed |
| +1 | mvneclipse | 0m 14s | HDFS-1312 passed |
| +1 | findbugs | 1m 51s | HDFS-1312 passed |
| +1 | javadoc | 0m 55s | HDFS-1312 passed |
| +1 | mvninstall | 0m 52s | the patch passed |
| +1 | compile | 0m 47s | the patch passed |
| +1 | javac | 0m 47s | the patch passed |
| +1 | checkstyle | 0m 24s | the patch passed |
| +1 | mvnsite | 0m 58s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 56s | the patch passed |
| +1 | javadoc | 0m 57s | the patch passed |
| -1 | unit | 81m 22s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
|    |  | 102m 17s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.namenode.TestEditLog |
| | hadoop.hdfs.TestEncryptionZonesWithKMS |
| | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:85209cc |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813166/HDFS-10567-HDFS-1312.000.patch |
| JIRA Issue | HDFS-10567 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 12b819a70ba4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-1312 / b2584be |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15913/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15913/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15913/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-9700) DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for DataTransferProtocol
[ https://issues.apache.org/jira/browse/HDFS-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348970#comment-15348970 ] Colin Patrick McCabe commented on HDFS-9700:

Hmm. I think it's confusing to use a configuration key for Hadoop RPC to configure something that isn't Hadoop RPC. We have tons of keys named with {{ipc}}, and all of them relate to Hadoop RPC, not to DataTransferProtocol: {{ipc.client.connect.max.retries}}, {{ipc.server.listen.queue.size}}, {{ipc.client.connect.timeout}}, and so forth.

There are valid cases where you might want a different configuration for RPC versus DataTransferProtocol. For example, conservative users might want to avoid turning on {{TCP_NODELAY}} for {{DataTransferProtocol}}, since it is a new feature and not as well tested as what we do currently. But since we have {{TCP_NODELAY}} on for RPC, they might want to keep that on. I agree that in the long term, {{TCP_NODELAY}} should be used for both. But that's an argument for removing the configuration altogether, not for making it do something other than what it's named.

> DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for
> DataTransferProtocol
>
> Key: HDFS-9700
> URL: https://issues.apache.org/jira/browse/HDFS-9700
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Affects Versions: 2.7.1, 2.6.3
> Reporter: Gary Helmling
> Assignee: Gary Helmling
> Fix For: 2.8.0
>
> Attachments: HDFS-9700-branch-2.7.002.patch,
> HDFS-9700-branch-2.7.003.patch, HDFS-9700-v1.patch, HDFS-9700-v2.patch,
> HDFS-9700.002.patch, HDFS-9700.003.patch, HDFS-9700.004.patch,
> HDFS-9700_branch-2.7-v2.patch, HDFS-9700_branch-2.7.patch
>
> In {{DFSClient.connectToDN()}} and
> {{DFSOutputStream.createSocketForPipeline()}}, we never call
> {{setTcpNoDelay()}} on the constructed socket before sending. In both cases,
> we should respect the value of ipc.client.tcpnodelay in the configuration.
> While this applies whether security is enabled or not, it seems to have a
> bigger impact on latency when security is enabled.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
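The proposed change boils down to calling {{setTcpNoDelay()}} on the data-transfer socket before it is used. A minimal sketch of that idea (the helper name and signature below are illustrative, not the actual DFSClient code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DataTransferSocketSketch {
    // Hypothetical helper mirroring what DFSClient.connectToDN() /
    // DFSOutputStream.createSocketForPipeline() would do if they honored a
    // TCP_NODELAY setting for DataTransferProtocol.
    public static Socket connect(InetSocketAddress addr, int timeoutMs,
                                 boolean tcpNoDelay) throws IOException {
        Socket sock = new Socket();
        // Disabling Nagle's algorithm avoids batching of small writes, which
        // matters most for the handshake round trips when security is enabled.
        sock.setTcpNoDelay(tcpNoDelay);
        sock.connect(addr, timeoutMs);
        return sock;
    }
}
```

Note the option must be set before traffic flows; setting it after the first small writes have already been delayed defeats the purpose.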
[jira] [Created] (HDFS-10579) HDFS web interfaces lack XFS protection
Anu Engineer created HDFS-10579:
---
Summary: HDFS web interfaces lack XFS protection
Key: HDFS-10579
URL: https://issues.apache.org/jira/browse/HDFS-10579
Project: Hadoop HDFS
Issue Type: Bug
Components: datanode, namenode
Affects Versions: 3.0.0-alpha1
Reporter: Anu Engineer
Assignee: Anu Engineer
Fix For: 3.0.0-alpha1

The web interfaces of the Namenode and Datanode do not protect against XFS (cross-frame scripting) attacks. A filter was added in hadoop-common (HADOOP-13008) to prevent XFS attacks. This JIRA proposes to use that filter to protect the namenode and datanode web UIs.
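XFS protection of this kind usually amounts to stamping an X-Frame-Options header on every response so browsers refuse to render the UI inside a frame on another origin. A sketch of the idea, using the JDK's built-in HTTP server classes for illustration (the real HADOOP-13008 filter is a javax.servlet filter; the class name and policy handling here are hypothetical):

```java
import com.sun.net.httpserver.Filter;
import com.sun.net.httpserver.HttpExchange;
import java.io.IOException;

public class XFrameOptionsFilterSketch extends Filter {
    private final String policy; // e.g. "SAMEORIGIN" or "DENY"

    public XFrameOptionsFilterSketch(String policy) {
        this.policy = policy;
    }

    @Override
    public void doFilter(HttpExchange exchange, Chain chain) throws IOException {
        // Every response carries X-Frame-Options, so a malicious page cannot
        // embed the NameNode/DataNode UI in an <iframe> (clickjacking/XFS).
        exchange.getResponseHeaders().set("X-Frame-Options", policy);
        chain.doFilter(exchange);
    }

    @Override
    public String description() {
        return "Adds an X-Frame-Options header to every response";
    }
}
```

Wiring such a filter into every HTTP context is what makes the protection uniform across the NameNode and DataNode web UIs.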
[jira] [Commented] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser
[ https://issues.apache.org/jira/browse/HDFS-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348932#comment-15348932 ] Hadoop QA commented on HDFS-10578:
--
| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 10m 38s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 9m 21s | HDFS-8707 passed |
| +1 | compile | 5m 19s | HDFS-8707 passed with JDK v1.8.0_91 |
| +1 | compile | 5m 21s | HDFS-8707 passed with JDK v1.7.0_101 |
| +1 | mvnsite | 0m 16s | HDFS-8707 passed |
| +1 | mvneclipse | 0m 15s | HDFS-8707 passed |
| +1 | mvninstall | 0m 9s | the patch passed |
| +1 | compile | 5m 8s | the patch passed with JDK v1.8.0_91 |
| +1 | cc | 5m 8s | hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91 with JDK v1.8.0_91 generated 0 new + 3 unchanged - 26 fixed = 3 total (was 29) |
| +1 | javac | 5m 8s | the patch passed |
| +1 | compile | 5m 8s | the patch passed with JDK v1.7.0_101 |
| +1 | cc | 5m 8s | hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101 with JDK v1.7.0_101 generated 0 new + 3 unchanged - 26 fixed = 3 total (was 29) |
| +1 | javac | 5m 8s | the patch passed |
| +1 | mvnsite | 0m 12s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | unit | 6m 53s | hadoop-hdfs-native-client in the patch passed with JDK v1.7.0_101. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
|    |  | 57m 34s |  |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813148/HDFS-10578.HDFS-8707.000.patch |
| JIRA Issue | HDFS-10578 |
| Optional Tests | asflicense compile cc mvnsite javac unit |
| uname | Linux 4c756b7d8972 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-8707 / a903f78 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15911/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15911/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> libhdfs++: Silence compile warnings from URI parser
> ---
>
> Key: HDFS-10578
> URL:
[jira] [Commented] (HDFS-10533) Make DistCpOptions class immutable
[ https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348926#comment-15348926 ] Hadoop QA commented on HDFS-10533:
--
| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
| +1 | mvninstall | 8m 7s | trunk passed |
| +1 | compile | 0m 22s | trunk passed |
| +1 | checkstyle | 0m 18s | trunk passed |
| +1 | mvnsite | 0m 25s | trunk passed |
| +1 | mvneclipse | 0m 17s | trunk passed |
| +1 | findbugs | 0m 32s | trunk passed |
| +1 | javadoc | 0m 15s | trunk passed |
| +1 | mvninstall | 0m 21s | the patch passed |
| +1 | compile | 0m 18s | the patch passed |
| +1 | javac | 0m 18s | the patch passed |
| -0 | checkstyle | 0m 15s | hadoop-tools/hadoop-distcp: The patch generated 27 new + 322 unchanged - 53 fixed = 349 total (was 375) |
| +1 | mvnsite | 0m 18s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 29s | the patch passed |
| +1 | javadoc | 0m 12s | hadoop-tools_hadoop-distcp generated 0 new + 46 unchanged - 4 fixed = 46 total (was 50) |
| +1 | unit | 9m 21s | hadoop-distcp in the patch passed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
|    |  | 22m 56s |  |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:85209cc |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813157/HDFS-10533.001.patch |
| JIRA Issue | HDFS-10533 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux c1620176eb54 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bf74dbf |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15912/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15912/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15912/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Make DistCpOptions class immutable
> --
>
> Key: HDFS-10533
> URL: https://issues.apache.org/jira/browse/HDFS-10533
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: distcp
> Reporter: Mingliang Liu
>
[jira] [Commented] (HDFS-10567) Improve plan command help message
[ https://issues.apache.org/jira/browse/HDFS-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348918#comment-15348918 ] Xiaobing Zhou commented on HDFS-10567:
--
I posted patch v000 for review.
1. Added the unit to '--bandwidth'.
2. Added more detailed comments for the various options.
3. Fixed the 'wetolerate' typo and explained '--thresholdPercentage'.

> Improve plan command help message
> -
>
> Key: HDFS-10567
> URL: https://issues.apache.org/jira/browse/HDFS-10567
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode
> Reporter: Lei (Eddy) Xu
> Assignee: Xiaobing Zhou
> Attachments: HDFS-10567-HDFS-1312.000.patch
>
> {code}
> --bandwidth Maximum disk bandwidth to be consumed by
> diskBalancer. e.g. 10
> --maxerror Describes how many errors can be
> tolerated while copying between a pair
> of disks.
> --outFile to write output to, if not
> specified defaults will be used.
> --plan creates a plan for datanode.
> --thresholdPercentage Percentage skew that wetolerate before
> diskbalancer starts working e.g. 10
> --v Print out the summary of the plan on
> console
> {code}
> We should
> * Put the unit into {{--bandwidth}}, or its help message. Is it an integer or
> float / double number? Not clear in CLI message.
> * Give more details about {{--plan}}. It is not clear what the {{}} is
> for.
> * {{--thresholdPercentage}} has typo {{wetolerate}} in the error message.
> Also it needs to indicate that it is the difference in space
> utilization between two disks / volumes. Is it an integer or float / double
> number?
> Thanks.
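The kind of help text the review asks for (units spelled out, value types stated, the typo gone) can be sketched as a tiny formatter. The option wording below is illustrative only, not the committed patch, and the unit chosen for --bandwidth is an assumption:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PlanHelpSketch {
    // Illustrative improved descriptions; real text belongs in the patch.
    public static Map<String, String> options() {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("--bandwidth",
            "Maximum disk bandwidth (integer, MB/s, assumed unit here) "
                + "to be consumed by diskBalancer, e.g. 10");
        opts.put("--thresholdPercentage",
            "Percentage (integer) of data-density skew between two volumes "
                + "that we tolerate before diskbalancer starts working, e.g. 10");
        return opts;
    }

    // Render the options as an aligned, CLI-style help block.
    public static String format() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : options().entrySet()) {
            sb.append(String.format("%-24s %s%n", e.getKey(), e.getValue()));
        }
        return sb.toString();
    }
}
```

The point of the exercise: a user reading the rendered help should never have to guess whether a value is an integer or a float, or what unit it is in.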
[jira] [Updated] (HDFS-10567) Improve plan command help message
[ https://issues.apache.org/jira/browse/HDFS-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10567:
-
Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-10567) Improve plan command help message
[ https://issues.apache.org/jira/browse/HDFS-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10567:
-
Attachment: HDFS-10567-HDFS-1312.000.patch
[jira] [Updated] (HDFS-10533) Make DistCpOptions class immutable
[ https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-10533:
-
Attachment: HDFS-10533.001.patch

> Make DistCpOptions class immutable
> --
>
> Key: HDFS-10533
> URL: https://issues.apache.org/jira/browse/HDFS-10533
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: distcp
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch,
> HDFS-10533.001.patch
>
> Currently the {{DistCpOptions}} class encapsulates all DistCp options, which
> may be set from the command line (via the {{OptionsParser}}) or may be set
> manually (e.g. construct an instance and call setters). As there are multiple
> option fields, and more to add (e.g. [HDFS-9868], [HDFS-10314]), validating
> them can be cumbersome. Ideally, the {{DistCpOptions}} object should be
> immutable. The benefits are:
> # {{DistCpOptions}} is simpler, easier to use and share, and scales well
> # validation is automatic, e.g. a manually constructed {{DistCpOptions}}
> gets validated before usage
> # the validation error message is well-defined and does not depend on the
> order of setters
> This jira is to track the effort of making {{DistCpOptions}} immutable by
> using a Builder pattern for creation.
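A minimal sketch of the Builder pattern the JIRA proposes: all validation runs once, inside build(), regardless of setter order, and the resulting object is immutable. Field names and the validation rules below are illustrative, not the real DistCpOptions API:

```java
public final class DistCpOptionsSketch {
    private final int maxMaps;          // illustrative option
    private final boolean syncFolder;   // illustrative stand-in for -update
    private final boolean deleteMissing;

    private DistCpOptionsSketch(Builder b) {
        this.maxMaps = b.maxMaps;
        this.syncFolder = b.syncFolder;
        this.deleteMissing = b.deleteMissing;
    }

    public int getMaxMaps() { return maxMaps; }
    public boolean shouldSyncFolder() { return syncFolder; }
    public boolean shouldDeleteMissing() { return deleteMissing; }

    public static class Builder {
        private int maxMaps = 20;
        private boolean syncFolder;
        private boolean deleteMissing;

        public Builder withMaxMaps(int n) { this.maxMaps = n; return this; }
        public Builder withSyncFolder(boolean v) { this.syncFolder = v; return this; }
        public Builder withDeleteMissing(boolean v) { this.deleteMissing = v; return this; }

        // Validation happens exactly once, independent of setter order, so
        // the error message is well-defined.
        public DistCpOptionsSketch build() {
            if (maxMaps <= 0) {
                throw new IllegalArgumentException("maxMaps must be positive");
            }
            if (deleteMissing && !syncFolder) {
                throw new IllegalArgumentException(
                    "delete requires sync/update semantics (illustrative rule)");
            }
            return new DistCpOptionsSketch(this);
        }
    }
}
```

Because there are no setters on the built object, a shared instance can never be observed in a half-configured or invalid state.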
[jira] [Updated] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser
[ https://issues.apache.org/jira/browse/HDFS-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-10578:
---
Attachment: HDFS-10578.HDFS-8707.000.patch

Simple fix.

> libhdfs++: Silence compile warnings from URI parser
> ---
>
> Key: HDFS-10578
> URL: https://issues.apache.org/jira/browse/HDFS-10578
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs-client
> Reporter: James Clampffer
> Assignee: James Clampffer
> Attachments: HDFS-10578.HDFS-8707.000.patch
>
> The URI parser is calling free on buffers that are const qualified, and gcc
> complains. It had already been complaining about some other stuff that we
> had a flag for; I'd like to just add a "-w" flag to silence everything.
[jira] [Updated] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser
[ https://issues.apache.org/jira/browse/HDFS-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-10578:
---
Status: Patch Available (was: Open)
[jira] [Created] (HDFS-10578) libhdfs++: Silence compile warnings from URI parser
James Clampffer created HDFS-10578:
--
Summary: libhdfs++: Silence compile warnings from URI parser
Key: HDFS-10578
URL: https://issues.apache.org/jira/browse/HDFS-10578
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer

The URI parser is calling free on buffers that are const qualified, and gcc complains. It had already been complaining about some other stuff that we had a flag for; I'd like to just add a "-w" flag to silence everything.
[jira] [Commented] (HDFS-10559) DiskBalancer: Use SHA1 for Plan ID
[ https://issues.apache.org/jira/browse/HDFS-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348810#comment-15348810 ] Hadoop QA commented on HDFS-10559:
--
| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 12m 1s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|  0 | mvndep | 0m 13s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 50s | HDFS-1312 passed |
| +1 | compile | 1m 31s | HDFS-1312 passed |
| +1 | checkstyle | 0m 35s | HDFS-1312 passed |
| +1 | mvnsite | 1m 29s | HDFS-1312 passed |
| +1 | mvneclipse | 0m 24s | HDFS-1312 passed |
| +1 | findbugs | 3m 17s | HDFS-1312 passed |
| +1 | javadoc | 1m 15s | HDFS-1312 passed |
|  0 | mvndep | 0m 6s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 20s | the patch passed |
| +1 | compile | 1m 37s | the patch passed |
| +1 | cc | 1m 37s | the patch passed |
| +1 | javac | 1m 37s | the patch passed |
| +1 | checkstyle | 0m 28s | the patch passed |
| +1 | mvnsite | 1m 24s | the patch passed |
| +1 | mvneclipse | 0m 20s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 33s | the patch passed |
| +1 | javadoc | 1m 13s | the patch passed |
| +1 | unit | 0m 54s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 60m 35s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
|    |  | 101m 4s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:85209cc |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813122/HDFS-10559-HDFS-1312.001.patch |
| JIRA Issue | HDFS-10559 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux c531c9f3d882 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-1312 / b2584be |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15906/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15906/testReport/ |
| modules | C:
[jira] [Commented] (HDFS-10533) Make DistCpOptions class immutable
[ https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348787#comment-15348787 ] Hadoop QA commented on HDFS-10533: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 44s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} 
the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 42 new + 322 unchanged - 53 fixed = 364 total (was 375) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s{color} | {color:red} hadoop-tools_hadoop-distcp generated 2 new + 47 unchanged - 3 fixed = 49 total (was 50) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 52s{color} | {color:red} hadoop-distcp in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestExternalCall | | | hadoop.tools.TestDistCpWithRawXAttrs | | | hadoop.tools.TestDistCpWithAcls | | | hadoop.tools.TestDistCpWithXAttrs | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:85209cc | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12812948/HDFS-10533.000.patch | | JIRA Issue | HDFS-10533 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 396d33b02665 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf74dbf | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15910/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/15910/artifact/patchprocess/whitespace-eol.txt | | javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/15910/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-distcp.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15910/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-distcp.txt | | Test Results |
[jira] [Updated] (HDFS-10343) BlockManager#createLocatedBlocks may return blocks on failed storages
[ https://issues.apache.org/jira/browse/HDFS-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kuhu Shukla updated HDFS-10343: --- Attachment: HDFS-10343.001.patch Attaching initial patch that removes failed storages and adjusts the machines array if needed. > BlockManager#createLocatedBlocks may return blocks on failed storages > - > > Key: HDFS-10343 > URL: https://issues.apache.org/jira/browse/HDFS-10343 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Kuhu Shukla > Attachments: HDFS-10343.001.patch > > > Storage state is ignored when building the machines list. Failed storage > removal is not immediate so clients may be directed to bad locations. The > client recovers but it's less than ideal. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
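The fix sketched in the comment above — drop failed storages and adjust the machines array when building block locations — can be illustrated roughly as follows. This is a hypothetical, simplified model for illustration only; the class and field names are invented and are not the actual BlockManager API.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: exclude failed storages up front so clients are never
// directed to locations on storage that is known to be bad.
public class Locations {
    static final class Storage {
        final String datanode;   // hypothetical: host of the replica
        final boolean failed;    // hypothetical: storage-state flag
        Storage(String datanode, boolean failed) {
            this.datanode = datanode;
            this.failed = failed;
        }
    }

    // Build the machines list from only the healthy storages.
    static List<String> machines(List<Storage> storages) {
        List<String> out = new ArrayList<>();
        for (Storage s : storages) {
            if (!s.failed) {     // skip failed storages instead of returning them
                out.add(s.datanode);
            }
        }
        return out;
    }
}
```

The idea is simply that storage state is consulted while assembling the list, rather than relying on the (non-immediate) failed-storage removal to catch up later.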
[jira] [Updated] (HDFS-10577) DiskBalancer: Support building imbalanced MiniDFSCluster from JSON
[ https://issues.apache.org/jira/browse/HDFS-10577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10577: - Summary: DiskBalancer: Support building imbalanced MiniDFSCluster from JSON (was: Support building imbalanced MiniDFSCluster from JSON) > DiskBalancer: Support building imbalanced MiniDFSCluster from JSON > -- > > Key: HDFS-10577 > URL: https://issues.apache.org/jira/browse/HDFS-10577 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode > Reporter: Xiaobing Zhou > Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha1 > > > To build an imbalanced MiniDFSCluster, there is much work to do (e.g. > TestDiskBalancer#testDiskBalancerEndToEnd), and even more when tens of > datanodes are built. On the other hand, the DiskBalancer data model can > easily dump and rebuild any kind of imbalanced cluster (e.g. > data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This > proposes to support building an imbalanced MiniDFSCluster from a dumped JSON > file to make writing tests easy.
[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode
[ https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348742#comment-15348742 ] Hudson commented on HDFS-7959: -- SUCCESS: Integrated in Hadoop-trunk-Commit #10018 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10018/]) HDFS-7959. WebHdfs logging is missing on Datanode (Kihwal Lee via sjlee) (sjlee: rev bf74dbf80dc9379d669779a598950908adffb8a7) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java * hadoop-common-project/hadoop-common/src/main/conf/log4j.properties > WebHdfs logging is missing on Datanode > -- > > Key: HDFS-7959 > URL: https://issues.apache.org/jira/browse/HDFS-7959 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Critical > Labels: BB2015-05-TBR > Fix For: 2.9.0 > > Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, > HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, > HDFS-7959.3.branch-2.patch, HDFS-7959.3.trunk.patch, > HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, > HDFS-7959.trunk.patch > > > After the conversion to netty, webhdfs requests are not logged on datanodes. > The existing jetty log only logs the non-webhdfs requests that come through > the internal proxy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-7959) WebHdfs logging is missing on Datanode
[ https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HDFS-7959: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.9.0 Status: Resolved (was: Patch Available) Committed patch v.3 to trunk and branch-2. Thanks [~kihwal] for your contribution! > WebHdfs logging is missing on Datanode > -- > > Key: HDFS-7959 > URL: https://issues.apache.org/jira/browse/HDFS-7959 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Critical > Labels: BB2015-05-TBR > Fix For: 2.9.0 > > Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, > HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, > HDFS-7959.3.branch-2.patch, HDFS-7959.3.trunk.patch, > HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, > HDFS-7959.trunk.patch > > > After the conversion to netty, webhdfs requests are not logged on datanodes. > The existing jetty log only logs the non-webhdfs requests that come through > the internal proxy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10533) Make DistCpOptions class immutable
[ https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-10533: - Status: Patch Available (was: Open) > Make DistCpOptions class immutable > -- > > Key: HDFS-10533 > URL: https://issues.apache.org/jira/browse/HDFS-10533 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch > > > Currently the {{DistCpOptions}} class encapsulates all DistCp options, which > may be set from command-line (via the {{OptionsParser}}) or may be set > manually (eg construct an instance and call setters). As there are multiple > option fields and more (e.g. [HDFS-9868], [HDFS-10314]) to add, validating > them can be cumbersome. Ideally, the {{DistCpOptions}} object should be > immutable. The benefits are: > # {{DistCpOptions}} is simple and easier to use and share, plus it scales well > # validation is automatic, e.g. manually constructed {{DistCpOptions}} gets > validated before usage > # validation error message is well-defined which does not depend on the order > of setters > This jira is to track the effort of making the {{DistCpOptions}} immutable by > using a Builder pattern for creation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
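The Builder approach proposed in this JIRA can be sketched roughly as below. This is a minimal, hypothetical illustration of the pattern only — the field names (syncFolder, maxMaps) are invented for the example and are not the real DistCpOptions API. It shows the two benefits named above: the built object is immutable, and validation runs exactly once in build(), independent of setter order.

```java
// Hypothetical sketch of an immutable options class built via a Builder.
public final class ImmutableOptions {
    private final boolean syncFolder;
    private final int maxMaps;

    private ImmutableOptions(Builder b) {
        this.syncFolder = b.syncFolder;
        this.maxMaps = b.maxMaps;
    }

    // No setters: instances cannot change after construction.
    public boolean shouldSyncFolder() { return syncFolder; }
    public int getMaxMaps() { return maxMaps; }

    public static final class Builder {
        private boolean syncFolder = false;
        private int maxMaps = 20;   // illustrative default

        public Builder withSyncFolder(boolean v) { this.syncFolder = v; return this; }
        public Builder withMaxMaps(int v) { this.maxMaps = v; return this; }

        // Validation happens here, once, so the error message does not
        // depend on the order in which options were set.
        public ImmutableOptions build() {
            if (maxMaps <= 0) {
                throw new IllegalArgumentException("maxMaps must be positive");
            }
            return new ImmutableOptions(this);
        }
    }
}
```

A manually constructed instance then necessarily goes through the same validation as one produced by an options parser, which is exactly the property the JIRA asks for.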
[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode
[ https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348716#comment-15348716 ] Kihwal Lee commented on HDFS-7959: -- {{TestHttpServerLifecycle}} works fine. Ran multiple times. {noformat} --- T E S T S --- Running org.apache.hadoop.http.TestHttpServerLifecycle Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.307 sec - in org.apache.hadoop.http.TestHttpServerLifecycle Results : Tests run: 7, Failures: 0, Errors: 0, Skipped: 0 {noformat} {{TestOfflineEditsViewer}} HDFS-10572 > WebHdfs logging is missing on Datanode > -- > > Key: HDFS-7959 > URL: https://issues.apache.org/jira/browse/HDFS-7959 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Critical > Labels: BB2015-05-TBR > Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, > HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, > HDFS-7959.3.branch-2.patch, HDFS-7959.3.trunk.patch, > HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, > HDFS-7959.trunk.patch > > > After the conversion to netty, webhdfs requests are not logged on datanodes. > The existing jetty log only logs the non-webhdfs requests that come through > the internal proxy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HDFS-7959) WebHdfs logging is missing on Datanode
[ https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-7959: - Comment: was deleted (was: {{TestHttpServerLifecycle}} works fine. Ran multiple times. {noformat} --- T E S T S --- Running org.apache.hadoop.http.TestHttpServerLifecycle Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.307 sec - in org.apache.hadoop.http.TestHttpServerLifecycle Results : Tests run: 7, Failures: 0, Errors: 0, Skipped: 0 {noformat} {{TestOfflineEditsViewer}} HDFS-10572) > WebHdfs logging is missing on Datanode > -- > > Key: HDFS-7959 > URL: https://issues.apache.org/jira/browse/HDFS-7959 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Critical > Labels: BB2015-05-TBR > Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, > HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, > HDFS-7959.3.branch-2.patch, HDFS-7959.3.trunk.patch, > HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, > HDFS-7959.trunk.patch > > > After the conversion to netty, webhdfs requests are not logged on datanodes. > The existing jetty log only logs the non-webhdfs requests that come through > the internal proxy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10577) Support building imbalanced MiniDFSCluster from JSON
[ https://issues.apache.org/jira/browse/HDFS-10577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10577: - Issue Type: Sub-task (was: Improvement) Parent: HDFS-10576 > Support building imbalanced MiniDFSCluster from JSON > > > Key: HDFS-10577 > URL: https://issues.apache.org/jira/browse/HDFS-10577 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode > Reporter: Xiaobing Zhou > Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha1 > > > To build an imbalanced MiniDFSCluster, there is much work to do (e.g. > TestDiskBalancer#testDiskBalancerEndToEnd), and even more when tens of > datanodes are built. On the other hand, the DiskBalancer data model can > easily dump and rebuild any kind of imbalanced cluster (e.g. > data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This > proposes to support building an imbalanced MiniDFSCluster from a dumped JSON > file to make writing tests easy.
[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support
[ https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348701#comment-15348701 ] Hadoop QA commented on HDFS-10441: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 2m 47s{color} | {color:red} Docker failed to build yetus/hadoop:0cf5e66. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813128/HDFS-10441.HDFS-8707.006.patch | | JIRA Issue | HDFS-10441 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15907/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > libhdfs++: HA namenode support > -- > > Key: HDFS-10441 > URL: https://issues.apache.org/jira/browse/HDFS-10441 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: James Clampffer >Assignee: James Clampffer > Attachments: HDFS-10441.HDFS-8707.000.patch, > HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, > HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, > HDFS-10441.HDFS-8707.006.patch, HDFS-8707.HDFS-10441.001.patch > > > If a cluster is HA enabled then do proper failover. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348687#comment-15348687 ] Kai Sasaki commented on HDFS-10534: --- Thank you so much [~andrew.wang] and [~zhz] for reviewing! A histogram sounds like a good idea, since it would let us build our own metrics flexibly. But I think a percentile metric provided by the NN is also useful because it is exposed through the JMX API, which gives external systems a simple way to fetch node usage metrics. So I think we can make the NN provide the percentile metric here (of course I'll fix the configuration naming issue) and implement the histogram UI in another JIRA. What do you think? > NameNode WebUI should display DataNode usage rate with a certain percentile > --- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui > Reporter: Zhe Zhang > Assignee: Kai Sasaki > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png > > > In addition to *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage rate at a certain percentile (e.g. 90 or 95). We should > add a config option, and another field on the NN WebUI, to display this.
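For reference, a usage-rate percentile of the kind discussed above can be computed with the nearest-rank method. This is a minimal standalone sketch, not the actual NameNode metrics code; the class and method names are invented for illustration.

```java
import java.util.Arrays;

// Nearest-rank percentile over datanode usage ratios (0.0 .. 1.0).
public class Percentile {
    static double percentile(double[] usages, double p) {
        double[] sorted = usages.clone();
        Arrays.sort(sorted);
        // nearest-rank: rank = ceil(p/100 * n), 1-based
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }
}
```

With usages {0.10, 0.20, 0.30, 0.40, 0.95}, the 90th percentile is 0.95 while the median is only 0.30 — exactly the situation where Min/Median/Max alone hides a nearly full node.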
[jira] [Updated] (HDFS-10577) Support building imbalanced MiniDFSCluster from JSON
[ https://issues.apache.org/jira/browse/HDFS-10577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10577: - Issue Type: Improvement (was: Sub-task) Parent: (was: HDFS-1312) > Support building imbalanced MiniDFSCluster from JSON > > > Key: HDFS-10577 > URL: https://issues.apache.org/jira/browse/HDFS-10577 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode > Reporter: Xiaobing Zhou > Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha1 > > > To build an imbalanced MiniDFSCluster, there is much work to do (e.g. > TestDiskBalancer#testDiskBalancerEndToEnd), and even more when tens of > datanodes are built. On the other hand, the DiskBalancer data model can > easily dump and rebuild any kind of imbalanced cluster (e.g. > data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This > proposes to support building an imbalanced MiniDFSCluster from a dumped JSON > file to make writing tests easy.
[jira] [Updated] (HDFS-10577) Support building imbalanced MiniDFSCluster from JSON
[ https://issues.apache.org/jira/browse/HDFS-10577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10577: - Description: To build an imbalanced MiniDFSCluster, there are much work to do (e.g. TestDiskBalancer#testDiskBalancerEndToEnd). It's even more given tens of data nodes are built, on the other hand, Diskbalancer data model can easily dump and build any kinds of imbalanced cluster (e.g. data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This proposes to support building imbalanced MiniDFSCluster from dumped JSON file to make writing tests easy. (was: To build an imbalanced MiniDFSCluster, there are much work to do (e.g. TestDiskBalancer#testDiskBalancerEndToEnd). It's even more given tens of data nodes are built, on the other hand, Diskbalancer data model can easily dump and build any kinds of imbalanced cluster (e.g. data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This proposes to support building MiniDFSCluster from dumped JSON file to make writing tests easy.) > Support building imbalanced MiniDFSCluster from JSON > > > Key: HDFS-10577 > URL: https://issues.apache.org/jira/browse/HDFS-10577 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha1 > > > To build an imbalanced MiniDFSCluster, there are much work to do (e.g. > TestDiskBalancer#testDiskBalancerEndToEnd). It's even more given tens of data > nodes are built, on the other hand, Diskbalancer data model can easily dump > and build any kinds of imbalanced cluster (e.g. > data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This > proposes to support building imbalanced MiniDFSCluster from dumped JSON file > to make writing tests easy. 
[jira] [Updated] (HDFS-10577) Support building imbalanced MiniDFSCluster from JSON
[ https://issues.apache.org/jira/browse/HDFS-10577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10577: - Summary: Support building imbalanced MiniDFSCluster from JSON (was: Support building MiniDFSCluster from JSON) > Support building imbalanced MiniDFSCluster from JSON > > > Key: HDFS-10577 > URL: https://issues.apache.org/jira/browse/HDFS-10577 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode > Reporter: Xiaobing Zhou > Assignee: Xiaobing Zhou > Fix For: 3.0.0-alpha1 > > > To build an imbalanced MiniDFSCluster, there is much work to do (e.g. > TestDiskBalancer#testDiskBalancerEndToEnd), and even more when tens of > datanodes are built. On the other hand, the DiskBalancer data model can > easily dump and rebuild any kind of imbalanced cluster (e.g. > data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This > proposes to support building a MiniDFSCluster from a dumped JSON file to make > writing tests easy.
[jira] [Updated] (HDFS-10441) libhdfs++: HA namenode support
[ https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-10441: --- Attachment: HDFS-10441.HDFS-8707.006.patch New patch, rebased onto trunk and should be ready for review. In addition to the stuff in the last patch: -got rid of duplicate data structures, cleaned up DNS lookup + added retry lookup -got rid of some dead code -replaced the recursive_mutex with a normal mutex in the HANamenodeTracker > libhdfs++: HA namenode support > -- > > Key: HDFS-10441 > URL: https://issues.apache.org/jira/browse/HDFS-10441 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: James Clampffer >Assignee: James Clampffer > Attachments: HDFS-10441.HDFS-8707.000.patch, > HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, > HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, > HDFS-10441.HDFS-8707.006.patch, HDFS-8707.HDFS-10441.001.patch > > > If a cluster is HA enabled then do proper failover. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10577) Support building MiniDFSCluster from JSON
Xiaobing Zhou created HDFS-10577: Summary: Support building MiniDFSCluster from JSON Key: HDFS-10577 URL: https://issues.apache.org/jira/browse/HDFS-10577 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou To build an imbalanced MiniDFSCluster, there is much work to do (e.g. TestDiskBalancer#testDiskBalancerEndToEnd), and even more when tens of datanodes are built. On the other hand, the DiskBalancer data model can easily dump and rebuild any kind of imbalanced cluster (e.g. data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This proposes to support building a MiniDFSCluster from a dumped JSON file to make writing tests easy.
[jira] [Updated] (HDFS-1312) Re-balance disks within a Datanode
[ https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-1312: Fix Version/s: 3.0.0-alpha1 > Re-balance disks within a Datanode > -- > > Key: HDFS-1312 > URL: https://issues.apache.org/jira/browse/HDFS-1312 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode >Reporter: Travis Crawford >Assignee: Anu Engineer > Fix For: 3.0.0-alpha1 > > Attachments: Architecture_and_test_update.pdf, > Architecture_and_testplan.pdf, HDFS-1312.001.patch, HDFS-1312.002.patch, > HDFS-1312.003.patch, HDFS-1312.004.patch, HDFS-1312.005.patch, > HDFS-1312.006.patch, HDFS-1312.007.patch, disk-balancer-proposal.pdf > > > Filing this issue in response to ``full disk woes`` on hdfs-user. > Datanodes fill their storage directories unevenly, leading to situations > where certain disks are full while others are significantly less used. Users > at many different sites have experienced this issue, and HDFS administrators > are taking steps like: > - Manually rebalancing blocks in storage directories > - Decomissioning nodes & later readding them > There's a tradeoff between making use of all available spindles, and filling > disks at the sameish rate. Possible solutions include: > - Weighting less-used disks heavier when placing new blocks on the datanode. > In write-heavy environments this will still make use of all spindles, > equalizing disk use over time. > - Rebalancing blocks locally. This would help equalize disk use as disks are > added/replaced in older cluster nodes. > Datanodes should actively manage their local disk so operator intervention is > not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-1312) Re-balance disks within a Datanode
[ https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-1312: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Re-balance disks within a Datanode > -- > > Key: HDFS-1312 > URL: https://issues.apache.org/jira/browse/HDFS-1312 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode >Reporter: Travis Crawford >Assignee: Anu Engineer > Attachments: Architecture_and_test_update.pdf, > Architecture_and_testplan.pdf, HDFS-1312.001.patch, HDFS-1312.002.patch, > HDFS-1312.003.patch, HDFS-1312.004.patch, HDFS-1312.005.patch, > HDFS-1312.006.patch, HDFS-1312.007.patch, disk-balancer-proposal.pdf > > > Filing this issue in response to ``full disk woes`` on hdfs-user. > Datanodes fill their storage directories unevenly, leading to situations > where certain disks are full while others are significantly less used. Users > at many different sites have experienced this issue, and HDFS administrators > are taking steps like: > - Manually rebalancing blocks in storage directories > - Decomissioning nodes & later readding them > There's a tradeoff between making use of all available spindles, and filling > disks at the sameish rate. Possible solutions include: > - Weighting less-used disks heavier when placing new blocks on the datanode. > In write-heavy environments this will still make use of all spindles, > equalizing disk use over time. > - Rebalancing blocks locally. This would help equalize disk use as disks are > added/replaced in older cluster nodes. > Datanodes should actively manage their local disk so operator intervention is > not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10559) DiskBalancer: Use SHA1 for Plan ID
[ https://issues.apache.org/jira/browse/HDFS-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10559: Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer: Use SHA1 for Plan ID > -- > > Key: HDFS-10559 > URL: https://issues.apache.org/jira/browse/HDFS-10559 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Xiaobing Zhou >Priority: Trivial > Labels: newbie > Attachments: HDFS-10559-HDFS-1312.000.patch, > HDFS-10559-HDFS-1312.001.patch > > > We should use SHA1 instead of Sha512 as the plan id. Since it is much shorter > and easier to handle. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10567) Improve plan command help message
[ https://issues.apache.org/jira/browse/HDFS-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10567: Parent Issue: HDFS-10576 (was: HDFS-1312) > Improve plan command help message > - > > Key: HDFS-10567 > URL: https://issues.apache.org/jira/browse/HDFS-10567 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Lei (Eddy) Xu >Assignee: Xiaobing Zhou > > {code} > --bandwidth Maximum disk bandwidth to be consumed by > diskBalancer. e.g. 10 > --maxerror Describes how many errors can be > tolerated while copying between a pair > of disks. > --outFile to write output to, if not > specified defaults will be used. > --plan creates a plan for datanode. > --thresholdPercentagePercentage skew that wetolerate before > diskbalancer starts working e.g. 10 > --v Print out the summary of the plan on > console > {code} > We should > * Put the unit into {{--bandwidth}}, or its help message. Is it an integer or > float / double number? Not clear in CLI message. > * Give more details about {{--plan}}. It is not clear what the {{}} is > for. > * {{--thresholdPercentage}}, has typo {{wetolerate}} in the error message. > Also it needs to indicated that it is the difference between space > utilization between two disks / volumes. Is it an integer or float / double > number? > Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
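Addressing the review points above, the revised help text might read roughly as follows. This wording is only a suggestion, not the committed text, and the MB/s unit for --bandwidth is an assumption about what the option measures:

```
--bandwidth <arg>            Maximum disk bandwidth in MB/s (integer)
                             consumed by diskBalancer, e.g. 10
--maxerror <arg>             Number of errors (integer) tolerated while
                             copying between a pair of disks
--outFile <arg>              File to write output to; if not specified,
                             defaults will be used
--plan <arg>                 Creates a plan for the datanode named by
                             <arg> (hostname or IP address)
--thresholdPercentage <arg>  Percentage skew (integer) in space utilization
                             between two volumes that we tolerate before
                             diskBalancer starts working, e.g. 10
--v                          Print the summary of the plan on the console
```

This resolves each point raised: units and numeric types are stated, the meaning of --plan's argument is spelled out, and the "wetolerate" typo is gone.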
[jira] [Updated] (HDFS-10503) DiskBalancer: simplify adding command options
[ https://issues.apache.org/jira/browse/HDFS-10503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10503: Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer: simplify adding command options > -- > > Key: HDFS-10503 > URL: https://issues.apache.org/jira/browse/HDFS-10503 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > The command options must be added in two places (e.g. > org.apache.hadoop.hdfs.tools#addXXCommand and XXCommand#XXCommand) in order > to run the commands correctly. This can be avoided to keep code succinct. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10566) Submit plan request should throw exception if Datanode is undergoing an upgrade.
[ https://issues.apache.org/jira/browse/HDFS-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10566: Parent Issue: HDFS-10576 (was: HDFS-1312) > Submit plan request should throw exception if Datanode is undergoing an > upgrade. > - > > Key: HDFS-10566 > URL: https://issues.apache.org/jira/browse/HDFS-10566 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Jitendra Nath Pandey >Assignee: Xiaobing Zhou > > If datanode is in upgrade, it might be simpler to just refuse balancing. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
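The guard HDFS-10566 asks for can be sketched as below. This is an assumption-laden sketch, not the datanode's real API: `DiskBalancerException` is a local stand-in for the real exception type, and the upgrade flag is passed in as a plain boolean where the real check would consult the datanode's rolling-upgrade state:

```java
public class SubmitPlanGuard {
    // Local stand-in for the real exception type surfaced to the client.
    public static class DiskBalancerException extends RuntimeException {
        public DiskBalancerException(String msg) { super(msg); }
    }

    // Reject submitPlan outright while the datanode is upgrading, rather than
    // letting disk-balancer block moves race with the upgrade.
    public static void checkNotUpgrading(boolean upgradeInProgress) {
        if (upgradeInProgress) {
            throw new DiskBalancerException(
                "Datanode is undergoing an upgrade; submitPlan is refused.");
        }
    }

    public static void main(String[] args) {
        checkNotUpgrading(false); // normal case: no exception, plan accepted
        try {
            checkNotUpgrading(true);
        } catch (DiskBalancerException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```

Failing fast at submit time gives the operator a clear error instead of a balancer run that silently conflicts with the upgrade.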
[jira] [Updated] (HDFS-9849) DiskBalancer : reduce lock path in shutdown code
[ https://issues.apache.org/jira/browse/HDFS-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9849: --- Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer : reduce lock path in shutdown code > > > Key: HDFS-9849 > URL: https://issues.apache.org/jira/browse/HDFS-9849 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > > In HDFS-9671, [~arpitagarwal] commented that we can possibly reduce the code > path that is holding a lock while shutting down the diskBalancer. This jira > tracks that improvement. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9502) DiskBalancer : Replace Node and Data Density with Weighted Mean and Variance
[ https://issues.apache.org/jira/browse/HDFS-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9502: --- Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer : Replace Node and Data Density with Weighted Mean and Variance > > > Key: HDFS-9502 > URL: https://issues.apache.org/jira/browse/HDFS-9502 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9502-HDFS-1312.001.patch > > > We use notions called Data Density which are similar to weighted > mean and variance. Make sure the computations map directly to these concepts, > since they are easier to understand than the density as currently defined in Disk > Balancer.
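One plausible reading of the weighted mean and variance HDFS-9502 proposes, weighting each volume's utilization by its capacity; the exact formulas in the patch may differ, so treat this as a sketch of the concept:

```java
public class VolumeStats {
    // Capacity-weighted mean utilization: total used space / total capacity.
    public static double weightedMean(double[] used, double[] capacity) {
        double usedSum = 0, capSum = 0;
        for (int i = 0; i < used.length; i++) {
            usedSum += used[i];
            capSum += capacity[i];
        }
        return usedSum / capSum;
    }

    // Capacity-weighted variance of per-volume utilization around that mean;
    // larger values mean the node's volumes are more imbalanced.
    public static double weightedVariance(double[] used, double[] capacity) {
        double mean = weightedMean(used, capacity);
        double capSum = 0, acc = 0;
        for (int i = 0; i < used.length; i++) {
            double util = used[i] / capacity[i];
            acc += capacity[i] * (util - mean) * (util - mean);
            capSum += capacity[i];
        }
        return acc / capSum;
    }

    public static void main(String[] args) {
        double[] used = {40, 60};       // GB used per volume (illustrative)
        double[] cap  = {100, 100};     // GB capacity per volume
        System.out.println("mean = " + weightedMean(used, cap));     // 0.5
        System.out.println("var  = " + weightedVariance(used, cap)); // 0.01
    }
}
```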
[jira] [Updated] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi
[ https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9850: --- Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer : Explore removing references to FsVolumeSpi > -- > > Key: HDFS-9850 > URL: https://issues.apache.org/jira/browse/HDFS-9850 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > > In HDFS-9671, [~arpitagarwal] commented that we should explore the > possibility of removing references to FsVolumeSpi at any point and only deal > with storage ID. We are not sure if this is possible, this JIRA is to explore > if that can be done without issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9462) DiskBalancer: Add Scan Command
[ https://issues.apache.org/jira/browse/HDFS-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9462: --- Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer: Add Scan Command > -- > > Key: HDFS-9462 > URL: https://issues.apache.org/jira/browse/HDFS-9462 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-9462-HDFS-1312.000.patch > > > This is to propose being able to scan all the nodes that we send various > plans to. In order to do the scan, the scan command will talk to all involved > data nodes through the cluster interface (HDFS-9449) and data models (HDFS-9420), > compare the hash tag it gets back to make sure the plan is the one > we are interested in, and print out the results. > As a bonus, it should support the ability to print out a diff of what > happened when a DiskBalancer run is complete. Assuming the state of the > cluster is saved to a file before.json, there should be two kinds of diffs: > 1. Overall, what happened in the cluster vs. before.json -- just a summary > 2. For a specific node -- just like the report command, we should be able to pass > in a node and see the changes against before.json
[jira] [Updated] (HDFS-10514) Augment QueryDiskBalancerPlan to return storage id/type of source/dest volumes
[ https://issues.apache.org/jira/browse/HDFS-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10514: Parent Issue: HDFS-10576 (was: HDFS-1312) > Augment QueryDiskBalancerPlan to return storage id/type of source/dest volumes > -- > > Key: HDFS-10514 > URL: https://issues.apache.org/jira/browse/HDFS-10514 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10514-HDFS-1312.000.patch, > HDFS-10514-HDFS-1312.001.patch > > > DiskBalancerWorkEntry returned by QueryDiskBalancerPlan only contains paths > of source/dest volumes. It's preferable to get storage id/storage type too. > Scan command could show a rich set of information how data is moved between > different volumes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9550) DiskBalancer: Add Run Command
[ https://issues.apache.org/jira/browse/HDFS-9550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9550: --- Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer: Add Run Command > - > > Key: HDFS-9550 > URL: https://issues.apache.org/jira/browse/HDFS-9550 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-9550-HDFS-1312.000.patch > > > Run is a convenience command that wraps the plan and execute commands > to make it easy for a user to run the disk balancer.
[jira] [Commented] (HDFS-10559) DiskBalancer: Use SHA1 for Plan ID
[ https://issues.apache.org/jira/browse/HDFS-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348634#comment-15348634 ] Anu Engineer commented on HDFS-10559: - I agree. Until we have better infrastructure for testing commands, can you please run this command on a physical machine and post its output here? > DiskBalancer: Use SHA1 for Plan ID > -- > > Key: HDFS-10559 > URL: https://issues.apache.org/jira/browse/HDFS-10559 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Xiaobing Zhou >Priority: Trivial > Labels: newbie > Attachments: HDFS-10559-HDFS-1312.000.patch, > HDFS-10559-HDFS-1312.001.patch > > > We should use SHA1 instead of SHA512 as the plan ID, since it is much shorter > and easier to handle.
[jira] [Updated] (HDFS-10558) DiskBalancer: Print the full path to plan file
[ https://issues.apache.org/jira/browse/HDFS-10558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10558: Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer: Print the full path to plan file > --- > > Key: HDFS-10558 > URL: https://issues.apache.org/jira/browse/HDFS-10558 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Xiaobing Zhou >Priority: Minor > Labels: newbie > Attachments: HDFS-10558-HDFS-1312.000.patch > > > We should print the full path to plan file when plan command is being run. > That makes it easy to give that path to -execute option. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10560) DiskBalancer: Reuse ObjectMapper instance to improve the performance
[ https://issues.apache.org/jira/browse/HDFS-10560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10560: Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer: Reuse ObjectMapper instance to improve the performance > > > Key: HDFS-10560 > URL: https://issues.apache.org/jira/browse/HDFS-10560 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-10560-HDFS-1312.001.patch, > HDFS-10560-HDFS-1312.002.patch, HDFS-10560-HDFS-1312.003.patch > > > In branch HDFS-1312, there are many places that use {{ObjectMapper}} instances to > do the JSON-object transform. But an {{ObjectMapper}} instance is relatively > heavy, so we should reuse instances where possible. In addition, {{ObjectMapper}} is > thread-safe; see this link: http://wiki.fasterxml.com/JacksonFAQ. > Related issues: HDFS-9724, HDFS-9768; see those issues for details.
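Jackson is not on the default classpath, so the sketch below shows the same create-once, share-everywhere pattern with the stdlib's thread-safe `Pattern`; an `ObjectMapper` would be held the same way, e.g. `private static final ObjectMapper MAPPER = new ObjectMapper();` (field and class names here are illustrative):

```java
import java.util.regex.Pattern;

public class SharedInstanceDemo {
    // Built once at class load and shared by all threads -- the same shape
    // HDFS-10560 applies to Jackson's thread-safe ObjectMapper, instead of
    // constructing a fresh instance on every call.
    private static final Pattern PLAN_FILE = Pattern.compile(".+\\.plan\\.json");

    public static boolean looksLikePlanFile(String name) {
        // Matcher is cheap and created per call; the costly compiled
        // Pattern (analogous to the ObjectMapper) is reused.
        return PLAN_FILE.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksLikePlanFile("dn1.plan.json")); // true
        System.out.println(looksLikePlanFile("dn1.txt"));       // false
    }
}
```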
[jira] [Updated] (HDFS-10553) DiskBalancer: Rename Tools/DiskBalancer class to Tools/DiskBalancerCLI
[ https://issues.apache.org/jira/browse/HDFS-10553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10553: Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer: Rename Tools/DiskBalancer class to Tools/DiskBalancerCLI > -- > > Key: HDFS-10553 > URL: https://issues.apache.org/jira/browse/HDFS-10553 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Minor > Fix For: HDFS-1312 > > > Rename the Tools/DiskBalancer, since we have server/DiskBalancer class. This > is confusing when reading code. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug
[ https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-10562: Parent Issue: HDFS-10576 (was: HDFS-1312) > DiskBalancer: update documentation on how to report issues and debug > > > Key: HDFS-10562 > URL: https://issues.apache.org/jira/browse/HDFS-10562 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Minor > Fix For: HDFS-1312 > > Attachments: HDFS-10562-HDFS-1312.001.patch > > > Add a section in the diskbalancer documentation on how to report issues and > how to debug diskbalancer usage. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10576) DiskBalancer future work items
Anu Engineer created HDFS-10576: --- Summary: DiskBalancer future work items Key: HDFS-10576 URL: https://issues.apache.org/jira/browse/HDFS-10576 Project: Hadoop HDFS Issue Type: Bug Components: balancer & mover Affects Versions: 2.9.0 Reporter: Anu Engineer Assignee: Anu Engineer Fix For: 2.9.0 This is a master JIRA for future work items for disk balancer. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10559) DiskBalancer: Use SHA1 for Plan ID
[ https://issues.apache.org/jira/browse/HDFS-10559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10559: - Attachment: HDFS-10559-HDFS-1312.001.patch Thanks [~anu] for the review. Posted patch v001 after running a thorough grep to find the remaining omissions. The execute command is not in the test path, which is why it was missed. > DiskBalancer: Use SHA1 for Plan ID > -- > > Key: HDFS-10559 > URL: https://issues.apache.org/jira/browse/HDFS-10559 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Xiaobing Zhou >Priority: Trivial > Labels: newbie > Attachments: HDFS-10559-HDFS-1312.000.patch, > HDFS-10559-HDFS-1312.001.patch > > > We should use SHA1 instead of SHA512 as the plan ID, since it is much shorter > and easier to handle.
[jira] [Updated] (HDFS-10575) webhdfs fails with filenames including semicolons
[ https://issues.apache.org/jira/browse/HDFS-10575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-10575: -- Attachment: curl_request.txt dfs_copyfrom_local_traffic.txt > webhdfs fails with filenames including semicolons > - > > Key: HDFS-10575 > URL: https://issues.apache.org/jira/browse/HDFS-10575 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 2.7.0 >Reporter: Bob Hansen > Attachments: curl_request.txt, dfs_copyfrom_local_traffic.txt > > > Via webhdfs or native HDFS, we can create files with semicolons in their > names: > {code} > bhansen@::1 /tmp$ hdfs dfs -copyFromLocal /tmp/data > "webhdfs://localhost:50070/foo;bar" > bhansen@::1 /tmp$ hadoop fs -ls / > Found 1 items > -rw-r--r-- 2 bhansen supergroup 9 2016-06-24 12:20 /foo;bar > {code} > Attempting to fetch the file via webhdfs fails: > {code} > bhansen@::1 /tmp$ curl -L > "http://localhost:50070/webhdfs/v1/foo%3Bbar?user.name=bhansen=OPEN; > {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File > does not exist: /foo\n\tat > org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)\n\tat > > org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)\n\tat > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)\n\tat > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)\n\tat > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)\n\tat > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)\n\tat > > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)\n\tat > > 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)\n\tat > > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat > > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat > org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat > org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat > org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat > java.security.AccessController.doPrivileged(Native Method)\n\tat > javax.security.auth.Subject.doAs(Subject.java:422)\n\tat > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat > org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n"}} > {code} > It appears (from the attached TCP dump in curl_request.txt) that the > namenode's redirect unescapes the semicolon, and the DataNode's HTTP server > is splitting the request at the semicolon, and failing to find the file "foo". > Interesting side notes: > * In the attached dfs_copyfrom_local_traffic.txt, you can see the > copyFromLocal command writing the data to "foo;bar_COPYING_", which is then > redirected and just writes to "foo". The subsequent rename attempts to > rename "foo;bar_COPYING_" to "foo;bar", but has the same parsing bug so > effectively renames "foo" to "foo;bar". > Here is the full range of special characters that we initially started with > that led to the minimal reproducer above: > {code} > hdfs dfs -copyFromLocal /tmp/data webhdfs://localhost:50070/'~`!@#$%^& > ()-_=+|<.>]}",\\\[\{\*\?\;'\''data' > curl -L > "http://localhost:50070/webhdfs/v1/%7E%60%21%40%23%24%25%5E%26+%28%29-_%3D%2B%7C%3C.%3E%5D%7D%22%2C%5C%5B%7B*%3F%3B%27data?user.name=bhansen=OPEN=0; > {code} > Thanks to [~anatoli.shein] for making a concise reproducer. 
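The failure mode in HDFS-10575 is a path segment whose `%3B` is decoded to a literal `;` one hop too early, after which the datanode's HTTP server splits the request at the semicolon. A small sketch of the correct round trip (helper name is illustrative, and `URLEncoder` is form encoding, hence the `+` fix-up):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SemicolonEscapeDemo {
    // Percent-encode a single URL path segment. URLEncoder targets
    // form encoding, so swap its '+' back to %20 for path use.
    public static String encodeSegment(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8).replace("+", "%20");
    }

    public static void main(String[] args) {
        String name = "foo;bar";
        String encoded = encodeSegment(name); // foo%3Bbar
        String decoded = URLDecoder.decode(encoded, StandardCharsets.UTF_8);
        System.out.println(encoded + " -> " + decoded);
        // Decoding exactly once restores "foo;bar". If an intermediate hop
        // (here, the namenode redirect) decodes early, the next parser sees
        // a bare ';' and truncates the file name to "foo".
        System.out.println(decoded.substring(0, decoded.indexOf(';'))); // foo
    }
}
```

The fix direction this suggests is keeping the segment percent-encoded in the redirect URL so the datanode is the first and only component to decode it.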
[jira] [Created] (HDFS-10575) webhdfs fails with filenames including semicolons
Bob Hansen created HDFS-10575: - Summary: webhdfs fails with filenames including semicolons Key: HDFS-10575 URL: https://issues.apache.org/jira/browse/HDFS-10575 Project: Hadoop HDFS Issue Type: Bug Components: webhdfs Affects Versions: 2.7.0 Reporter: Bob Hansen Via webhdfs or native HDFS, we can create files with semicolons in their names: {code} bhansen@::1 /tmp$ hdfs dfs -copyFromLocal /tmp/data "webhdfs://localhost:50070/foo;bar" bhansen@::1 /tmp$ hadoop fs -ls / Found 1 items -rw-r--r-- 2 bhansen supergroup 9 2016-06-24 12:20 /foo;bar {code} Attempting to fetch the file via webhdfs fails: {code} bhansen@::1 /tmp$ curl -L "http://localhost:50070/webhdfs/v1/foo%3Bbar?user.name=bhansen=OPEN; {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /foo\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n"}} {code} It appears (from the attached TCP dump in curl_request.txt) that the namenode's redirect unescapes the semicolon, and the DataNode's HTTP server is splitting the request at the semicolon, and failing to find the file "foo". Interesting side notes: * In the attached dfs_copyfrom_local_traffic.txt, you can see the copyFromLocal command writing the data to "foo;bar_COPYING_", which is then redirected and just writes to "foo". The subsequent rename attempts to rename "foo;bar_COPYING_" to "foo;bar", but has the same parsing bug so effectively renames "foo" to "foo;bar". Here is the full range of special characters that we initially started with that led to the minimal reproducer above: {code} hdfs dfs -copyFromLocal /tmp/data webhdfs://localhost:50070/'~`!@#$%^& ()-_=+|<.>]}",\\\[\{\*\?\;'\''data' curl -L "http://localhost:50070/webhdfs/v1/%7E%60%21%40%23%24%25%5E%26+%28%29-_%3D%2B%7C%3C.%3E%5D%7D%22%2C%5C%5B%7B*%3F%3B%27data?user.name=bhansen=OPEN=0; {code} Thanks to [~anatoli.shein] for making a concise reproducer. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10574) webhdfs fails with filenames including semicolons
Bob Hansen created HDFS-10574: - Summary: webhdfs fails with filenames including semicolons Key: HDFS-10574 URL: https://issues.apache.org/jira/browse/HDFS-10574 Project: Hadoop HDFS Issue Type: Bug Components: webhdfs Affects Versions: 2.7.0 Reporter: Bob Hansen Via webhdfs or native HDFS, we can create files with semicolons in their names: {code} bhansen@::1 /tmp$ hdfs dfs -copyFromLocal /tmp/data "webhdfs://localhost:50070/foo;bar" bhansen@::1 /tmp$ hadoop fs -ls / Found 1 items -rw-r--r-- 2 bhansen supergroup 9 2016-06-24 12:20 /foo;bar {code} Attempting to fetch the file via webhdfs fails: {code} bhansen@::1 /tmp$ curl -L "http://localhost:50070/webhdfs/v1/foo%3Bbar?user.name=bhansen=OPEN; {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /foo\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n"}} {code} It appears (from the attached TCP dump in curl_request.txt) that the namenode's redirect unescapes the semicolon, and the DataNode's HTTP server is splitting the request at the semicolon, and failing to find the file "foo". Interesting side notes: * In the attached dfs_copyfrom_local_traffic.txt, you can see the copyFromLocal command writing the data to "foo;bar_COPYING_", which is then redirected and just writes to "foo". The subsequent rename attempts to rename "foo;bar_COPYING_" to "foo;bar", but has the same parsing bug so effectively renames "foo" to "foo;bar". Here is the full range of special characters that we initially started with that led to the minimal reproducer above: {code} hdfs dfs -copyFromLocal /tmp/data webhdfs://localhost:50070/'~`!@#$%^& ()-_=+|<.>]}",\\\[\{\*\?\;'\''data' curl -L "http://localhost:50070/webhdfs/v1/%7E%60%21%40%23%24%25%5E%26+%28%29-_%3D%2B%7C%3C.%3E%5D%7D%22%2C%5C%5B%7B*%3F%3B%27data?user.name=bhansen=OPEN=0; {code} Thanks to [~anatoli.shein] for making a concise reproducer. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode
[ https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348575#comment-15348575 ] Sangjin Lee commented on HDFS-7959: --- Thanks for updating the patch Kihwal! I'll take a look at it as soon as I can access the JIRA (it's not responding atm). > WebHdfs logging is missing on Datanode > -- > > Key: HDFS-7959 > URL: https://issues.apache.org/jira/browse/HDFS-7959 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Critical > Labels: BB2015-05-TBR > Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, > HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, > HDFS-7959.3.branch-2.patch, HDFS-7959.3.trunk.patch, > HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, > HDFS-7959.trunk.patch > > > After the conversion to netty, webhdfs requests are not logged on datanodes. > The existing jetty log only logs the non-webhdfs requests that come through > the internal proxy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode
[ https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348570#comment-15348570 ] Kihwal Lee commented on HDFS-7959: -- {{TestHttpServerLifecycle}} works fine. Ran multiple times. {noformat} --- T E S T S --- Running org.apache.hadoop.http.TestHttpServerLifecycle Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.307 sec - in org.apache.hadoop.http.TestHttpServerLifecycle Results : Tests run: 7, Failures: 0, Errors: 0, Skipped: 0 {noformat} {{TestOfflineEditsViewer}} HDFS-10572 > WebHdfs logging is missing on Datanode > -- > > Key: HDFS-7959 > URL: https://issues.apache.org/jira/browse/HDFS-7959 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Critical > Labels: BB2015-05-TBR > Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, > HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, > HDFS-7959.3.branch-2.patch, HDFS-7959.3.trunk.patch, > HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, > HDFS-7959.trunk.patch > > > After the conversion to netty, webhdfs requests are not logged on datanodes. > The existing jetty log only logs the non-webhdfs requests that come through > the internal proxy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10534: - Hadoop Flags: (was: Reviewed) > NameNode WebUI should display DataNode usage rate with a certain percentile > --- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui >Reporter: Zhe Zhang >Assignee: Kai Sasaki > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png > > > In addition of *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage rate at a certain percentile (e.g. 90 or 95). We should > add a config option, and another filed on NN WebUI, to display this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348563#comment-15348563 ] Zhe Zhang commented on HDFS-10534: -- Thanks Andrew. I just reverted the change. bq. Why not present a histogram rather than a single threshold like this? That way we don't add a new config, present more info, and don't require a restart to change this threshold. In our case we are mostly interested in the 95th percentile because it serves as an alarm that 5% of DNs are becoming hot nodes and will likely cause job failures. A histogram is a nice idea actually. We can think about an appropriate granularity (e.g. every 5%?) for it. The only drawback is that it will add more content to the NN web UI and make it busier -- I imagine it will be a table. bq. This is also a metric that could be calculated in client-side JS from existing information. True. But I think showing it on the NN web UI is more convenient for admins. We proposed the change because the median (50th percentile) is actually a poor metric for illustrating the imbalance level, especially in a busy cluster with, say, > 70% overall utilization. We therefore wanted a "better median". bq. the config says it's a percentile, but it's really a quantile. Good catch. We could change the config to be a real percentile, between 0 and 100. Per above, we could also show a histogram instead. So overall I like the histogram idea. [~lewuathe] What are your thoughts? 
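The histogram idea floated above (e.g. 5% granularity) is cheap to compute from the per-DataNode usage rates the NameNode already has. A minimal sketch, assuming a plain list of usage percentages; the function name is illustrative, not part of the actual NameNode code:

```cpp
#include <array>
#include <vector>

// Bucket per-DataNode usage rates (0.0-100.0 percent) into twenty 5% bins:
// bin 0 = [0,5), bin 1 = [5,10), ..., bin 19 = [95,100]. A value of exactly
// 100% is clamped into the last bin. Hypothetical helper only.
std::array<int, 20> usageHistogram(const std::vector<double>& usagePercents) {
  std::array<int, 20> bins{};  // zero-initialized counts
  for (double u : usagePercents) {
    int idx = static_cast<int>(u / 5.0);
    if (idx > 19) idx = 19;    // clamp 100% into [95,100]
    if (idx < 0) idx = 0;      // guard against bad input
    bins[idx]++;
  }
  return bins;
}
```

Rendered as a small table on the web UI, this would show the full shape of the distribution without any new config or restart.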
[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10534: - Target Version/s: 2.8.0, 2.7.3, 2.9.0, 2.6.5, 3.0.0-alpha1 (was: 3.0.0-alpha1)
[jira] [Commented] (HDFS-8940) Support for large-scale multi-tenant inotify service
[ https://issues.apache.org/jira/browse/HDFS-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348554#comment-15348554 ] Colin Patrick McCabe commented on HDFS-8940: bq. You mean reading inotify messages from the SbNN? It's a very attractive idea from a scalability angle. But how would we handle the staleness? The SbNN could be a few minutes behind the ANN, right? Sorry for the misunderstanding. I wasn't talking about HDFS HA. The point I was making is that you don't want a single point of failure in whatever service you are using to fetch the events from HDFS and put them in Kafka. Perhaps you could also execute the code which fetches events in the context of Kafka itself somehow, to avoid creating a new service? I'm not familiar with the programming model there. > Support for large-scale multi-tenant inotify service > > > Key: HDFS-8940 > URL: https://issues.apache.org/jira/browse/HDFS-8940 > Project: Hadoop HDFS > Issue Type: Improvement > Reporter: Ming Ma > Attachments: Large-Scale-Multi-Tenant-Inotify-Service.pdf > > > HDFS-6634 provides the core inotify functionality. We would like to extend that to provide a large-scale service that tens of thousands of clients can subscribe to.
[jira] [Reopened] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang reopened HDFS-10534: --
[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10534: - Fix Version/s: (was: 3.0.0-alpha1)
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348540#comment-15348540 ] Andrew Wang commented on HDFS-10534: Adding a new metric is compatible, but upon looking at this patch, I had a few questions. * Why not present a histogram rather than a single threshold like this? That way we don't add a new config, present more info, and don't require a restart to change this threshold. * This is also a metric that could be calculated in client-side JS from existing information. * Finally, the config says it's a percentile, but it's really a quantile. Percentiles are 0-100, quantiles are 0-1.0. It seems like the precondition check should also be inclusive of 0 and 1.0, since they are valid quantiles. I'd prefer if we backed out this change while we think about these issues. Thanks folks.
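The percentile-vs-quantile distinction above is easy to make concrete. A hedged sketch of the conversion and the inclusive bounds check; the function name is illustrative and is not the actual Hadoop config-validation code:

```cpp
#include <stdexcept>

// A percentile lives in [0, 100] inclusive; the corresponding quantile
// lives in [0.0, 1.0] inclusive. Both endpoints are valid, so the check
// uses strict < and > only for rejection. Illustrative helper only.
double percentileToQuantile(double percentile) {
  if (percentile < 0.0 || percentile > 100.0) {
    throw std::invalid_argument("percentile must be in [0, 100]");
  }
  return percentile / 100.0;
}
```

With this shape, a config value of 95 maps to the 0.95 quantile, and 0 and 100 are accepted rather than rejected by an exclusive precondition.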
[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode
[ https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348525#comment-15348525 ] Hadoop QA commented on HDFS-7959: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} 
Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s{color} | {color:green} root: The patch generated 0 new + 13 unchanged - 1 fixed = 13 total (was 14) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 24s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:85209cc | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813094/HDFS-7959.3.trunk.patch | | JIRA Issue | HDFS-7959 | | Optional Tests | asflicense mvnsite unit compile javac javadoc mvninstall findbugs checkstyle | | uname | Linux eb1a2860a75c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6314843 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15904/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | unit |
[jira] [Updated] (HDFS-10543) hdfsRead read stops at block boundary
[ https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-10543: --- Resolution: Fixed Status: Resolved (was: Patch Available) > hdfsRead read stops at block boundary > - > > Key: HDFS-10543 > URL: https://issues.apache.org/jira/browse/HDFS-10543 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Xiaowei Zhu > Attachments: HDFS-10543.HDFS-8707.000.patch, > HDFS-10543.HDFS-8707.001.patch, HDFS-10543.HDFS-8707.002.patch, > HDFS-10543.HDFS-8707.003.patch, HDFS-10543.HDFS-8707.004.patch > > > Reproducer: > char *buf2 = new char[file_info->mSize]; > memset(buf2, 0, (size_t)file_info->mSize); > int ret = hdfsRead(fs, file, buf2, file_info->mSize); > delete [] buf2; > if(ret != file_info->mSize) { > std::stringstream ss; > ss << "tried to read " << file_info->mSize << " bytes. but read " << > ret << " bytes"; > ReportError(ss.str()); > hdfsCloseFile(fs, file); > continue; > } > When it runs with a file ~1.4GB large, it will return an error like "tried to > read 146890 bytes. but read 134217728 bytes". The HDFS cluster it runs > against has a block size of 134217728 bytes. So it seems hdfsRead will stop > at a block boundary. Looks like a regression. We should add retry to continue > reading cross blocks in case of files w/ multiple blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10543) hdfsRead read stops at block boundary
[ https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348508#comment-15348508 ] James Clampffer commented on HDFS-10543: Thanks for those last fixes, Xiaowei. In case it wasn't clear, Xiaowei resolved this, but I can't seem to assign jiras to him at the moment. Committed to HDFS-8707.
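The fix the issue calls for ("add retry to continue reading across blocks") follows the standard read-fully loop pattern: a short read at a block boundary should trigger another read for the remainder rather than an early return. A sketch against a generic read callback with the same return contract as libhdfs's `hdfsRead` (>0 bytes read, 0 at EOF, -1 on error); this is illustrative only, not the committed libhdfs++ patch:

```cpp
#include <cstddef>
#include <functional>

// Generic positional reader with the hdfsRead-style contract:
// returns bytes read (>0), 0 at EOF, or -1 on error.
using ReadFn = std::function<long(char* buf, std::size_t len)>;

// Loop until 'length' bytes are consumed, so a short read at a block
// boundary no longer ends the call early. Returns total bytes read,
// which may be < length only at EOF, or -1 on error.
long readFully(const ReadFn& read, char* buf, std::size_t length) {
  std::size_t total = 0;
  while (total < length) {
    long n = read(buf + total, length - total);
    if (n < 0) return -1;  // propagate errors
    if (n == 0) break;     // EOF before 'length' bytes
    total += static_cast<std::size_t>(n);
  }
  return static_cast<long>(total);
}
```

With a 128 MB block size, a 1.4 GB read then becomes roughly eleven iterations of this loop instead of a single call that stops at 134217728 bytes.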
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348500#comment-15348500 ] Zhe Zhang commented on HDFS-10534: -- Should this (adding a metric and adding it to the web UI) be considered an incompatible change? If not, I'd like to backport it to 2.6. [~andrew.wang] [~vinodkv] Any advice? Thanks.
[jira] [Updated] (HDFS-10543) hdfsRead read stops at block boundary
[ https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-10543: --- Assignee: (was: James Clampffer)
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348493#comment-15348493 ] Hudson commented on HDFS-10534: --- SUCCESS: Integrated in Hadoop-trunk-Commit #10016 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10016/]) HDFS-10534. NameNode WebUI should display DataNode usage rate with a (zhz: rev 0424056a77002f4a2334ee2eb240fbc67b676471) * hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html * hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10534: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha1 Status: Resolved (was: Patch Available) Thanks [~lewuathe] for updating the patch and the screenshot. I verified the reported test failures and committed the patch to trunk. Good work!
[jira] [Commented] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs
[ https://issues.apache.org/jira/browse/HDFS-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348414#comment-15348414 ] Weiwei Yang commented on HDFS-10569: Hello [~daryn] [~kihwal] Please help check whether my latest comment and the modifications to the UT make sense. :) > A bug causes OutOfIndex error in BlockListAsLongs > - > > Key: HDFS-10569 > URL: https://issues.apache.org/jira/browse/HDFS-10569 > Project: Hadoop HDFS > Issue Type: Bug > Affects Versions: 2.7.0 > Reporter: Weiwei Yang > Assignee: Weiwei Yang > Priority: Minor > Attachments: HDFS-10569.001.patch, HDFS-10569.002.patch > > > An obvious bug in LongsDecoder.getBlockListAsLongs(): the size of the var *longs* is +2 relative to the size of *values*, but the for-loop accesses *values* using the *longs* index. This will cause an OutOfIndex error.
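The bug class described here is generic: the output list is two slots longer than the input (a header precedes the values), so a loop bounded by the output's length while indexing the input overruns it. A minimal sketch of the corrected shape, in illustrative C++ rather than the actual Java decoder:

```cpp
#include <vector>

// Build an output vector of 'values.size() + 2' longs: a 2-element
// header (block counts) followed by the input values. The buggy decoder
// looped over the output's length while indexing 'values', reading two
// elements past its end; the loop must be bounded by the source instead.
std::vector<long> toBlockListLongs(long numBlocks, long numFinalized,
                                   const std::vector<long>& values) {
  std::vector<long> out;
  out.reserve(values.size() + 2);
  out.push_back(numBlocks);
  out.push_back(numFinalized);
  for (std::size_t i = 0; i < values.size(); i++) {  // bound by source, not dest
    out.push_back(values[i]);
  }
  return out;
}
```

The header layout and names here are assumptions for illustration; the real fix and its unit test live in the HDFS-10569 patches.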
[jira] [Comment Edited] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs
[ https://issues.apache.org/jira/browse/HDFS-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348414#comment-15348414 ] Weiwei Yang edited comment on HDFS-10569 at 6/24/16 3:18 PM: - Hello [~daryn], [~kihwal] Please help to check if my latest comment and the modifications to the UT makes sense. :) was (Author: cheersyang): Hello [~daryn] [~kihwal] Please help to check if my latest comment and the modifications to the UT makes sense. :)
[jira] [Updated] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10440: --- Status: Patch Available (was: In Progress) > Improve DataNode web UI > --- > > Key: HDFS-10440 > URL: https://issues.apache.org/jira/browse/HDFS-10440 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, ui >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, > HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, > HDFS-10440.006.patch, HDFS-10440.007.patch, datanode_2nns.html.002.jpg, > datanode_html.001.jpg, datanode_loading_err.002.jpg, > datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, > dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg > > > At present, datanode web UI doesn't have much information except for node > name and port. Propose to add more information similar to namenode UI, > including, > * Static info (version, block pool and cluster ID) > * Block pools info (BP IDs, namenode address, actor states) > * Storage info (Volumes, capacity used, reserved, left) > * Utilities (logs) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10440: --- Status: In Progress (was: Patch Available)
[jira] [Commented] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348411#comment-15348411 ] Weiwei Yang commented on HDFS-10440: Hello [~vinayrpet] Thank you for the input. I thought the IPC port would be useful because quite a few dfsadmin command utilities use {{datanode_host:ipc_port}} as an identifier. But I also agree with you: I noticed the *Node* entry on the Namenode UI -> Datanodes page lists each datanode with its data port, so using the data port is more consistent. I just uploaded a v7 patch for this.
[jira] [Updated] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10440: --- Attachment: HDFS-10440.007.patch
[jira] [Updated] (HDFS-7959) WebHdfs logging is missing on Datanode
[ https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-7959: - Attachment: HDFS-7959.3.trunk.patch HDFS-7959.3.branch-2.patch > WebHdfs logging is missing on Datanode > -- > > Key: HDFS-7959 > URL: https://issues.apache.org/jira/browse/HDFS-7959 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Critical > Labels: BB2015-05-TBR > Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, > HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, > HDFS-7959.3.branch-2.patch, HDFS-7959.3.trunk.patch, > HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, > HDFS-7959.trunk.patch > > > After the conversion to netty, webhdfs requests are not logged on datanodes. > The existing jetty log only logs the non-webhdfs requests that come through > the internal proxy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348146#comment-15348146 ] Vinayakumar B commented on HDFS-10440: -- I meant 'Data port', not IPC port, since the data port is the one used everywhere: in logs, in fsck, etc.
[jira] [Commented] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348145#comment-15348145 ] Hadoop QA commented on HDFS-10440: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 96m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:85209cc | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813030/HDFS-10440.006.patch | | JIRA Issue | HDFS-10440 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 56717ccf2075 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6314843 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15903/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15903/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15903/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Improve DataNode web UI > --- > > Key: HDFS-10440 > URL: https://issues.apache.org/jira/browse/HDFS-10440 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, ui >Affects Versions: 2.7.0 >Reporter: Weiwei
[jira] [Commented] (HDFS-10536) Standby NN can not trigger log roll after EditLogTailer thread failed 3 times in EditLogTailer.triggerActiveLogRoll method.
[ https://issues.apache.org/jira/browse/HDFS-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348064#comment-15348064 ] Hadoop QA commented on HDFS-10536: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 91m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:85209cc | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12813021/HDFS-10536.02.patch | | JIRA Issue | HDFS-10536 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f93dcbea928b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6314843 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15902/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15902/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15902/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Standby NN can not trigger log roll after EditLogTailer thread failed 3 times > in EditLogTailer.triggerActiveLogRoll method. > --- > >
[jira] [Commented] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15348048#comment-15348048 ] Weiwei Yang commented on HDFS-10440: V6 patch has addressed [~vinayrpet]'s [comments|https://issues.apache.org/jira/browse/HDFS-10440?focusedCommentId=15346544=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15346544], details: # {{DataNode on 9.30.150.129:9867}} now uses IPC port instead of HTTP port, so user could read and use the address in places like "hdfs dfsadmin -getDatanodeInfo", "-triggerBlockReport" etc. # {{Namenode Address}} now uses {{namenodeHostName:RpcPort}}, this correction is done in DN metrics {{NamenodeAddresses}} and {{BPServiceActorInfo}} > Improve DataNode web UI > --- > > Key: HDFS-10440 > URL: https://issues.apache.org/jira/browse/HDFS-10440 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, ui >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, > HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, > HDFS-10440.006.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, > datanode_loading_err.002.jpg, datanode_utilities.001.jpg, > datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, > nn_dfs_storage_types.jpg > > > At present, datanode web UI doesn't have much information except for node > name and port. Propose to add more information similar to namenode UI, > including, > * Static info (version, block pool and cluster ID) > * Block pools info (BP IDs, namenode address, actor states) > * Storage info (Volumes, capacity used, reserved, left) > * Utilities (logs) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10440: --- Status: Patch Available (was: In Progress) > Improve DataNode web UI > --- > > Key: HDFS-10440 > URL: https://issues.apache.org/jira/browse/HDFS-10440 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, ui >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, > HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, > HDFS-10440.006.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, > datanode_loading_err.002.jpg, datanode_utilities.001.jpg, > datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, > nn_dfs_storage_types.jpg > > > At present, datanode web UI doesn't have much information except for node > name and port. Propose to add more information similar to namenode UI, > including, > * Static info (version, block pool and cluster ID) > * Block pools info (BP IDs, namenode address, actor states) > * Storage info (Volumes, capacity used, reserved, left) > * Utilities (logs) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10440) Improve DataNode web UI
[ https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10440: --- Attachment: HDFS-10440.006.patch > Improve DataNode web UI > --- > > Key: HDFS-10440 > URL: https://issues.apache.org/jira/browse/HDFS-10440 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, ui >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, > HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, > HDFS-10440.006.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, > datanode_loading_err.002.jpg, datanode_utilities.001.jpg, > datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, > nn_dfs_storage_types.jpg > > > At present, datanode web UI doesn't have much information except for node > name and port. Propose to add more information similar to namenode UI, > including, > * Static info (version, block pool and cluster ID) > * Block pools info (BP IDs, namenode address, actor states) > * Storage info (Volumes, capacity used, reserved, left) > * Utilities (logs) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-6962) ACLs inheritance conflict with umaskmode
[ https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-6962: - Attachment: HDFS-6962.003.patch Patch 003 (almost complete, unit tests passed): * Create a new class {{FsPermissionDuo}} that extends {{FsPermission}} to store both masked and unmasked permissions. The HDFS client uses it to carry the unmasked permission from FileContext/FileSystem -> AFS -> Hdfs -> DFSClient -> RPC. The NN uses it to carry the unmasked permission from RPC -> NameNodeRpcServer.create (placed into PermissionStatus) -> FSNamesystem.startFile -> FSDirWriterFileOp.startFile/addFile -> INodeFile -> INodeWithAdditionalFields. * Add field {{unmasked}} to protobuf messages {{CreateRequestProto}} and {{MkdirsRequestProto}} * Modify {{copyINodeDefaultAcl}} to switch between old and new ACL inheritance behavior. * Add 2 unit tests to {{FSAclBaseTest}} Questions: * {{PermissionStatus#applyUMask}} is never used; remove it? * {{DFSClient#mkdirs}} and {{DFSClient#primitiveMkdir}} use the file default if permission is null. Should they use the dir default permission? * Better name for {{FsPermissionDuo}}? TODO: * Investigate why TestWebHDFSAcl does not support the 2 new unit tests * Run system tests and compatibility tests * Update {{HdfsPermissionsGuide.md}} * Investigate the use of permissions in the FSDirMkdirOp.createAncestorDirectories call tree [~cnauroth], please take a look at 003, which plugs some loose ends and adds 2 unit tests.
> ACLs inheritance conflict with umaskmode > > > Key: HDFS-6962 > URL: https://issues.apache.org/jira/browse/HDFS-6962 > Project: Hadoop HDFS > Issue Type: Bug > Components: security >Affects Versions: 2.4.1 > Environment: CentOS release 6.5 (Final) >Reporter: LINTE >Assignee: John Zhuge >Priority: Critical > Labels: hadoop, security > Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, > HDFS-6962.003.patch, HDFS-6962.1.patch > > > In hdfs-site.xml: > <property> > <name>dfs.umaskmode</name> > <value>027</value> > </property> > 1/ Create a directory as superuser > bash# hdfs dfs -mkdir /tmp/ACLS > 2/ Set default ACLs on this directory: rwx access for group readwrite and user > toto > bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS > bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS > 3/ Check ACLs on /tmp/ACLS/ > bash# hdfs dfs -getfacl /tmp/ACLS/ > # file: /tmp/ACLS > # owner: hdfs > # group: hadoop > user::rwx > group::r-x > other::--- > default:user::rwx > default:user:toto:rwx > default:group::r-x > default:group:readwrite:rwx > default:mask::rwx > default:other::--- > user::rwx | group::r-x | other::--- matches the umaskmode defined in > hdfs-site.xml, everything OK! > default:group:readwrite:rwx allows the readwrite group rwx access through > inheritance. > default:user:toto:rwx allows the toto user rwx access through inheritance.
> default:mask::rwx means the inheritance mask is rwx, so no mask is applied > 4/ Create a subdir to test inheritance of ACLs > bash# hdfs dfs -mkdir /tmp/ACLS/hdfs > 5/ Check ACLs on /tmp/ACLS/hdfs > bash# hdfs dfs -getfacl /tmp/ACLS/hdfs > # file: /tmp/ACLS/hdfs > # owner: hdfs > # group: hadoop > user::rwx > user:toto:rwx #effective:r-x > group::r-x > group:readwrite:rwx #effective:r-x > mask::r-x > other::--- > default:user::rwx > default:user:toto:rwx > default:group::r-x > default:group:readwrite:rwx > default:mask::rwx > default:other::--- > Here we can see that the readwrite group has an rwx ACL but only r-x is effective, > because the mask is r-x (mask::r-x), even though the default mask for inheritance > is set to default:mask::rwx on /tmp/ACLS/ > 6/ Modify hdfs-site.xml and restart the namenode: > <property> > <name>dfs.umaskmode</name> > <value>010</value> > </property> > 7/ Create a subdir to test inheritance of ACLs with the new umaskmode value > bash# hdfs dfs -mkdir /tmp/ACLS/hdfs2 > 8/ Check ACLs on /tmp/ACLS/hdfs2 > bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2 > # file: /tmp/ACLS/hdfs2 > # owner: hdfs > # group: hadoop > user::rwx > user:toto:rwx #effective:rw- > group::r-x #effective:r-- > group:readwrite:rwx #effective:rw- > mask::rw- > other::--- > default:user::rwx > default:user:toto:rwx > default:group::r-x > default:group:readwrite:rwx > default:mask::rwx > default:other::--- > So HDFS masks the ACL values (user, group and other -- except the POSIX > owner) with the group digit of the dfs.umaskmode property when creating a > directory with inherited ACLs.
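The masking reported in steps 3-8 above can be reduced to a small bit-arithmetic sketch. This is an illustrative model, not HDFS code: it assumes only that the child's effective ACL mask is the parent's default mask with the umask's group bits removed, which is exactly the pre-fix behaviour this report complains about. The class and method names are invented for the example.

```java
// Illustrative model of the reported behaviour, not HDFS internals.
public class AclMaskDemo {
    // Permission bits: r = 4, w = 2, x = 1.

    /**
     * Inherited mask = parent's default mask with the umask's group
     * bits removed (the behaviour described in the report).
     */
    static int applyUmask(int defaultMaskBits, int umaskGroupBits) {
        return defaultMaskBits & ~umaskGroupBits;
    }

    static String toSymbolic(int bits) {
        return ((bits & 4) != 0 ? "r" : "-")
             + ((bits & 2) != 0 ? "w" : "-")
             + ((bits & 1) != 0 ? "x" : "-");
    }

    public static void main(String[] args) {
        int defaultMask = 7;       // default:mask::rwx on /tmp/ACLS
        // Group digit of umask 027 is 2 (the w bit): rwx loses w -> r-x,
        // which is why group:readwrite:rwx reports "#effective:r-x".
        System.out.println(toSymbolic(applyUmask(defaultMask, 2))); // prints r-x
        // Group digit of umask 010 is 1 (the x bit): rwx loses x -> rw-,
        // matching mask::rw- in step 8.
        System.out.println(toSymbolic(applyUmask(defaultMask, 1))); // prints rw-
    }
}
```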
[jira] [Updated] (HDFS-10536) Standby NN can not trigger log roll after EditLogTailer thread failed 3 times in EditLogTailer.triggerActiveLogRoll method.
[ https://issues.apache.org/jira/browse/HDFS-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] XingFeng Shen updated HDFS-10536: - Status: Patch Available (was: Open) > Standby NN can not trigger log roll after EditLogTailer thread failed 3 times > in EditLogTailer.triggerActiveLogRoll method. > --- > > Key: HDFS-10536 > URL: https://issues.apache.org/jira/browse/HDFS-10536 > Project: Hadoop HDFS > Issue Type: Bug > Components: auto-failover >Affects Versions: 3.0.0-alpha1 >Reporter: XingFeng Shen >Priority: Critical > Labels: patch > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10536-02.patch, HDFS-10536.02.patch, > HDFS-10536.patch > > > When all NameNodes become standby, EditLogTailer retries 3 times to > trigger a log roll, then fails and throws the exception "Cannot find any > valid remote NN to service request!". After one namenode becomes active, > the standby NN still cannot trigger a log roll, because the variable > "nnLoopCount" is still 3; it is never reset to 0.
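The failure mode in the description above can be sketched as follows. This is a hypothetical reduction, not the actual EditLogTailer code: the field name {{nnLoopCount}} mirrors the report, and resetting it once a valid NN is found is one plausible fix in the spirit of the report (in the reported bug the counter persisted across calls and was never reset, so no later call could succeed).

```java
import java.util.List;

// Hypothetical sketch of the fixed behaviour; not Hadoop source code.
class LogRollTrigger {
    private final List<String> remoteNNs;
    private int nnLoopCount = 0; // index into remoteNNs, persists across calls

    LogRollTrigger(List<String> remoteNNs) {
        this.remoteNNs = remoteNNs;
    }

    /** Returns the NN that accepted the roll, or throws after 3 failed tries. */
    String triggerActiveLogRoll() {
        int maxRetries = Math.min(remoteNNs.size(), 3);
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            String nn = remoteNNs.get(nnLoopCount % remoteNNs.size());
            nnLoopCount++;
            if (isActive(nn)) {
                // The key point: reset on success so a later call retries
                // from scratch instead of being stuck at the cap forever.
                nnLoopCount = 0;
                return nn;
            }
        }
        throw new IllegalStateException(
            "Cannot find any valid remote NN to service request!");
    }

    // Stand-in for the real "is this NN active?" RPC probe.
    protected boolean isActive(String nn) {
        return nn.startsWith("active");
    }
}
```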
[jira] [Updated] (HDFS-10536) Standby NN can not trigger log roll after EditLogTailer thread failed 3 times in EditLogTailer.triggerActiveLogRoll method.
[ https://issues.apache.org/jira/browse/HDFS-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] XingFeng Shen updated HDFS-10536: - Attachment: HDFS-10536.02.patch same patch again to rebuild hadoop CI > Standby NN can not trigger log roll after EditLogTailer thread failed 3 times > in EditLogTailer.triggerActiveLogRoll method. > --- > > Key: HDFS-10536 > URL: https://issues.apache.org/jira/browse/HDFS-10536 > Project: Hadoop HDFS > Issue Type: Bug > Components: auto-failover >Affects Versions: 3.0.0-alpha1 >Reporter: XingFeng Shen >Priority: Critical > Labels: patch > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10536-02.patch, HDFS-10536.02.patch, > HDFS-10536.patch > > > When all NameNodes become standby, EditLogTailer will retry 3 times to > trigger log roll, then it will be failed and throw Exception "Cannot find any > valid remote NN to service request!". After one namenode become active, > standby NN still can not trigger log roll again because variable > "nnLoopCount" is still 3, it can not init to 0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10536) Standby NN can not trigger log roll after EditLogTailer thread failed 3 times in EditLogTailer.triggerActiveLogRoll method.
[ https://issues.apache.org/jira/browse/HDFS-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] XingFeng Shen updated HDFS-10536: - Status: Open (was: Patch Available) > Standby NN can not trigger log roll after EditLogTailer thread failed 3 times > in EditLogTailer.triggerActiveLogRoll method. > --- > > Key: HDFS-10536 > URL: https://issues.apache.org/jira/browse/HDFS-10536 > Project: Hadoop HDFS > Issue Type: Bug > Components: auto-failover >Affects Versions: 3.0.0-alpha1 >Reporter: XingFeng Shen >Priority: Critical > Labels: patch > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10536-02.patch, HDFS-10536.patch > > > When all NameNodes become standby, EditLogTailer will retry 3 times to > trigger log roll, then it will be failed and throw Exception "Cannot find any > valid remote NN to service request!". After one namenode become active, > standby NN still can not trigger log roll again because variable > "nnLoopCount" is still 3, it can not init to 0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9772) TestBlockReplacement#testThrottler doesn't work as expected
[ https://issues.apache.org/jira/browse/HDFS-9772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-9772: Issue Type: Bug (was: Test) > TestBlockReplacement#testThrottler doesn't work as expected > --- > > Key: HDFS-9772 > URL: https://issues.apache.org/jira/browse/HDFS-9772 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Labels: test > Fix For: 2.7.3 > > Attachments: HDFS.001.patch > > > In {{TestBlockReplacement#testThrottler}}, the final bandwidth is calculated > from the wrong variable: the assertion uses {{totalBytes}} rather than the > final variable {{TOTAL_BYTES}}, whose value is assigned to {{bytesToSend}}. > {{totalBytes}} is never updated and serves no purpose here, so > {{totalBytes*1000/(end-start)}} is always 0 and the comparison is always true. > The method code is below: > {code} > @Test > public void testThrottler() throws IOException { > Configuration conf = new HdfsConfiguration(); > FileSystem.setDefaultUri(conf, "hdfs://localhost:0"); > long bandwidthPerSec = 1024*1024L; > final long TOTAL_BYTES =6*bandwidthPerSec; > long bytesToSend = TOTAL_BYTES; > long start = Time.monotonicNow(); > DataTransferThrottler throttler = new > DataTransferThrottler(bandwidthPerSec); > long totalBytes = 0L; > long bytesSent = 1024*512L; // 0.5MB > throttler.throttle(bytesSent); > bytesToSend -= bytesSent; > bytesSent = 1024*768L; // 0.75MB > throttler.throttle(bytesSent); > bytesToSend -= bytesSent; > try { > Thread.sleep(1000); > } catch (InterruptedException ignored) {} > throttler.throttle(bytesToSend); > long end = Time.monotonicNow(); > assertTrue(totalBytes*1000/(end-start)<=bandwidthPerSec); > } > {code}
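A minimal sketch of why the assertion above can never fail, and what a meaningful check looks like. These are simplified stand-ins, not the actual Hadoop test code: the fix is simply to measure against the bytes actually pushed through the throttler ({{TOTAL_BYTES}}) instead of an unused local that stays 0.

```java
// Simplified stand-in for the bandwidth assertion in testThrottler;
// not the actual Hadoop test code.
public class ThrottlerCheck {
    /** True iff the observed transfer rate is within the configured limit. */
    static boolean withinBandwidth(long totalBytesSent, long elapsedMillis,
                                   long bandwidthPerSec) {
        // With totalBytesSent == 0 (the bug) this is trivially true,
        // so the test can never fail. Passing the real byte count
        // makes the assertion meaningful.
        return totalBytesSent * 1000 / elapsedMillis <= bandwidthPerSec;
    }

    public static void main(String[] args) {
        long bandwidthPerSec = 1024 * 1024L;   // 1 MB/s limit
        long totalBytes = 6 * bandwidthPerSec; // the TOTAL_BYTES constant
        // 6 MB at 1 MB/s should take at least ~6 s, so 3 s must fail:
        System.out.println(withinBandwidth(0, 3000, bandwidthPerSec));          // prints true  (bug masks the violation)
        System.out.println(withinBandwidth(totalBytes, 3000, bandwidthPerSec)); // prints false (violation detected)
        System.out.println(withinBandwidth(totalBytes, 7000, bandwidthPerSec)); // prints true  (within the limit)
    }
}
```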
[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode
[ https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347829#comment-15347829 ] Arpit Agarwal commented on HDFS-1312: - The checkstyle failures were 'hides a field' and one long method which was not added by this patch. I've merged the HDFS-1312 feature branch to trunk. Thanks for the code contribution [~anu], [~xiaobingo], [~eddyxu] and [~linyiqun]. Thanks to everyone else who contributed ideas and feedback on this historical jira. :) Users frequently request this feature and it felt good to commit it. Anu or I will resolve this Jira shortly and move out the remaining sub-tasks to a follow-up Jira. > Re-balance disks within a Datanode > -- > > Key: HDFS-1312 > URL: https://issues.apache.org/jira/browse/HDFS-1312 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode >Reporter: Travis Crawford >Assignee: Anu Engineer > Attachments: Architecture_and_test_update.pdf, > Architecture_and_testplan.pdf, HDFS-1312.001.patch, HDFS-1312.002.patch, > HDFS-1312.003.patch, HDFS-1312.004.patch, HDFS-1312.005.patch, > HDFS-1312.006.patch, HDFS-1312.007.patch, disk-balancer-proposal.pdf > > > Filing this issue in response to ``full disk woes`` on hdfs-user. > Datanodes fill their storage directories unevenly, leading to situations > where certain disks are full while others are significantly less used. Users > at many different sites have experienced this issue, and HDFS administrators > are taking steps like: > - Manually rebalancing blocks in storage directories > - Decomissioning nodes & later readding them > There's a tradeoff between making use of all available spindles, and filling > disks at the sameish rate. Possible solutions include: > - Weighting less-used disks heavier when placing new blocks on the datanode. > In write-heavy environments this will still make use of all spindles, > equalizing disk use over time. > - Rebalancing blocks locally. 
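The first possible solution in the description above, weighting less-used disks more heavily when placing new blocks, can be sketched as a free-space-proportional choice. This is purely illustrative: it is not HDFS's actual volume-choosing policy, and the class and method names are invented for the example.

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch, not HDFS's real volume-choosing policy: pick a
// volume with probability proportional to its free space, so emptier
// disks receive more new blocks and usage equalizes over time in
// write-heavy clusters.
class AvailableSpaceWeightedChooser {
    private final Random rng;

    AvailableSpaceWeightedChooser(Random rng) {
        this.rng = rng;
    }

    /** Returns a volume index, weighted by free bytes per volume. */
    int chooseVolume(List<Long> freeBytesPerVolume) {
        long totalFree = 0;
        for (long f : freeBytesPerVolume) {
            totalFree += f;
        }
        // Draw a point in [0, totalFree) and walk the cumulative sums.
        long pick = (long) (rng.nextDouble() * totalFree);
        long cumulative = 0;
        for (int i = 0; i < freeBytesPerVolume.size(); i++) {
            cumulative += freeBytesPerVolume.get(i);
            if (pick < cumulative) {
                return i;
            }
        }
        return freeBytesPerVolume.size() - 1; // guard against rounding at the top end
    }
}
```

A volume with zero free bytes has zero weight and is never selected, which is the property that keeps full disks from filling further.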
This would help equalize disk use as disks are > added/replaced in older cluster nodes. > Datanodes should actively manage their local disk so operator intervention is > not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs
[ https://issues.apache.org/jira/browse/HDFS-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-10569: --- Status: Patch Available (was: In Progress) > A bug causes OutOfIndex error in BlockListAsLongs > - > > Key: HDFS-10569 > URL: https://issues.apache.org/jira/browse/HDFS-10569 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Minor > Attachments: HDFS-10569.001.patch, HDFS-10569.002.patch > > > An obvious bug in LongsDecoder.getBlockListAsLongs(), the size of var *longs* > is +2 to the size of *values*, but the for-loop accesses *values* using > *longs* index. This will cause OutOfIndex. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs
[ https://issues.apache.org/jira/browse/HDFS-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347826#comment-15347826 ] Weiwei Yang commented on HDFS-10569: It looks like the 2nd test in {{TestBlockListAsLongs#testDatanodeDetect}} was still using a new-style BR, which is why this code path was never exercised in the unit test. I modified that part in the 002 patch. Without the patch it throws an out-of-index error; the patch fixes this issue when the DN sends an old-style BR. Please take a look at the new patch and the test case. Thanks a lot. > A bug causes OutOfIndex error in BlockListAsLongs > - > > Key: HDFS-10569 > URL: https://issues.apache.org/jira/browse/HDFS-10569 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Minor > Attachments: HDFS-10569.001.patch, HDFS-10569.002.patch > > > An obvious bug in LongsDecoder.getBlockListAsLongs(): the size of the var > *longs* is the size of *values* plus 2, but the for-loop accesses *values* > using the *longs* index. This causes an out-of-index error.
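The off-by-two described above can be reproduced with a simplified model. This is not the real {{LongsDecoder}} code: the class name and the two header longs are assumptions for illustration. The point is that iterating with the output list's bound over the input list walks two entries past its end.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified illustration of the indexing bug; not the actual
// LongsDecoder.getBlockListAsLongs() code.
class LongsDecoderSketch {
    /** Buggy variant: loops over the output size while indexing the input. */
    static List<Long> decodeBuggy(List<Long> values) {
        List<Long> longs = new ArrayList<>(values.size() + 2);
        longs.add((long) values.size()); // header long (assumed for illustration)
        longs.add(0L);                   // second header long (assumed)
        for (int i = 0; i < values.size() + 2; i++) {
            // i reaches values.size() -> IndexOutOfBoundsException
            longs.add(values.get(i));
        }
        return longs;
    }

    /** Fixed variant: the loop bound is the input list, not the output. */
    static List<Long> decodeFixed(List<Long> values) {
        List<Long> longs = new ArrayList<>(values.size() + 2);
        longs.add((long) values.size());
        longs.add(0L);
        for (int i = 0; i < values.size(); i++) {
            longs.add(values.get(i));
        }
        return longs;
    }
}
```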
[jira] [Commented] (HDFS-10545) DiskBalancer: PlanCommand should use -fs instead of -uri to be consistent with other hdfs commands
[ https://issues.apache.org/jira/browse/HDFS-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15347818#comment-15347818 ] Hudson commented on HDFS-10545: --- SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/]) HDFS-10545. DiskBalancer: PlanCommand should use -fs instead of -uri to (arp: rev 0774412e41856b4ed3eccfa9270165e216d10ab8) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java > DiskBalancer: PlanCommand should use -fs instead of -uri to be consistent > with other hdfs commands > -- > > Key: HDFS-10545 > URL: https://issues.apache.org/jira/browse/HDFS-10545 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Affects Versions: HDFS-1312 >Reporter: Lei (Eddy) Xu >Assignee: Anu Engineer >Priority: Minor > Attachments: HDFS-10545-HDFS-1312.001.patch > > > PlanCommand currently uses {{-uri}} to specify NameNode, while in all other > hdfs commands (i.e., {{hdfs dfsadmin}} and {{hdfs balancer}})) they use > {{-fs}} to specify NameNode. > It'd be better to use {{-fs}} here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9543) DiskBalancer : Add Data mover
[ https://issues.apache.org/jira/browse/HDFS-9543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347794#comment-15347794 ]

Hudson commented on HDFS-9543:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-9543. DiskBalancer: Add Data mover. Contributed by Anu Engineer. (arp: rev 1594b472bb9df7537dbc001411c99058cc11ba41)

* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerVolume.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerVolumeSet.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerDataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/Step.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestPlanner.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java

> DiskBalancer : Add Data mover
> --
>
>          Key: HDFS-9543
>          URL: https://issues.apache.org/jira/browse/HDFS-9543
>      Project: Hadoop HDFS
>   Issue Type: Sub-task
>   Components: datanode
>     Reporter: Anu Engineer
>     Assignee: Anu Engineer
>      Fix For: HDFS-1312
>
>  Attachments: HDFS-9543-HDFS-1312.001.patch, HDFS-9543-HDFS-1312.002.patch, HDFS-9543-HDFS-1312.003.patch, HDFS-9543-HDFS-1312.004.patch
>
> This patch adds the actual mover logic to the datanode.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9683) DiskBalancer : Add cancelPlan implementation
[ https://issues.apache.org/jira/browse/HDFS-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347792#comment-15347792 ]

Hudson commented on HDFS-9683:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-9683. DiskBalancer: Add cancelPlan implementation. (Contributed by (arp: rev 9847640603ace60d169206a40a256f988b314983)

* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancerRPC.java
* hadoop-hdfs-project/hadoop-hdfs/HDFS-1312_CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerException.java

> DiskBalancer : Add cancelPlan implementation
>
>              Key: HDFS-9683
>              URL: https://issues.apache.org/jira/browse/HDFS-9683
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: balancer & mover
> Affects Versions: HDFS-1312
>         Reporter: Anu Engineer
>         Assignee: Anu Engineer
>          Fix For: HDFS-1312
>
>      Attachments: HDFS-9683-HDFS-1312.001.patch, HDFS-9683-HDFS-1312.002.patch
>
> Add datanode side code for Cancel Plan
[jira] [Commented] (HDFS-9469) DiskBalancer : Add Planner
[ https://issues.apache.org/jira/browse/HDFS-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347817#comment-15347817 ]

Hudson commented on HDFS-9469:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-9469. DiskBalancer: Add Planner. (Contributed by Anu Engineer) (arp: rev 5724a103161424f4b293ba937f0d0540179f36ac)

* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/PlannerFactory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
* hadoop-hdfs-project/hadoop-hdfs/HDFS-1312_CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/NodePlan.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/diskBalancer/data-cluster-3node-3disk.json
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/Step.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/Planner.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestPlanner.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/package-info.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerCluster.java

> DiskBalancer : Add Planner
> ---
>
>              Key: HDFS-9469
>              URL: https://issues.apache.org/jira/browse/HDFS-9469
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: balancer & mover
> Affects Versions: 2.8.0
>         Reporter: Anu Engineer
>         Assignee: Anu Engineer
>          Fix For: HDFS-1312
>
>      Attachments: HDFS-9469-HDFS-1312.001.patch, HDFS-9469-HDFS-1312.002.patch, HDFS-9469-HDFS-1312.003.patch, HDFS-9469-HDFS-1312.004.patch, HDFS-9469-HDFS-1312.005.patch
>
> Disk Balancer reads the cluster data and then creates a plan for the data moves based on the snap-shot of the data read from the nodes. This plan is later submitted to data nodes for execution.
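The planner workflow described above (a snapshot of per-volume usage in, an ordered set of move steps out) can be illustrated with a toy greedy step. This is only a sketch under assumed inputs: the volume names and the halve-the-gap heuristic are made up for illustration and are not the actual GreedyPlanner algorithm.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class GreedyPlanSketch {
    // Toy planner: repeatedly move load from the most-used volume to the
    // least-used one until all volumes are within `tolerance` of each other.
    static List<String> plan(Map<String, Double> usedFraction, double tolerance) {
        Map<String, Double> state = new LinkedHashMap<>(usedFraction);
        List<String> steps = new ArrayList<>();
        while (true) {
            String max = null, min = null;
            for (Map.Entry<String, Double> e : state.entrySet()) {
                if (max == null || e.getValue() > state.get(max)) max = e.getKey();
                if (min == null || e.getValue() < state.get(min)) min = e.getKey();
            }
            double gap = state.get(max) - state.get(min);
            if (gap <= tolerance) break;
            double move = gap / 2;  // halve the gap each step (illustrative heuristic)
            state.put(max, state.get(max) - move);
            state.put(min, state.get(min) + move);
            steps.add(String.format(Locale.ROOT, "move %.2f from %s to %s", move, max, min));
        }
        return steps;
    }

    public static void main(String[] args) {
        Map<String, Double> volumes = new LinkedHashMap<>();
        volumes.put("/data1", 0.90);  // hypothetical mount points and usage
        volumes.put("/data2", 0.10);
        System.out.println(plan(volumes, 0.05));
    }
}
```

As in the real Disk Balancer, the plan is computed against a point-in-time snapshot, so it can drift from reality by the time it is executed on the datanode.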
[jira] [Commented] (HDFS-10520) DiskBalancer: Fix Checkstyle issues in test code
[ https://issues.apache.org/jira/browse/HDFS-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347821#comment-15347821 ]

Hudson commented on HDFS-10520:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-10520. DiskBalancer: Fix Checkstyle issues in test code. (arp: rev 3225c24e0efb8627ea84ba23ad09859942cd81f0)

* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerException.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestConnectors.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerResultVerifier.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestPlanner.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancerRPC.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDataModels.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancerWithMockMover.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerTestUtil.java

> DiskBalancer: Fix Checkstyle issues in test code
>
>              Key: HDFS-10520
>              URL: https://issues.apache.org/jira/browse/HDFS-10520
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: balancer & mover
> Affects Versions: HDFS-1312
>         Reporter: Anu Engineer
>         Assignee: Anu Engineer
>          Fix For: HDFS-1312
>
>      Attachments: HDFS-10520-HDFS-1312.001.patch
>
> Most of the test code in HDFS-1312 went in when we did not have checkstyle enabled for tests. But checkstyle is enabled on trunk now, and merging would create a lot of messages. This patch cleans up important checkstyle issues like missing JavaDoc etc.
[jira] [Commented] (HDFS-9702) DiskBalancer : getVolumeMap implementation
[ https://issues.apache.org/jira/browse/HDFS-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347780#comment-15347780 ]

Hudson commented on HDFS-9702:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-9702. DiskBalancer: getVolumeMap implementation. (Contributed by (arp: rev 918722bdd202acbeda92d650ff0dcecbcd8a0697)

* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerException.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancerRPC.java

> DiskBalancer : getVolumeMap implementation
> --
>
>              Key: HDFS-9702
>              URL: https://issues.apache.org/jira/browse/HDFS-9702
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: balancer & mover
> Affects Versions: HDFS-1312
>         Reporter: Anu Engineer
>         Assignee: Anu Engineer
>          Fix For: HDFS-1312
>
>      Attachments: HDFS-9702-HDFS-1312.001.patch, HDFS-9702-HDFS-1312.002.patch
>
> Add get volume map
[jira] [Commented] (HDFS-9547) DiskBalancer : Add user documentation
[ https://issues.apache.org/jira/browse/HDFS-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347807#comment-15347807 ]

Hudson commented on HDFS-9547:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-9547. DiskBalancer: Add user documentation. Contributed by Anu (arp: rev 06a9799d84bef013e1573d382f824b485aa0c329)

* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md

> DiskBalancer : Add user documentation
> -
>
>          Key: HDFS-9547
>          URL: https://issues.apache.org/jira/browse/HDFS-9547
>      Project: Hadoop HDFS
>   Issue Type: Sub-task
>   Components: datanode
>     Reporter: Anu Engineer
>     Assignee: Anu Engineer
>  Attachments: HDFS-9547-HDFS-1312.001.patch, HDFS-9547-HDFS-1312.002.patch
>
> Write diskbalancer.md since this is a new tool and explain the usage with examples.
[jira] [Commented] (HDFS-10557) Fix handling of the -fs Generic option
[ https://issues.apache.org/jira/browse/HDFS-10557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347779#comment-15347779 ]

Hudson commented on HDFS-10557:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-10557. Fix handling of the -fs Generic option. (Arpit Agarwal) (arp: rev 66fa34c839c89733839cb67878fdfdc4b1f65ab8)

* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/command/TestDiskBalancerCommand.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java

> Fix handling of the -fs Generic option
> --
>
>              Key: HDFS-10557
>              URL: https://issues.apache.org/jira/browse/HDFS-10557
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: diskbalancer
> Affects Versions: HDFS-1312
>         Reporter: Arpit Agarwal
>         Assignee: Arpit Agarwal
>      Attachments: HDFS-10557-HDFS-1312.02.patch, HDFS-10557-HDFS-1312.03.patch, HDFS-10557-HDFS-1312.04.patch
>
> A recent change to DiskBalancer replaced the -uri option with -fs. However -fs is a [generic option|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options] so it is consumed by the GenericOptionsParser. We can update this option handling to make it similar to other hdfs commands.
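The -fs collision described above can be illustrated with a small sketch. This is not Hadoop's GenericOptionsParser, just a self-contained mimic of its behavior: a generic-option pass consumes recognized flags (and their values) before the tool's own parser runs, so a tool that also defines -fs never sees it. The flag names and argument values are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GenericOptionSketch {
    // Flags treated as "generic" by the framework-level pass (illustrative subset).
    static final Set<String> GENERIC_FLAGS =
        new HashSet<>(Arrays.asList("-fs", "-conf", "-D"));

    // Consume generic flags and their values into `consumed`; return what is
    // left over for the tool's own option parser.
    static List<String> stripGenericOptions(String[] args, Map<String, String> consumed) {
        List<String> remaining = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            if (GENERIC_FLAGS.contains(args[i]) && i + 1 < args.length) {
                consumed.put(args[i], args[++i]);  // generic pass eats flag + value
            } else {
                remaining.add(args[i]);
            }
        }
        return remaining;
    }

    public static void main(String[] args) {
        Map<String, String> consumed = new LinkedHashMap<>();
        List<String> rest = stripGenericOptions(
            new String[]{"-fs", "hdfs://localhost", "-plan", "127.0.0.1"}, consumed);
        // The tool's parser only sees "-plan 127.0.0.1"; "-fs" was consumed upstream.
        System.out.println(consumed + " / " + rest);
    }
}
```

This is why the fix was to let the generic layer own -fs and have DiskBalancer read the filesystem from the resulting Configuration, rather than defining a competing tool-level -fs option.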
[jira] [Commented] (HDFS-9595) DiskBalancer : Add cancelPlan RPC
[ https://issues.apache.org/jira/browse/HDFS-9595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347812#comment-15347812 ]

Hudson commented on HDFS-9595:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-9595. DiskBalancer: Add cancelPlan RPC. (Contributed by Anu (arp: rev 0501d430e2f6111ad8b65dc36f4a98d94cb9589b)

* hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancerRPC.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/HDFS-1312_CHANGES.txt

> DiskBalancer : Add cancelPlan RPC
> -
>
>              Key: HDFS-9595
>              URL: https://issues.apache.org/jira/browse/HDFS-9595
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: balancer & mover
> Affects Versions: HDFS-1312
>         Reporter: Anu Engineer
>         Assignee: Anu Engineer
>          Fix For: HDFS-1312
>
>      Attachments: HDFS-9595-HDFS-1312.001.patch, HDFS-9595-HDFS-1312.002.patch
>
> Add an RPC that allows users to cancel a running disk balancer plan
[jira] [Commented] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory
[ https://issues.apache.org/jira/browse/HDFS-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347791#comment-15347791 ]

Hudson commented on HDFS-10476:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-10476. DiskBalancer: Plan command output directory should be a (arp: rev 47dcb0f95288a5e6f05480d274f1ebd8cc873ef8)

* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java

> DiskBalancer: Plan command output directory should be a sub-directory
> -
>
>              Key: HDFS-10476
>              URL: https://issues.apache.org/jira/browse/HDFS-10476
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: balancer & mover
> Affects Versions: HDFS-1312
>         Reporter: Anu Engineer
>         Assignee: Anu Engineer
>          Fix For: HDFS-1312
>
>      Attachments: HDFS-10476-HDFS-1312.001.patch, HDFS-10476-HDFS-1312.002.patch
>
> The plan command output is placed in the default directory /system/diskbalancer; instead it should be placed in a sub-directory /system/diskbalancer/
[jira] [Commented] (HDFS-10540) Diskbalancer: The CLI error message for disk balancer is not enabled is not clear.
[ https://issues.apache.org/jira/browse/HDFS-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347803#comment-15347803 ]

Hudson commented on HDFS-10540:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10014 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10014/])
HDFS-10540. Diskbalancer: The CLI error message for disk balancer is not (arp: rev cb68e5b3bdb0079af867a9e49559827ecee03010)

* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancer.java

> Diskbalancer: The CLI error message for disk balancer is not enabled is not clear.
> --
>
>              Key: HDFS-10540
>              URL: https://issues.apache.org/jira/browse/HDFS-10540
>          Project: Hadoop HDFS
>       Issue Type: Sub-task
>       Components: datanode
> Affects Versions: HDFS-1312
>         Reporter: Lei (Eddy) Xu
>         Assignee: Anu Engineer
>      Attachments: HDFS-10540-HDFS-1312.001.patch
>
> When running the {{hdfs diskbalancer}} against a DN whose disk balancer feature is not enabled, it reports:
> {code}
> $ hdfs diskbalancer -plan 127.0.0.1 -uri hdfs://localhost
> 16/06/16 18:03:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Internal error, Unable to create JSON string.
>     at org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:260)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3105)
>     at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:359)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17515)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> Caused by: org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException: Disk Balancer is not enabled.
>     at org.apache.hadoop.hdfs.server.datanode.DiskBalancer.checkDiskBalancerEnabled(DiskBalancer.java:293)
>     at org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:251)
>     ... 11 more
> {code}
> We should not throw a raw IOException at the user; the error message should explicitly explain why the operation failed.
[jira] [Updated] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs
[ https://issues.apache.org/jira/browse/HDFS-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HDFS-10569:
---
    Attachment: HDFS-10569.002.patch

> A bug causes OutOfIndex error in BlockListAsLongs
> -
>
>              Key: HDFS-10569
>              URL: https://issues.apache.org/jira/browse/HDFS-10569
>          Project: Hadoop HDFS
>       Issue Type: Bug
> Affects Versions: 2.7.0
>         Reporter: Weiwei Yang
>         Assignee: Weiwei Yang
>         Priority: Minor
>      Attachments: HDFS-10569.001.patch, HDFS-10569.002.patch
>
> An obvious bug in LongsDecoder.getBlockListAsLongs(): the *longs* array is sized two larger than *values*, but the for-loop accesses *values* using the *longs* index bound. This causes an index-out-of-bounds error.
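The HDFS-10569 bug pattern can be shown with a short sketch. This is a simplification, not the actual BlockListAsLongs source: the method names, the contents of the two header slots, and the values are made up, but the loop-bound mistake matches the description above.

```java
import java.util.Arrays;
import java.util.List;

public class LongsDecoderSketch {
    // The output array is two slots longer than `values` (two header slots,
    // left at their default of 0 in this sketch), but the buggy loop uses
    // longs.length as its bound while indexing into `values`.
    static long[] buggyDecode(List<Long> values) {
        long[] longs = new long[2 + values.size()];
        for (int i = 0; i < longs.length; i++) {   // BUG: wrong loop bound
            longs[i + 2] = values.get(i);          // overruns `values` by two
        }
        return longs;
    }

    // Fix: bound the loop by values.size(), not longs.length.
    static long[] fixedDecode(List<Long> values) {
        long[] longs = new long[2 + values.size()];
        for (int i = 0; i < values.size(); i++) {
            longs[i + 2] = values.get(i);
        }
        return longs;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(fixedDecode(Arrays.asList(10L, 20L, 30L))));
    }
}
```

The buggy variant throws an IndexOutOfBoundsException as soon as the loop index passes values.size(), which is the failure mode the issue reports.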