[jira] [Commented] (HDFS-11417) Add datanode admin command to get the storage info.

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891759#comment-15891759
 ] 

Hadoop QA commented on HDFS-11417:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 16 new 
+ 357 unchanged - 3 fixed = 373 total (was 360) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11417 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1282/HDFS-11417.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 47732487ea74 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4e14ead |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18510/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 

[jira] [Updated] (HDFS-11416) Refactor out system default erasure coding policy

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11416:
---
Attachment: HDFS-11416.004.patch

Missed updating a different test case in that test class.

> Refactor out system default erasure coding policy
> -
>
> Key: HDFS-11416
> URL: https://issues.apache.org/jira/browse/HDFS-11416
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11416.001.patch, HDFS-11416.002.patch, 
> HDFS-11416.003.patch, HDFS-11416.004.patch
>
>
> As discussed on HDFS-7859, the system default EC policy is mostly a relic 
> from development when the system only supported a single global policy. Now, 
> we support multiple policies, and the system default policy is mostly used by 
> tests.
> We should refactor to remove this concept.






[jira] [Commented] (HDFS-11417) Add datanode admin command to get the storage info.

2017-03-01 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891682#comment-15891682
 ] 

Surendra Singh Lilhore commented on HDFS-11417:
---

Thanks [~vinayrpet] for the review.
Attached an updated patch, please review.

> Add datanode admin command to get the storage info.
> ---
>
> Key: HDFS-11417
> URL: https://issues.apache.org/jira/browse/HDFS-11417
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.3
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11417.001.patch, HDFS-11417.002.patch, 
> HDFS-11417.003.patch
>
>
> It would be good to add an admin command for the datanode to get data 
> directory info such as storage type, directory path, number of blocks, 
> capacity, and used space. This will be helpful in large clusters where a DN 
> has multiple data directories configured.
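For context, most of this information (minus per-directory paths and block 
counts, which the proposed command would add) can already be fetched from the 
NameNode via the existing {{getDatanodeStorageReport}} RPC. A hedged sketch, 
assuming client-side access through {{DistributedFileSystem#getClient()}}:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeStorageReport;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;

public class PrintStorageInfo {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    for (DatanodeStorageReport dn
        : dfs.getClient().getDatanodeStorageReport(DatanodeReportType.LIVE)) {
      System.out.println(dn.getDatanodeInfo().getHostName());
      for (StorageReport sr : dn.getStorageReports()) {
        // One line per storage volume: type, ID, capacity, usage.
        System.out.printf("  %s %s capacity=%d used=%d remaining=%d%n",
            sr.getStorage().getStorageType(), sr.getStorage().getStorageID(),
            sr.getCapacity(), sr.getDfsUsed(), sr.getRemaining());
      }
    }
  }
}
{code}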






[jira] [Updated] (HDFS-11417) Add datanode admin command to get the storage info.

2017-03-01 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11417:
--
Attachment: HDFS-11417.003.patch

> Add datanode admin command to get the storage info.
> ---
>
> Key: HDFS-11417
> URL: https://issues.apache.org/jira/browse/HDFS-11417
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.3
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11417.001.patch, HDFS-11417.002.patch, 
> HDFS-11417.003.patch
>
>
> It would be good to add an admin command for the datanode to get data 
> directory info such as storage type, directory path, number of blocks, 
> capacity, and used space. This will be helpful in large clusters where a DN 
> has multiple data directories configured.






[jira] [Commented] (HDFS-8132) Namenode Startup Failing When we add Jcarder.jar in class Path

2017-03-01 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891673#comment-15891673
 ] 

Rohith Sharma K S commented on HDFS-8132:
-

[~vijaykalluru] JCarder does not need its build updated to attach to a given 
Hadoop version. You can download the existing JCarder 2.0 build and attach it 
to Hadoop. While attaching the agent, make sure to add the *-noverify* flag, 
i.e. {{-noverify -javaagent:/jcarder.jar=outputdir=/jcarder/rm}}

> Namenode Startup Failing When we add Jcarder.jar in class Path
> --
>
> Key: HDFS-8132
> URL: https://issues.apache.org/jira/browse/HDFS-8132
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
>  *{color:blue}NameNode Startup Args{color}* (just added the JCarder args)
> exec /home/hdfs/jdk1.7.0_72/bin/java -Dproc_namenode -Xmx1000m 
> -Djava.net.preferIPv4Stack=true 
> -Dhadoop.log.dir=/opt/ClusterSetup/Hadoop2.7/install/hadoop/namenode/logs 
> -Dhadoop.log.file=hadoop.log 
> -Dhadoop.home.dir=/opt/ClusterSetup/Hadoop2.7/install/hadoop/namenode 
> -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console 
> -Djava.library.path=/opt/ClusterSetup/Hadoop2.7/install/hadoop/namenode/lib/native
>  -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
> -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender 
> {color:red}-javaagent:/opt/Jcarder/jcarder.jar=outputdir=/opt/Jcarder/Output/nn-jcarder{color}
>  -Dhadoop.security.logger=INFO,NullAppender 
> org.apache.hadoop.hdfs.server.namenode.NameNode
> Setting outputdir to /opt/Jcarder/Output/nn-jcarder
> Starting JCarder (2.0.0/6) agent
> Opening for writing: /opt/Jcarder/Output/nn-jcarder/jcarder_events.db
> Opening for writing: /opt/Jcarder/Output/nn-jcarder/jcarder_contexts.db
> Not instrumenting standard library classes (AWT, Swing, etc.)
> JCarder agent initialized
>  *{color:red}ERROR{color}* 
> {noformat}
> Exception in thread "main" java.lang.VerifyError: Expecting a stackmap frame 
> at branch target 21
> Exception Details:
>   Location:
> 
> org/apache/hadoop/hdfs/server/namenode/NameNode.createHAState(Lorg/apache/hadoop/hdfs/server/common/HdfsServerConstants$StartupOption;)Lorg/apache/hadoop/hdfs/server/namenode/ha/HAState;
>  @4: ifeq
>   Reason:
> Expected stackmap frame at this location.
>   Bytecode:
> 000: 2ab4 02d2 9900 112b b203 08a5 000a 2bb2
> 010: 030b a600 07b2 030d b0b2 030f b0   
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
>   at java.lang.Class.getMethod0(Class.java:2856)
>   at java.lang.Class.getMethod(Class.java:1668)
>   at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
> {noformat}






[jira] [Updated] (HDFS-11412) Maintenance minimum replication config value allowable range should be [0, DefaultReplication]

2017-03-01 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11412:
--
Attachment: HDFS-11412-branch-2.01.patch

[~mingma], attached a branch-2 patch for the change committed to trunk. Kindly 
take a look. TestMaintenanceState and TestDecommission pass.

> Maintenance minimum replication config value allowable range should be [0, 
> DefaultReplication]
> --
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch, HDFS-11412.02.patch, 
> HDFS-11412-branch-2.01.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to 
> affect cluster performance would wish to set the Maintenance Min Replication 
> to a number greater than 1, say 2. In the current design this is possible, 
> but only after raising the NameNode-level Block Min Replication to 2, which 
> could slow down the overall latency of client writes.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.
> * There is always the config value of 0 for users not wanting any 
> availability/performance guarantees during maintenance.
> * Performance-centric workloads can still get maintenance done without major 
> disruption by using a bigger Maintenance Min Replication. Setting the upper 
> limit to dfs.replication.max could be overkill, as it could trigger 
> re-replication, which Maintenance State is trying to avoid. So we could 
> allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " > "
>       + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>       + " = " + minR);
> }
> {noformat}
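A minimal sketch of the relaxed check proposed above (an illustration, not the 
committed patch; {{DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT}} is an 
assumed constant name):

{code}
int minMaintenanceR = conf.getInt(
    DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY,
    DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_DEFAULT);
// Upper bound becomes dfs.replication rather than dfs.namenode.replication.min.
int defaultR = conf.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
    DFSConfigKeys.DFS_REPLICATION_DEFAULT);
if (minMaintenanceR < 0 || minMaintenanceR > defaultR) {
  throw new IOException("Unexpected configuration parameters: "
      + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
      + " = " + minMaintenanceR + " must be in the range [0, "
      + DFSConfigKeys.DFS_REPLICATION_KEY + " = " + defaultR + "]");
}
{code}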






[jira] [Commented] (HDFS-11478) Update EC commands in HDFSCommands.md

2017-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891645#comment-15891645
 ] 

Hudson commented on HDFS-11478:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11329 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11329/])
HDFS-11478. Update EC commands in HDFSCommands.md. Contributed by Yiqun (yqlin: 
rev 555d0c39950078e80a373f188c3b1529995d0af7)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


> Update EC commands in HDFSCommands.md
> -
>
> Key: HDFS-11478
> URL: https://issues.apache.org/jira/browse/HDFS-11478
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11478.001.patch
>
>
> The EC commands in {{HDFSCommands.md}} are out of date. There are some 
> places that need to be updated.
> Current EC commands in {{HDFSCommands.md}}:
> {code}
>hdfs ec [generic options]
>[-setPolicy [-p <policyName>] <path>]
>[-getPolicy <path>]
>[-listPolicies]
> {code}
> But after the work on HDFS-11426 and HDFS-11072, the EC command usage 
> changed as follows, as shown in {{HDFSErasureCoding.md}}:
> {code}
>hdfs ec [generic options]
>  [-setPolicy -policy <policyName> -path <path>]
>  [-getPolicy -path <path>]
>  [-unsetPolicy -path <path>]
> {code}






[jira] [Commented] (HDFS-11412) Maintenance minimum replication config value allowable range should be [0, DefaultReplication]

2017-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891646#comment-15891646
 ] 

Hudson commented on HDFS-11412:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11329 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11329/])
HDFS-11412. Maintenance minimum replication config value allowable range 
(mingma: rev 25c84d279bcefb72a3dd8058f25bba1713504849)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> Maintenance minimum replication config value allowable range should be [0, 
> DefaultReplication]
> --
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch, HDFS-11412.02.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to 
> affect cluster performance would wish to set the Maintenance Min Replication 
> to a number greater than 1, say 2. In the current design this is possible, 
> but only after raising the NameNode-level Block Min Replication to 2, which 
> could slow down the overall latency of client writes.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.
> * There is always the config value of 0 for users not wanting any 
> availability/performance guarantees during maintenance.
> * Performance-centric workloads can still get maintenance done without major 
> disruption by using a bigger Maintenance Min Replication. Setting the upper 
> limit to dfs.replication.max could be overkill, as it could trigger 
> re-replication, which Maintenance State is trying to avoid. So we could 
> allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " > "
>       + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>       + " = " + minR);
> }
> {noformat}






[jira] [Commented] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891634#comment-15891634
 ] 

Hadoop QA commented on HDFS-11450:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 17s{color} | {color:orange} root: The patch generated 10 new + 306 unchanged 
- 4 fixed = 316 total (was 310) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11450 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855516/HDFS-11450.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 50938a81214a 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f6dfe0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18506/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18506/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11416) Refactor out system default erasure coding policy

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891627#comment-15891627
 ] 

Hadoop QA commented on HDFS-11416:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
13s{color} | {color:green} root: The patch generated 0 new + 298 unchanged - 1 
fixed = 298 total (was 299) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.namenode.TestStripedINodeFile |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855521/HDFS-11416.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0024e22bd652 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f6dfe0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18507/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (HDFS-11412) Maintenance minimum replication config value allowable range should be [0, DefaultReplication]

2017-03-01 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891625#comment-15891625
 ] 

Manoj Govindassamy commented on HDFS-11412:
---

Thanks for the review and commit, [~mingma]. Will provide a branch-2 patch soon.

> Maintenance minimum replication config value allowable range should be [0, 
> DefaultReplication]
> --
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch, HDFS-11412.02.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to 
> affect cluster performance would wish to set the Maintenance Min Replication 
> to a number greater than 1, say 2. In the current design this is possible, 
> but only after raising the NameNode-level Block Min Replication to 2, which 
> could slow down the overall latency of client writes.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.
> * There is always the config value of 0 for users not wanting any 
> availability/performance guarantees during maintenance.
> * Performance-centric workloads can still get maintenance done without major 
> disruption by using a bigger Maintenance Min Replication. Setting the upper 
> limit to dfs.replication.max could be overkill, as it could trigger 
> re-replication, which Maintenance State is trying to avoid. So we could 
> allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " > "
>       + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>       + " = " + minR);
> }
> {noformat}






[jira] [Commented] (HDFS-11412) Maintenance minimum replication config value allowable range should be [0, DefaultReplication]

2017-03-01 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891612#comment-15891612
 ] 

Ming Ma commented on HDFS-11412:


+1. Committed to trunk. [~manojg], could you please provide another patch for 
branch-2 as it doesn't apply? Thanks.

> Maintenance minimum replication config value allowable range should be [0, 
> DefaultReplication]
> --
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch, HDFS-11412.02.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to 
> affect cluster performance would wish to set the Maintenance Min Replication 
> to a number greater than 1, say 2. In the current design this is possible, 
> but only after raising the NameNode-level Block Min Replication to 2, which 
> could slow down the overall latency of client writes.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.
> * There is always the config value of 0 for users not wanting any 
> availability/performance guarantees during maintenance.
> * Performance-centric workloads can still get maintenance done without major 
> disruption by using a bigger Maintenance Min Replication. Setting the upper 
> limit to dfs.replication.max could be overkill, as it could trigger 
> re-replication, which Maintenance State is trying to avoid. So we could 
> allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " > "
>       + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>       + " = " + minR);
> }
> {noformat}






[jira] [Updated] (HDFS-11412) Maintenance minimum replication config value allowable range should be [0, DefaultReplication]

2017-03-01 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-11412:
---
Summary: Maintenance minimum replication config value allowable range 
should be [0, DefaultReplication]  (was: Maintenance minimum replication config 
value allowable range should be {0 - DefaultReplication})

> Maintenance minimum replication config value allowable range should be [0, 
> DefaultReplication]
> --
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch, HDFS-11412.02.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to 
> affect cluster performance would wish to set the Maintenance Min Replication 
> to a number greater than 1, say 2. In the current design this is possible, 
> but only after raising the NameNode-level Block Min Replication to 2, which 
> could slow down the overall latency of client writes.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.
> * There is always the config value of 0 for users not wanting any 
> availability/performance guarantees during maintenance.
> * Performance-centric workloads can still get maintenance done without major 
> disruption by using a bigger Maintenance Min Replication. Setting the upper 
> limit to dfs.replication.max could be overkill, as it could trigger 
> re-replication, which Maintenance State is trying to avoid. So we could 
> allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>       + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>       + " = " + minMaintenanceR + " > "
>       + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>       + " = " + minR);
> }
> {noformat}






[jira] [Commented] (HDFS-11477) Combine FileIO Profiling Enable and Sampling Fraction Config Key into one

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891601#comment-15891601
 ] 

Hadoop QA commented on HDFS-11477:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11477 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855510/HDFS-11477.001.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  findbugs  checkstyle  |
| uname | Linux 8666f92f4e1b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6f6dfe0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18505/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18505/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18505/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Updated] (HDFS-11478) Update EC commands in HDFSCommands.md

2017-03-01 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11478:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed this to trunk, thanks [~andrew.wang] for the review!

> Update EC commands in HDFSCommands.md
> -
>
> Key: HDFS-11478
> URL: https://issues.apache.org/jira/browse/HDFS-11478
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11478.001.patch
>
>
> The EC commands in {{HDFSCommands.md}} are out of date. There are some 
> places that need to be updated.
> Current EC commands in {{HDFSCommands.md}}:
> {code}
>hdfs ec [generic options]
>[-setPolicy [-p <policyName>] <path>]
>[-getPolicy <path>]
>[-listPolicies]
> {code}
> But after the work on HDFS-11426 and HDFS-11072, the EC command usage 
> changed as follows, as shown in {{HDFSErasureCoding.md}}:
> {code}
>hdfs ec [generic options]
>  [-setPolicy -policy <policyName> -path <path>]
>  [-getPolicy -path <path>]
>  [-unsetPolicy -path <path>]
> {code}






[jira] [Commented] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-03-01 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891587#comment-15891587
 ] 

Benoy Antony commented on HDFS-11384:
-

Thanks for the explanation, [~zhaoyunjiong].
The patch looks good.  I will commit this tomorrow if there are no other 
comments.

> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> -
>
> Key: HDFS-11384
> URL: https://issues.apache.org/jira/browse/HDFS-11384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.3
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: balancer.day.png, balancer.week.png, HDFS-11384.001.patch
>
>
> Running the balancer on a Hadoop cluster with more than 3000 DataNodes causes 
> the NameNode's rpc.CallQueueLength to spike. We observed that this situation 
> could cause HBase cluster failure due to RegionServer WAL timeouts.






[jira] [Updated] (HDFS-11418) HttpFS should support old SSL clients

2017-03-01 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11418:
--
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2.

Thanks [~xiaochen] and [~eddyxu] for the review!

Filed HDFS-11485 "HttpFS should warn about weak ssl ciphers" to follow up on 
Eddy's suggestion.

> HttpFS should support old SSL clients
> -
>
> Key: HDFS-11418
> URL: https://issues.apache.org/jira/browse/HDFS-11418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HDFS-11418.branch-2.001.patch, 
> HDFS-11418.branch-2.002.patch, HDFS-11418.branch-2.003.patch
>
>
> HADOOP-13812 upgraded Tomcat to 6.0.48 which filters weak ciphers. Old SSL 
> clients such as curl stop working. The symptom is {{NSS error -12286}} when 
> running {{curl -v}}.
> Instead of forcing the SSL clients to upgrade, we can configure Tomcat to 
> explicitly allow enough weak ciphers so that old SSL clients can work.






[jira] [Created] (HDFS-11485) HttpFS should warn about weak ssl ciphers

2017-03-01 Thread John Zhuge (JIRA)
John Zhuge created HDFS-11485:
-

 Summary: HttpFS should warn about weak ssl ciphers
 Key: HDFS-11485
 URL: https://issues.apache.org/jira/browse/HDFS-11485
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Affects Versions: 2.9.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


HDFS-11418 sets a list of default ciphers that contains a few weak ciphers in 
order to maintain backward compatibility. In addition, users can select weak 
ciphers via the env var {{HTTPFS_SSL_CIPHERS}}. It would be nice to get 
warnings about the weak ciphers.
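A rough sketch of the kind of startup warning suggested here (the class, 
method, and weak-cipher list are illustrative assumptions, not the HttpFS 
implementation):

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WeakCipherCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(WeakCipherCheck.class);
  // Illustrative deny-list; the real set would come from a security review.
  private static final Set<String> WEAK_CIPHERS = new HashSet<>(Arrays.asList(
      "SSL_RSA_WITH_3DES_EDE_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_RC4_128_SHA"));

  /** Log a warning for every configured cipher on the weak list. */
  static void warnOnWeakCiphers(String configuredCiphers) {
    if (configuredCiphers == null) {
      return;
    }
    for (String cipher : configuredCiphers.split(",")) {
      if (WEAK_CIPHERS.contains(cipher.trim())) {
        LOG.warn("Weak SSL cipher enabled: {}", cipher.trim());
      }
    }
  }
}
{code}

It could be invoked at startup with, e.g., 
{{warnOnWeakCiphers(System.getenv("HTTPFS_SSL_CIPHERS"))}}.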






[jira] [Updated] (HDFS-11484) Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11484:
--
Status: Patch Available  (was: Open)

> Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate
> 
>
> Key: HDFS-11484
> URL: https://issues.apache.org/jira/browse/HDFS-11484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11484-HDFS-7240.001.patch
>
>
> Need to waitFor the right condition for reliable verification. 
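For illustration, the usual shape of such a fix in Hadoop tests is 
{{GenericTestUtils.waitFor}} (which takes a Guava {{Supplier}}); a hedged 
sketch where the polled condition is hypothetical, not the actual patch:

{code}
// Poll until the SCM node manager reflects the sent node report, instead of
// asserting immediately after sending it (the likely source of flakiness).
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return nodeManager.getStats().getCapacity() == expectedCapacity;
  }
}, 100, 4 * 1000);  // check every 100 ms, time out after 4 seconds
{code}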






[jira] [Updated] (HDFS-11484) Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11484:
--
Status: Open  (was: Patch Available)

> Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate
> 
>
> Key: HDFS-11484
> URL: https://issues.apache.org/jira/browse/HDFS-11484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11484-HDFS-7240.001.patch
>
>
> Need to waitFor the right condition for reliable verification. 






[jira] [Commented] (HDFS-11395) RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the Exception thrown from NameNode

2017-03-01 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891553#comment-15891553
 ] 

Nandakumar commented on HDFS-11395:
---

Thanks for the review, [~jingzhao].

I agree, we should not simply throw the first exception. It would be helpful 
if you could provide some input on the following:

bq. Not mix detailed exception handling logic into RequestHedgingProxyProvider
True, but in the case of a non-RemoteException coming out of the 
ExecutionException, what should be done?

bq. Then in RetryInvocationHandler#newRetryInfo, we should let this method 
return both the RetryInfo and the exception to throw from the MultiException.
In that case, is it OK to add an additional field to 
{{RetryInvocationHandler#RetryInfo}} for holding the Exception when 
{{RetryInfo.action == RetryAction.RetryDecision.FAIL}}?



> RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the 
> Exception thrown from NameNode
> 
>
> Key: HDFS-11395
> URL: https://issues.apache.org/jira/browse/HDFS-11395
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-11395.000.patch, HDFS-11395.001.patch
>
>
> When using RequestHedgingProxyProvider, in case of an Exception (like 
> FileNotFoundException) from the Active NameNode, 
> {{RequestHedgingProxyProvider#RequestHedgingInvocationHandler.invoke}} 
> receives an {{ExecutionException}}, since we use a {{CompletionService}} for 
> the call. The ExecutionException is put into a map and wrapped with 
> {{MultiException}}.
> So for a FileNotFoundException the client receives 
> {{MultiException(Map(ExecutionException(InvocationTargetException(RemoteException(FileNotFoundException)}}
> This causes problems for clients that handle RemoteExceptions.
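A hedged sketch of the unwrapping under discussion (an illustration, not a 
patch; it assumes {{MultiException#getExceptions()}} exposing the per-proxy 
exception map):

{code}
// Dig the original RemoteException out of
// MultiException -> ExecutionException -> InvocationTargetException,
// so callers see what a non-hedged proxy would have thrown.
Throwable cause = multiException.getExceptions().values().iterator().next();
while (cause.getCause() != null
    && (cause instanceof ExecutionException
        || cause instanceof InvocationTargetException)) {
  cause = cause.getCause();
}
if (cause instanceof RemoteException) {
  throw (RemoteException) cause;  // e.g. wrapping FileNotFoundException
}
{code}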






[jira] [Commented] (HDFS-11480) Ozone: TestEndpoint task failure

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891532#comment-15891532
 ] 

Hadoop QA commented on HDFS-11480:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.TestOzoneVolumes |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.cblock.TestCBlockServer |
|   | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.ozone.web.TestOzoneWebAccess |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.scm.TestAllocateContainer |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11480 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855502/HDFS-11480-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e5e560c076a4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 7aa0a44 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18503/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11484) Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891518#comment-15891518
 ] 

Hadoop QA commented on HDFS-11484:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
3s{color} | {color:red} Docker failed to build yetus/hadoop:e809691. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11484 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855532/HDFS-11484-HDFS-7240.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18508/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate
> 
>
> Key: HDFS-11484
> URL: https://issues.apache.org/jira/browse/HDFS-11484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11484-HDFS-7240.001.patch
>
>
> Need to waitFor the right condition for reliable verification. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11484) Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate

2017-03-01 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891512#comment-15891512
 ] 

Xiaoyu Yao commented on HDFS-11484:
---

Ran the test with the patch 30 times in IntelliJ and it passed locally every 
time. 
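
For reference, a minimal sketch of the condition-based wait the summary calls 
for, using the standard GenericTestUtils.waitFor helper; the node-manager 
accessors and poll/timeout values below are illustrative assumptions, not the 
actual TestNodeManager code:

{code}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Poll for the expected state instead of asserting right after a fixed
// sleep; re-evaluating the condition is what makes the check reliable.
// nodeManager and expectedCapacity are hypothetical names for this sketch.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    // Re-check the SCM node report on every poll.
    return nodeManager.getStats().getCapacity() == expectedCapacity;
  }
}, 100 /* check every 100 ms */, 4 * 1000 /* give up after 4 seconds */);
{code}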

> Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate
> 
>
> Key: HDFS-11484
> URL: https://issues.apache.org/jira/browse/HDFS-11484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11484-HDFS-7240.001.patch
>
>
> Need to waitFor the right condition for reliable verification. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11484) Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11484:
--
Status: Patch Available  (was: Open)

> Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate
> 
>
> Key: HDFS-11484
> URL: https://issues.apache.org/jira/browse/HDFS-11484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11484-HDFS-7240.001.patch
>
>
> Need to waitFor the right condition for reliable verification. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11484) Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11484:
--
Attachment: HDFS-11484-HDFS-7240.001.patch

> Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate
> 
>
> Key: HDFS-11484
> URL: https://issues.apache.org/jira/browse/HDFS-11484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11484-HDFS-7240.001.patch
>
>
> Need to waitFor the right condition for reliable verification. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11484) Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11484:
--
Affects Version/s: HDFS-7240

> Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate
> 
>
> Key: HDFS-11484
> URL: https://issues.apache.org/jira/browse/HDFS-11484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Need to waitFor the right condition for reliable verification. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11484) Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate

2017-03-01 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-11484:
-

 Summary: Ozone: Fix flaky TestNodeManager#testScmNodeReportUpdate
 Key: HDFS-11484
 URL: https://issues.apache.org/jira/browse/HDFS-11484
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Need to waitFor the right condition for reliable verification. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11363) Need more diagnosis info when seeing Slow waitForAckedSeqno

2017-03-01 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891487#comment-15891487
 ] 

Brahma Reddy Battula commented on HDFS-11363:
-

Any plan to backport this to branch-2.8 and branch-2.7?

> Need more diagnosis info when seeing Slow waitForAckedSeqno
> ---
>
> Key: HDFS-11363
> URL: https://issues.apache.org/jira/browse/HDFS-11363
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Xiao Chen
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11363.01.patch, HDFS-11363.02.patch, 
> HDFS-11363.03.patch
>
>
> When a client writes a file, it may get the following message when the ACK 
> doesn't come back in a timely manner:
> WARN hdfs.DFSClient: Slow waitForAckedSeqno took 39264ms (threshold=3ms)
> It would be nice to log what file it's writing, and what DataNodes are in 
> the pipeline, together with this message, to facilitate investigating the 
> related performance issue.
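
A hedged sketch of the enriched warning the description asks for; the variable 
names below are illustrative, not the actual DFSClient fields:

{code}
import java.util.Arrays;

// Log the file being written and the pipeline nodes along with the
// slow-ack warning, so the affected path and DataNodes are identifiable.
LOG.warn("Slow waitForAckedSeqno took " + duration + "ms (threshold="
    + threshold + "ms); file=" + src
    + ", datanodes=" + Arrays.toString(nodes));
{code}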



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891485#comment-15891485
 ] 

Hadoop QA commented on HDFS-11412:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m 
56s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11412 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855498/HDFS-11412.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9515e41fe471 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 899d5c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18502/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18502/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Maintenance minimum replication config value allowable range should be {0 - 
> DefaultReplication}
> ---
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch, HDFS-11412.02.patch
>
>
> Currently the allowed value range for 

[jira] [Updated] (HDFS-11416) Refactor out system default erasure coding policy

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11416:
---
Attachment: HDFS-11416.003.patch

Tiny rev to revert the change of Preconditions.checkArgument to checkNotNull, 
which caused the failing TestStripedINodeFile test.

> Refactor out system default erasure coding policy
> -
>
> Key: HDFS-11416
> URL: https://issues.apache.org/jira/browse/HDFS-11416
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11416.001.patch, HDFS-11416.002.patch, 
> HDFS-11416.003.patch
>
>
> As discussed on HDFS-7859, the system default EC policy is mostly a relic 
> from development when the system only supported a single global policy. Now, 
> we support multiple policies, and the system default policy is mostly used by 
> tests.
> We should refactor to remove this concept.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-03-01 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11450:
--
Attachment: HDFS-11450.004.patch

v004 patch to fix the checkstyle warnings. Note that some warnings are 
reported against files not touched by this patch; those files are left 
unchanged.

The failed tests are unrelated.

> HDFS specific network topology classes with storage type info included
> --
>
> Key: HDFS-11450
> URL: https://issues.apache.org/jira/browse/HDFS-11450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11450.001.patch, HDFS-11450.002.patch, 
> HDFS-11450.003.patch, HDFS-11450.004.patch
>
>
> This JIRA adds storage type info into network topology.
> More specifically, this JIRA adds a storage type map by extending 
> {{InnerNodeImpl}} to describe the available storages under the current node's 
> subtree. This map is updated when a node is added/removed from the subtree.
> With this info, when choosing a random node with storage type requirement, 
> the search could then decide to/not to go deeper into a subtree by examining 
> the available storage types first.
> One remaining to-do item: we might still need to separately handle the 
> cases where a DataNode restarts or a disk is hot-swapped; we will file 
> another JIRA for that. A sketch of the bookkeeping follows below.
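
A hedged sketch of the bookkeeping described above; the real patch extends 
{{InnerNodeImpl}}, while the standalone class below only illustrates the 
per-subtree storage-type counts and the pruning check, under assumed names:

{code}
import java.util.EnumMap;
import org.apache.hadoop.fs.StorageType;

// Illustrative only: per-subtree counts of available storages by type,
// updated whenever a node is added to or removed from the subtree.
class SubtreeStorageInfo {
  private final EnumMap<StorageType, Integer> counts =
      new EnumMap<>(StorageType.class);

  void addStorage(StorageType type) {
    counts.merge(type, 1, Integer::sum);
  }

  void removeStorage(StorageType type) {
    counts.merge(type, -1, Integer::sum);
  }

  // A random-node search with a storage type requirement can skip this
  // whole subtree when no storage of the required type exists below it.
  boolean hasStorage(StorageType type) {
    return counts.getOrDefault(type, 0) > 0;
  }
}
{code}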



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11483) hdfs-bin files not selected in tags

2017-03-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891446#comment-15891446
 ] 

Allen Wittenauer commented on HDFS-11483:
-


I'm not sure I follow.

release-3.0.0-alpha2-RC0 is the tag for Apache Hadoop 3.0.0-alpha2.  Looking at 
https://github.com/apache/hadoop/blob/release-3.0.0-alpha2-RC0/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh,
 it's clearly had changes since May 2015, including several of my own patches.  
If we grab the 3.x source and binary distros from the ASF website, it matches.

Looking at the release-2.7.3-RC2 tag and comparing it to branch-2, branch-2.7, 
and other (relatively) recent Apache Hadoop 2.x tags and branches, the copies 
of start-dfs.sh all match.  

So this all looks normal, given:

* Apache Hadoop 3.x alpha releases have the newer, incompatible-with-2.x bits.

* There have been no changes made to start-dfs.sh in the branch-2 stream in a 
very long time, so the code from May 2015 is correct there.

> hdfs-bin files not selected in tags
> ---
>
> Key: HDFS-11483
> URL: https://issues.apache.org/jira/browse/HDFS-11483
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ninad Chaudhari
>
> **Changes after 1fbefe5 on May 8, 2015 have not been added to release 
> tags.**
> - I will explain by specifically talking about 
> "hadoop-hdfs/src/main/bin/start-dfs.sh":
> https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
> - If we switch to tag 2.7.3, or anything stable, or even 3.0, 
> this file has not been changed. 
> It has been committed ... but the commits have not been tagged.
> - As a result, a file that is 5 years old is included in the final release.
> - This affects many versions.
> To reproduce: 
> Download the latest build from any Apache Hadoop mirror and compare the file 
> with the newer version available on git.
> Over the years, many corrections have been made to this file. 
> For example, look at the "SECONDARY_NAMENODE" starting configuration.
> On git it is: 
> {code}
> # secondary namenodes (if any)
> SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
> -secondarynamenodes 2>/dev/null)
> if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
>   SECONDARY_NAMENODES=$(hostname)
> fi
> if [[ -n "${SECONDARY_NAMENODES}" ]]; then
>   echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
>   
>   "${bin}/hadoop-daemons.sh" \
>   --config "${HADOOP_CONF_DIR}" \
>   --hostnames "${SECONDARY_NAMENODES}" \
>   start secondarynamenode
> fi
> {code}
> But on tag 2.7+ it is still: 
> {code}
> SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
> 2>/dev/null)
> if [ -n "$SECONDARY_NAMENODES" ]; then
>   echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"
>   "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
>   --config "$HADOOP_CONF_DIR" \
>   --hostnames "$SECONDARY_NAMENODES" \
>   --script "$bin/hdfs" start secondarynamenode
> fi
> {code}
> Commits after May 2015 have not been merged!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11477) Combine FileIO Profiling Enable and Sampling Fraction Config Key into one

2017-03-01 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11477:
--
Attachment: HDFS-11477.001.patch

Thank you [~arpitagarwal] for reviewing the patch. I have addressed your 
comments in patch v01.

> Combine FileIO Profiling Enable and Sampling Fraction Config Key into one
> -
>
> Key: HDFS-11477
> URL: https://issues.apache.org/jira/browse/HDFS-11477
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-11477.000.patch, HDFS-11477.001.patch
>
>
> For Profiling FileIO events, there are 2 keys:
> - DFS_DATANODE_ENABLE_FILEIO_PROFILING_KEY for enabling the hooks
> - DFS_DATANODE_FILEIO_PROFILING_SAMPLING_FRACTION_KEY for setting the 
> sampling fraction 
> We can instead have only the sampling fraction key and set it to 0 if we want 
> to disable profiling.
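
A hedged sketch of the proposed single-key behavior; the key constant is taken 
from the description, while the helper itself (and its placement in 
{{DFSConfigKeys}}) is an illustrative assumption, not the patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

// One key does double duty: 0 disables profiling, while a value in
// (0, 1] enables the hooks and sets the sampling fraction.
static boolean isFileIoProfilingEnabled(Configuration conf) {
  double fraction = conf.getDouble(
      DFSConfigKeys.DFS_DATANODE_FILEIO_PROFILING_SAMPLING_FRACTION_KEY, 0.0);
  return fraction > 0;
}
{code}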



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891404#comment-15891404
 ] 

Hadoop QA commented on HDFS-11481:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
4s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-11481 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11481 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855504/HDFS-11481.patch.1 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18504/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Priority: Minor
> Attachments: HDFS-11481.patch.1
>
>
> Successful command:
> {code}
> #> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> #> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with the raw path should run successfully and return the same output.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891402#comment-15891402
 ] 

Hadoop QA commented on HDFS-11450:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 19 new + 306 unchanged 
- 4 fixed = 325 total (was 310) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11450 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855463/HDFS-11450.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ef2aab5d7f90 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 899d5c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18496/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18496/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11480) Ozone: TestEndpoint task failure

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891401#comment-15891401
 ] 

Hadoop QA commented on HDFS-11480:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.ozone.web.TestOzoneVolumes |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.TestAllocateContainer |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.TestOzoneWebAccess |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11480 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855480/HDFS-11480-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0f320956fab0 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 7aa0a44 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18499/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18499/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs 

[jira] [Commented] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-03-01 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891395#comment-15891395
 ] 

Arpit Agarwal commented on HDFS-11450:
--

Thanks for the updated patch [~vagarychen].

+1 pending Jenkins.

> HDFS specific network topology classes with storage type info included
> --
>
> Key: HDFS-11450
> URL: https://issues.apache.org/jira/browse/HDFS-11450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11450.001.patch, HDFS-11450.002.patch, 
> HDFS-11450.003.patch
>
>
> This JIRA adds storage type info into network topology.
> More specifically, this JIRA adds a storage type map by extending 
> {{InnerNodeImpl}} to describe the available storages under the current node's 
> subtree. This map is updated when a node is added/removed from the subtree.
> With this info, when choosing a random node with storage type requirement, 
> the search could then decide to/not to go deeper into a subtree by examining 
> the available storage types first.
> One remaining to-do item: we might still need to separately handle the 
> cases where a DataNode restarts or a disk is hot-swapped; we will file 
> another JIRA for that.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2017-03-01 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HDFS-11481:

Attachment: HDFS-11481.patch.1

> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Priority: Minor
> Attachments: HDFS-11481.patch.1
>
>
> Successful command:
> {code}
> #> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> #> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with the raw path should run successfully and return the same output.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2017-03-01 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HDFS-11481:

Status: Patch Available  (was: Open)

> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Priority: Minor
>
> Successful command:
> {code}
> #> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> #> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with the raw path should run successfully and return the same output.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2017-03-01 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HDFS-11481:

Affects Version/s: 2.6.0

> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Priority: Minor
>
> Successful command:
> {code}
> #> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> #> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with the raw path should run successfully and return the same output.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11418) HttpFS should support old SSL clients

2017-03-01 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891393#comment-15891393
 ] 

Lei (Eddy) Xu commented on HDFS-11418:
--

Hi, [~jzhuge]

The patch LGTM overall. +1. 

One thing that might be worth doing is adding a warning message to the script 
when HttpFS allows old SSL clients.  We can do it in a follow-up JIRA though. 

Thanks for the patch!



> HttpFS should support old SSL clients
> -
>
> Key: HDFS-11418
> URL: https://issues.apache.org/jira/browse/HDFS-11418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11418.branch-2.001.patch, 
> HDFS-11418.branch-2.002.patch, HDFS-11418.branch-2.003.patch
>
>
> HADOOP-13812 upgraded Tomcat to 6.0.48 which filters weak ciphers. Old SSL 
> clients such as curl stop working. The symptom is {{NSS error -12286}} when 
> running {{curl -v}}.
> Instead of forcing the SSL clients to upgrade, we can configure Tomcat to 
> explicitly allow enough weak ciphers so that old SSL clients can work.
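
A hedged illustration of the kind of Tomcat configuration the description 
points at; port 14000 is HttpFS's default, but the connector attributes and 
cipher list below are examples, not the list the patch actually ships:

{code}
<!-- server.xml HTTPS connector with an explicit ciphers list that
     re-admits selected older suites for legacy SSL clients. -->
<Connector port="14000" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true" clientAuth="false"
           sslProtocol="TLS"
           ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA"/>
{code}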



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11480) Ozone: TestEndpoint task failure

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11480:
--
Attachment: HDFS-11480-HDFS-7240.002.patch

> Ozone: TestEndpoint task failure
> 
>
> Key: HDFS-11480
> URL: https://issues.apache.org/jira/browse/HDFS-11480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-11480-HDFS-7240.001.patch, 
> HDFS-11480-HDFS-7240.002.patch
>
>
> During a test run, it seems that the TestEndPoint test failed with a null 
> pointer access:
> {code}
> Running org.apache.hadoop.ozone.container.common.TestEndPoint
> Tests run: 13, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 4.413 sec 
> <<< FAILURE! - in org.apache.hadoop.ozone.container.common.TestEndPoint
> testHeartbeatTaskToInvalidNode(org.apache.hadoop.ozone.container.common.TestEndPoint)
>   Time elapsed: 0.029 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.HeartbeatEndpointTask.call(HeartbeatEndpointTask.java:93)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:262)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:270)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.testHeartbeatTaskToInvalidNode(TestEndPoint.java:284)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11416) Refactor out system default erasure coding policy

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891379#comment-15891379
 ] 

Hadoop QA commented on HDFS-11416:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} root: The patch generated 0 new + 298 unchanged - 1 
fixed = 298 total (was 299) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.namenode.TestStripedINodeFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11416 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855468/HDFS-11416.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e536d4de17ce 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 899d5c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18498/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18498/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Comment Edited] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-03-01 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891363#comment-15891363
 ] 

Manoj Govindassamy edited comment on HDFS-11412 at 3/2/17 12:34 AM:


[~mingma],

bq. Maybe we can modify getMinReplicationToBeInMaintenance to return the less 
of {file replication factor, minReplicationToBeInMaintenance}

This sounds good to me. This will cover the files whose block replication 
factor is less than the maintenance min, and will not trigger unnecessary 
re-replication.  {{BlockManager#getMinMaintenanceStorageNum()}} is modified to 
return the min value.

{{BlockManager#getExpectedLiveRedundancyNum()}} is a common routine used for 
reconstruction work apart from DecommissionManager. The current implementation 
of this routine looks good to me.
* (A) In the context of general reconstruction needed for a block, when 
there are no maintenance operations, the expected live redundancy for any 
block should be equal to its block replication factor.
* (B) When the blocks are on maintenance nodes, the expected live 
redundancy for the block is the min of its block replication factor and the 
maintenance min, that is BlockManager#getMinMaintenanceStorageNum().
* And, BlockManager#getExpectedLiveRedundancyNum() should be Max(A, B) to 
work for both non-maintenance and maintenance operations. If you set this to 
Min(A, B), getExpectedLiveRedundancyNum() will end up as Min(A, Min(block_repl, 
maint_min)), which can become 0 whenever the maintenance min is 0 and can 
cause adverse effects. 

Can you please take a look at the latest patch and share your comments?


was (Author: manojg):
[~mingma],

bq. Maybe we can modify getMinReplicationToBeInMaintenance to return the less 
of {file replication factor, minReplicationToBeInMaintenance}

This sounds good to me. This will cover the files whose block replication 
factor is less than the maintenance min, and will not trigger unnecessary 
re-replication.  {{BlockManager#getMinMaintenanceStorageNum()}} is modified to 
return the min value.

{{BlockManager#getExpectedLiveRedundancyNum()}} is a common routine used for 
reconstruction work apart from DecommissionManager. The current implementation 
of this routine looks good to me.
-- (A) In the context of general reconstruction needed for a block, when 
there are no maintenance operations, the expected live redundancy for any 
block should be equal to its block replication factor.
-- (B) When the blocks are on maintenance nodes, the expected live 
redundancy for the block is the min of its block replication factor and the 
maintenance min, that is BlockManager#getMinMaintenanceStorageNum().
-- And, BlockManager#getExpectedLiveRedundancyNum() should be Max(A, B) to 
work for both non-maintenance and maintenance operations. If you set this to 
Min(A, B), getExpectedLiveRedundancyNum() will end up as Min(A, Min(block_repl, 
maint_min)), which can become 0 whenever the maintenance min is 0 and can 
cause adverse effects. 

Can you please take a look at the latest patch and share your comments?

> Maintenance minimum replication config value allowable range should be {0 - 
> DefaultReplication}
> ---
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch, HDFS-11412.02.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to affect 
> the performance of the cluster may wish to have the Maintenance Min 
> Replication number greater than 1, say 2. In the current design, it is 
> possible to have this Maintenance Min Replication configuration, but only 
> after changing the NameNode-level Block Min Replication to 2, which could 
> increase the overall latency for client writes.
> Technically speaking, we should allow Maintenance Min Replication to be 
> in the range 0 to dfs.replication.max.  
> * There is always the config value of 0 for users not wanting any 
> availability/performance guarantees during maintenance. 
> * And, performance-centric workloads can still get maintenance done without 
> major disruptions by having a bigger Maintenance Min Replication. Setting the 
> upper limit as dfs.replication.max could be overkill, as it could trigger 
> re-replication, which Maintenance State is trying to avoid. So, we could 
> allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}.
> {noformat}
> if 

[jira] [Updated] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-03-01 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11412:
--
Attachment: HDFS-11412.02.patch

[~mingma],

bq. Maybe we can modify getMinReplicationToBeInMaintenance to return the less 
of {file replication factor, minReplicationToBeInMaintenance}

This sounds good to me. This will cover the files whose block replication 
factor is less than the maintenance min, and will not trigger unnecessary 
re-replication.  {{BlockManager#getMinMaintenanceStorageNum()}} is modified to 
return the min value.

{{BlockManager#getExpectedLiveRedundancyNum()}} is a common routine used for 
reconstruction work apart from DecommissionManager. The current implementation 
of this routine looks good to me.
-- (A) In the context of general reconstruction needed for a block, when 
there are no maintenance operations, the expected live redundancy for any 
block should be equal to its block replication factor.
-- (B) When the blocks are on maintenance nodes, the expected live 
redundancy for the block is the min of its block replication factor and the 
maintenance min, that is BlockManager#getMinMaintenanceStorageNum().
-- And, BlockManager#getExpectedLiveRedundancyNum() should be Max(A, B) to 
work for both non-maintenance and maintenance operations. If you set this to 
Min(A, B), getExpectedLiveRedundancyNum() will end up as Min(A, Min(block_repl, 
maint_min)), which can become 0 whenever the maintenance min is 0 and can 
cause adverse effects. 

Can you please take a look at the latest patch and share your comments?
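
A hedged sketch of the change discussed above; the method and field names come 
from the comment, while the signature and surrounding details are assumptions, 
not the actual BlockManager code:

{code}
// Return the lesser of the file's replication factor and the configured
// maintenance minimum, so files replicated below the maintenance minimum
// do not trigger unnecessary re-replication.
short getMinMaintenanceStorageNum(BlockInfo block) {
  return (short) Math.min(minReplicationToBeInMaintenance,
      block.getReplication());
}
{code}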

> Maintenance minimum replication config value allowable range should be {0 - 
> DefaultReplication}
> ---
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch, HDFS-11412.02.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to affect 
> the performance of the cluster may wish to have the Maintenance Min 
> Replication number greater than 1, say 2. In the current design, it is 
> possible to have this Maintenance Min Replication configuration, but only 
> after changing the NameNode-level Block Min Replication to 2, which could 
> increase the overall latency for client writes.
> Technically speaking, we should allow Maintenance Min Replication to be 
> in the range 0 to dfs.replication.max.  
> * There is always the config value of 0 for users not wanting any 
> availability/performance guarantees during maintenance. 
> * And, performance-centric workloads can still get maintenance done without 
> major disruptions by having a bigger Maintenance Min Replication. Setting the 
> upper limit as dfs.replication.max could be overkill, as it could trigger 
> re-replication, which Maintenance State is trying to avoid. So, we could 
> allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}.
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>   + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>   + " = " + minMaintenanceR + " > "
>   + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
> + " = " + minR);
> }
> {noformat}
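
A hedged sketch of the relaxed validation the description proposes, capping 
the value at the configured default replication; {{defaultR}} is illustrative 
shorthand for the dfs.replication setting, and this is not the actual patch:

{code}
// Allow the maintenance minimum anywhere in [0, dfs.replication].
if (minMaintenanceR < 0 || minMaintenanceR > defaultR) {
  throw new IOException("Unexpected configuration parameters: "
      + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
      + " = " + minMaintenanceR + ", must be in the range 0 to "
      + DFSConfigKeys.DFS_REPLICATION_KEY + " = " + defaultR);
}
{code}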



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11314) Enforce set of enabled EC policies on the NameNode

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891344#comment-15891344
 ] 

Hadoop QA commented on HDFS-11314:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 709 unchanged - 1 fixed = 713 total (was 710) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11314 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855469/HDFS-11314.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux df02c401942d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 899d5c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18497/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18497/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18497/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-11418) HttpFS should support old SSL clients

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891340#comment-15891340
 ] 

Hadoop QA commented on HDFS-11418:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
58s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 8s{color} | {color:green} The patch generated 0 new + 511 unchanged - 4 fixed 
= 511 total (was 515) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-11418 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855489/HDFS-11418.branch-2.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  shellcheck  shelldocs  |
| uname | Linux 8edb594f9ab7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / ff08f54 |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_121 

[jira] [Updated] (HDFS-11314) Enforce set of enabled EC policies on the NameNode

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11314:
---
Status: Open  (was: Patch Available)

Cancelling the patch while we wait for HDFS-11416; the two conflict, and 
HDFS-11416 should be easier to get in first.

> Enforce set of enabled EC policies on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11314.001.patch, HDFS-11314.002.patch
>
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.
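A minimal sketch of what such an enforcement check might look like on the NN 
side, assuming a comma-delimited config key listing the enabled policy names 
(the key name and method are illustrative assumptions, not the actual patch):

{code}
// Illustrative only: reject EC policies outside the admin-approved set.
static void checkPolicyEnabled(Configuration conf, ErasureCodingPolicy policy) {
  // Hypothetical key holding the comma-delimited enabled-policy names.
  Set<String> enabled = new HashSet<>(Arrays.asList(
      conf.getTrimmedStrings("dfs.namenode.ec.policies.enabled")));
  if (!enabled.contains(policy.getName())) {
    throw new HadoopIllegalArgumentException("EC policy " + policy.getName()
        + " is not enabled; enabled policies are " + enabled);
  }
}
{code}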



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11483) hdfs-bin files not selected in tags

2017-03-01 Thread Ninad Chaudhari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ninad Chaudhari updated HDFS-11483:
---
Description: 
** Changes after 1fbefe5  on May 8, 2015
have not been added to Release tags. **

I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.




  was:
**Changes after 1fbefe5  on May 8, 2015
have not been added to Release tags.**

I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.





> hdfs-bin files not selected in tags
> ---
>
> Key: HDFS-11483
> URL: https://issues.apache.org/jira/browse/HDFS-11483
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ninad Chaudhari
>
> ** Changes after 1fbefe5  on May 8, 2015
> have not been added to Release tags. **
> I will explain using 
> "hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
> https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
> If we switch to tag 2.7.3, any other stable tag, or even 3.0,
> this file has not changed. 
> The fixes have been committed, but the commits were never included in a release tag.
> As a result, a file that is 5 years old is included in the final release.
> This affects many versions.
> To reproduce: 
> Download the latest build from any Apache Hadoop mirror and compare the file 
> with the newer version available on git.
> Over the years, many corrections have been made to this file. 
> For example, look at the "SECONDARY_NAMENODE" starting configuration.
> On git it is:
> "
> # secondary namenodes (if any)
> SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" 

[jira] [Updated] (HDFS-11483) hdfs-bin files not selected in tags

2017-03-01 Thread Ninad Chaudhari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ninad Chaudhari updated HDFS-11483:
---
Description: 
**Changes after 1fbefe5  on May 8, 2015
have not been added to Release tags.**

I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.




  was:
** Changes after 1fbefe5  on May 8, 2015
have not been added to Release tags. **

I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.





> hdfs-bin files not selected in tags
> ---
>
> Key: HDFS-11483
> URL: https://issues.apache.org/jira/browse/HDFS-11483
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ninad Chaudhari
>
> **Changes after 1fbefe5  on May 8, 2015
> have not been added to Release tags.**
> I will explain using 
> "hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
> https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
> If we switch to tag 2.7.3, any other stable tag, or even 3.0,
> this file has not changed. 
> The fixes have been committed, but the commits were never included in a release tag.
> As a result, a file that is 5 years old is included in the final release.
> This affects many versions.
> To reproduce: 
> Download the latest build from any Apache Hadoop mirror and compare the file 
> with the newer version available on git.
> Over the years, many corrections have been made to this file. 
> For example, look at the "SECONDARY_NAMENODE" starting configuration.
> On git it is:
> "
> # secondary namenodes (if any)
> SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" 

[jira] [Updated] (HDFS-11483) hdfs-bin files not selected in tags

2017-03-01 Thread Ninad Chaudhari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ninad Chaudhari updated HDFS-11483:
---
Description: 
**Changes after 1fbefe5  on May 8, 2015
have not been added to Release tags.**

I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.




  was:
**Changes after 1fbefe5  on May 8, 2015
have not been added to Release tags.**

I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.


When we run sbin/start-dfs.sh, even if the secondary namenode address is 
0.0.0.0, it is instructed to start.

To avoid this, a protection was added, which is present in the recent version,
but that fix was never tagged into a release.
Hence, if we download the Hadoop tar from a mirror and try to execute it,
it starts the secondarynamenode on 0.0.0.0
when it is supposed to switch to localhost.

> hdfs-bin files not selected in tags
> ---
>
> Key: HDFS-11483
> URL: https://issues.apache.org/jira/browse/HDFS-11483
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ninad Chaudhari
>
> **Changes after 1fbefe5  on May 8, 2015
> have not been added to Release tags.**
> I will explain using 
> "hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
> https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
> If we switch to tag 2.7.3, any other stable tag, or even 3.0,
> this file has not changed. 
> The fixes have been committed, but the commits were never included in a release tag.
> -Result causes a file that is 5 years old to be included in Final 

[jira] [Updated] (HDFS-11483) hdfs-bin files not selected in tags

2017-03-01 Thread Ninad Chaudhari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ninad Chaudhari updated HDFS-11483:
---
Description: 
**Changes after 1fbefe5  on May 8, 2015
have not been added to Release tags.**

I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.


When we run sbin/start-dfs.sh, even if the secondary namenode address is 
0.0.0.0, it is instructed to start.

  was:
I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.


When we run sbin/start-dfs.sh, even if the secondary namenode address is 
0.0.0.0, it is instructed to start.


> hdfs-bin files not selected in tags
> ---
>
> Key: HDFS-11483
> URL: https://issues.apache.org/jira/browse/HDFS-11483
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ninad Chaudhari
>
> **Changes after 1fbefe5  on May 8, 2015
> have not been added to Release tags.**
> I will explain using 
> "hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
> https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
> If we switch to tag 2.7.3, any other stable tag, or even 3.0,
> this file has not changed. 
> The fixes have been committed, but the commits were never included in a release tag.
> As a result, a file that is 5 years old is included in the final release.
> This affects many versions.
> To reproduce: 
> Download the latest build from any Apache Hadoop mirror and compare the file 
> with the newer version available on git.
> Over the years, many corrections have been made to this file. 
> For example, look at the "SECONDARY_NAMENODE" starting configuration.
> On git it is:
> "
> # secondary namenodes (if any)
> 

[jira] [Commented] (HDFS-11314) Enforce set of enabled EC policies on the NameNode

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891323#comment-15891323
 ] 

Hadoop QA commented on HDFS-11314:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-11314 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11314 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855492/HDFS-11314.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18501/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Enforce set of enabled EC policies on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11314.001.patch, HDFS-11314.002.patch
>
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11483) hdfs-bin files not selected in tags

2017-03-01 Thread Ninad Chaudhari (JIRA)
Ninad Chaudhari created HDFS-11483:
--

 Summary: hdfs-bin files not selected in tags
 Key: HDFS-11483
 URL: https://issues.apache.org/jira/browse/HDFS-11483
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Ninad Chaudhari


I will explain using 
"hadoop-hdfs/src/main/bin/start-dfs.sh" as a specific example:
https://github.com/apache/hadoop/blob/0eb4b513b76bc944c31b15cd6558901ae44bf931/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

If we switch to tag 2.7.3, any other stable tag, or even 3.0,
this file has not changed. 
The fixes have been committed, but the commits were never included in a release tag.

As a result, a file that is 5 years old is included in the final release.
This affects many versions.

To reproduce: 
Download the latest build from any Apache Hadoop mirror and compare the file 
with the newer version available on git.

Over the years, many corrections have been made to this file. 
For example, look at the "SECONDARY_NAMENODE" starting configuration.

On git it is:
"
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf 
-secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"
  
  "${bin}/hadoop-daemons.sh" \
  --config "${HADOOP_CONF_DIR}" \
  --hostnames "${SECONDARY_NAMENODES}" \
  start secondarynamenode
fi

"

But on tag 2.7+ it is still:
"
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 
2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$SECONDARY_NAMENODES" \
  --script "$bin/hdfs" start secondarynamenode
fi

"

Commits after May 2015 have not been merged into the release tags.


When we run sbin/start-dfs.sh, even if the secondary namenode address is 
0.0.0.0, it is instructed to start.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11314) Enforce set of enabled EC policies on the NameNode

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11314:
---
Attachment: HDFS-11314.002.patch

Glad I added the additional tests; I caught some places where we were querying 
the set of active policies rather than the set of system policies. I cleaned up 
the getters in ECPolicyManager to clarify this.

> Enforce set of enabled EC policies on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11314.001.patch, HDFS-11314.002.patch
>
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11418) HttpFS should support old SSL clients

2017-03-01 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11418:
--
Attachment: HDFS-11418.branch-2.003.patch

Patch branch-2.003:
* Fix an issue similar to the one in HADOOP-14131

> HttpFS should support old SSL clients
> -
>
> Key: HDFS-11418
> URL: https://issues.apache.org/jira/browse/HDFS-11418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11418.branch-2.001.patch, 
> HDFS-11418.branch-2.002.patch, HDFS-11418.branch-2.003.patch
>
>
> HADOOP-13812 upgraded Tomcat to 6.0.48 which filters weak ciphers. Old SSL 
> clients such as curl stop working. The symptom is {{NSS error -12286}} when 
> running {{curl -v}}.
> Instead of forcing the SSL clients to upgrade, we can configure Tomcat to 
> explicitly allow enough weak ciphers so that old SSL clients can work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11418) HttpFS should support old SSL clients

2017-03-01 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11418:
--
Target Version/s: 2.9.0  (was: 2.8.0, 2.7.4, 2.6.6)

> HttpFS should support old SSL clients
> -
>
> Key: HDFS-11418
> URL: https://issues.apache.org/jira/browse/HDFS-11418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11418.branch-2.001.patch, 
> HDFS-11418.branch-2.002.patch, HDFS-11418.branch-2.003.patch
>
>
> HADOOP-13812 upgraded Tomcat to 6.0.48 which filters weak ciphers. Old SSL 
> clients such as curl stop working. The symptom is {{NSS error -12286}} when 
> running {{curl -v}}.
> Instead of forcing the SSL clients to upgrade, we can configure Tomcat to 
> explicitly allow enough weak ciphers so that old SSL clients can work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11418) HttpFS should support old SSL clients

2017-03-01 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11418:
--
Status: Patch Available  (was: Open)

> HttpFS should support old SSL clients
> -
>
> Key: HDFS-11418
> URL: https://issues.apache.org/jira/browse/HDFS-11418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11418.branch-2.001.patch, 
> HDFS-11418.branch-2.002.patch, HDFS-11418.branch-2.003.patch
>
>
> HADOOP-13812 upgraded Tomcat to 6.0.48 which filters weak ciphers. Old SSL 
> clients such as curl stop working. The symptom is {{NSS error -12286}} when 
> running {{curl -v}}.
> Instead of forcing the SSL clients to upgrade, we can configure Tomcat to 
> explicitly allow enough weak ciphers so that old SSL clients can work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11480) Ozone: TestEndpoint task failure

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11480:
--
Status: Patch Available  (was: Open)

> Ozone: TestEndpoint task failure
> 
>
> Key: HDFS-11480
> URL: https://issues.apache.org/jira/browse/HDFS-11480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-11480-HDFS-7240.001.patch
>
>
> During a test run, it seems that the TestEndPoint test failed with a null 
> pointer access:
> {code}
> Running org.apache.hadoop.ozone.container.common.TestEndPoint
> Tests run: 13, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 4.413 sec 
> <<< FAILURE! - in org.apache.hadoop.ozone.container.common.TestEndPoint
> testHeartbeatTaskToInvalidNode(org.apache.hadoop.ozone.container.common.TestEndPoint)
>   Time elapsed: 0.029 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.HeartbeatEndpointTask.call(HeartbeatEndpointTask.java:93)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:262)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:270)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.testHeartbeatTaskToInvalidNode(TestEndPoint.java:284)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11480) Ozone: TestEndpoint task failure

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11480:
--
Attachment: HDFS-11480-HDFS-7240.001.patch

Attaching a patch that fixes the TestEndpoint failure.

> Ozone: TestEndpoint task failure
> 
>
> Key: HDFS-11480
> URL: https://issues.apache.org/jira/browse/HDFS-11480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-11480-HDFS-7240.001.patch
>
>
> During a test run, it seems that the TestEndPoint test failed with a null 
> pointer access:
> {code}
> Running org.apache.hadoop.ozone.container.common.TestEndPoint
> Tests run: 13, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 4.413 sec 
> <<< FAILURE! - in org.apache.hadoop.ozone.container.common.TestEndPoint
> testHeartbeatTaskToInvalidNode(org.apache.hadoop.ozone.container.common.TestEndPoint)
>   Time elapsed: 0.029 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.HeartbeatEndpointTask.call(HeartbeatEndpointTask.java:93)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:262)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:270)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.testHeartbeatTaskToInvalidNode(TestEndPoint.java:284)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11482) Add storage type demand into DFSNetworkTopology#chooseRandom

2017-03-01 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11482:
-

 Summary: Add storage type demand into 
DFSNetworkTopology#chooseRandom
 Key: HDFS-11482
 URL: https://issues.apache.org/jira/browse/HDFS-11482
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


HDFS-11450 adds storage type info into the network topology. With this info, we 
may change chooseRandom to take a storage type requirement as a parameter, only 
descending into subtrees that have the required storage type available. This 
way we avoid blindly picking nodes that are not applicable. A toy sketch 
follows.
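A toy sketch of the pruning idea, using made-up names rather than the actual 
DFSNetworkTopology API (a sketch under those assumptions, not the eventual 
patch):

{code}
import java.util.*;

// Toy topology node: tracks how many datanodes in its subtree provide each
// storage type, so a random search can skip subtrees that cannot satisfy
// the requirement instead of blindly recursing into them.
class TopoNode {
  final List<TopoNode> children = new ArrayList<>();
  final Map<String, Integer> storageCount = new HashMap<>(); // type -> count
  String datanode; // non-null for leaf nodes

  static TopoNode chooseRandom(TopoNode node, String type, Random rand) {
    Integer total = node.storageCount.get(type);
    if (total == null || total == 0) {
      return null; // prune: nothing under this subtree has the required type
    }
    if (node.datanode != null) {
      return node; // an applicable leaf
    }
    // Pick a child weighted by how many applicable datanodes it contains.
    int pick = rand.nextInt(total);
    for (TopoNode child : node.children) {
      int c = child.storageCount.getOrDefault(type, 0);
      if (pick < c) {
        return chooseRandom(child, type, rand);
      }
      pick -= c;
    }
    return null; // unreachable if the counters are consistent
  }
}
{code}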



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11425) Ozone : add client-facing container APIs and container references

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891219#comment-15891219
 ] 

Hadoop QA commented on HDFS-11425:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 98 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.ozone.web.TestOzoneVolumes |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.ozone.container.common.TestEndPoint |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.TestContainerOperations |
|   | hadoop.hdfs.tools.TestDFSAdmin |
|   | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.TestAllocateContainer |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.TestOzoneWebAccess |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11425 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855449/HDFS-11425-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  

[jira] [Commented] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-03-01 Thread yunjiong zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891198#comment-15891198
 ] 

yunjiong zhao commented on HDFS-11384:
--

Thank you [~benoyantony] for your time to review this patch.
{quote}Sleeping inside the Synchronized block should be avoided as it will 
prevent other threads from obtaining the lock while the thread is sleeping. 
{quote}
Sleeping inside the synchronized block is deliberate.
In the balancer there are multiple threads (200 by default) that may call 
getBlocks at the same time. If a user needs to set 
dfs.balancer.getBlocks.interval.millis to slow down the balancer, it won't work 
well without a lock, because in the worst case there would still be 200 
getBlocks calls sent to the NameNode at the same time.

{quote}It will be better to keep track of the interval between successive 
getBlocks and sleep only for the required time. {quote}
Since by default this patch doesn't change anything, and only adds an option 
that lets the user slow down how fast the balancer sends getBlocks to the 
NameNode, I'd like to keep it as simple as possible. A sketch of the idea 
follows.
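A minimal sketch of the throttling described above, with assumed field and 
method names (not the actual HDFS-11384 patch): every dispatcher thread funnels 
through one lock, and the sleep deliberately happens while holding it, so the 
NameNode sees at most one getBlocks call per interval.

{code}
// Illustrative only. `namenode` is an assumed NamenodeProtocol proxy and
// `getBlocksIntervalMs` holds dfs.balancer.getBlocks.interval.millis.
private final Object getBlocksRateLimiter = new Object();
private long getBlocksIntervalMs;

private BlocksWithLocations getBlocksWithInterval(DatanodeInfo dn, long size)
    throws IOException {
  synchronized (getBlocksRateLimiter) {
    if (getBlocksIntervalMs > 0) {
      try {
        // Sleeping inside the lock is intentional: it spaces out calls from
        // all ~200 dispatcher threads, not just the current one.
        Thread.sleep(getBlocksIntervalMs);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
      }
    }
    return namenode.getBlocks(dn, size);
  }
}
{code}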

> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> -
>
> Key: HDFS-11384
> URL: https://issues.apache.org/jira/browse/HDFS-11384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.3
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: balancer.day.png, balancer.week.png, HDFS-11384.001.patch
>
>
> Running the balancer on a Hadoop cluster that has more than 3000 DataNodes 
> can cause the NameNode's rpc.CallQueueLength to spike. We observed that this 
> situation could cause an HBase cluster failure due to RegionServer WAL 
> timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11314) Enforce set of enabled EC policies on the NameNode

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11314:
---
Status: Patch Available  (was: Open)

I still want to add a few more unit tests, but posting this to get a precommit 
run.

> Enforce set of enabled EC policies on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11314.001.patch
>
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11314) Enforce set of enabled EC policies on the NameNode

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11314:
---
Summary: Enforce set of enabled EC policies on the NameNode  (was: Validate 
client-provided EC schema on the NameNode)

> Enforce set of enabled EC policies on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11314.001.patch
>
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11314) Validate client-provided EC schema on the NameNode

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11314:
---
Attachment: HDFS-11314.001.patch

Patch attached. This adds a new comma-delimited config key that allows users to 
specify the list of enabled policies that is enforced by the NN.

> Validate client-provided EC schema on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11314.001.patch
>
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. a RS (10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11416) Refactor out system default erasure coding policy

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11416:
---
Attachment: HDFS-11416.002.patch

I think this needed a slight rebase after HDFS-11382, attached.

> Refactor out system default erasure coding policy
> -
>
> Key: HDFS-11416
> URL: https://issues.apache.org/jira/browse/HDFS-11416
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11416.001.patch, HDFS-11416.002.patch
>
>
> As discussed on HDFS-7859, the system default EC policy is mostly a relic 
> from development when the system only supported a single global policy. Now, 
> we support multiple policies, and the system default policy is mostly used by 
> tests.
> We should refactor to remove this concept.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-03-01 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11450:
--
Attachment: HDFS-11450.003.patch

Thanks [~arpitagarwal] for the review! Uploaded the v003 patch to address the 
comments.

> HDFS specific network topology classes with storage type info included
> --
>
> Key: HDFS-11450
> URL: https://issues.apache.org/jira/browse/HDFS-11450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11450.001.patch, HDFS-11450.002.patch, 
> HDFS-11450.003.patch
>
>
> This JIRA adds storage type info into network topology.
> More specifically, this JIRA adds a storage type map by extending 
> {{InnerNodeImpl}} to describe the available storages under the current node's 
> subtree. This map is updated when a node is added/removed from the subtree.
> With this info, when choosing a random node with a storage type requirement, 
> the search can then decide whether or not to go deeper into a subtree by 
> examining the available storage types first.
> One remaining to-do item is that we might still need to separately handle the 
> cases where a DataNode restarts or a disk is hot-swapped; another JIRA will be 
> filed for that.






[jira] [Commented] (HDFS-11477) Combine FileIO Profiling Enable and Sampling Fraction Config Key into one

2017-03-01 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891124#comment-15891124
 ] 

Arpit Agarwal commented on HDFS-11477:
--

Thanks for the improvement [~hanishakoneru]. A couple of comments:
# The {{fileIOSamplingFraction == 0}} check may not work. Instead you can check 
{{abs(fileIOSamplingFraction) < 0.01}}. If so, set isEnabled to false.
# We can also add a check for negative numbers, i.e. if 
{{fileIOSamplingFraction < -0.01}}, log a warning and set isEnabled to 
false.
# The documentation in Metrics.md can state that this should be a fraction 
between 0.0 and 1.0, with 0.0 meaning profiling is not enabled.
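
A small sketch of the checks suggested in #1 and #2 (variable and key names are 
assumed from the discussion above, not necessarily the final patch):
{code}
// Sketch only; names follow the discussion in this JIRA.
double fileIOSamplingFraction = conf.getDouble(
    DFS_DATANODE_FILEIO_PROFILING_SAMPLING_FRACTION_KEY, 0.0);
boolean isEnabled;
if (Math.abs(fileIOSamplingFraction) < 0.01) {
  isEnabled = false;  // treated as disabled, including the 0.0 default
} else if (fileIOSamplingFraction < 0) {
  LOG.warn("Invalid negative sampling fraction {}; disabling profiling",
      fileIOSamplingFraction);
  isEnabled = false;
} else {
  isEnabled = true;
}
{code}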

> Combine FileIO Profiling Enable and Sampling Fraction Config Key into one
> -
>
> Key: HDFS-11477
> URL: https://issues.apache.org/jira/browse/HDFS-11477
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-11477.000.patch
>
>
> For Profiling FileIO events, there are 2 keys:
> - DFS_DATANODE_ENABLE_FILEIO_PROFILING_KEY for enabling the hooks
> - DFS_DATANODE_FILEIO_PROFILING_SAMPLING_FRACTION_KEY for setting the 
> sampling fraction 
> We can instead have only the sampling fraction key and set it to 0 if we want 
> to disable profiling.






[jira] [Commented] (HDFS-11478) Update EC commands in HDFSCommands.md

2017-03-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891098#comment-15891098
 ] 

Andrew Wang commented on HDFS-11478:


+1 thanks Yiqun!

> Update EC commands in HDFSCommands.md
> -
>
> Key: HDFS-11478
> URL: https://issues.apache.org/jira/browse/HDFS-11478
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11478.001.patch
>
>
> The EC commands in {{HDFSCommands.md}} are out of date. There are some places 
> that need to be updated.
> Current EC commands in {{HDFSCommands.md}}:
> {code}
>hdfs ec [generic options]
>[-setPolicy [-p ] ]
>[-getPolicy ]
>[-listPolicies]
> {code}
> But after the work on HDFS-11426 and HDFS-11072, the EC command usages 
> changed as follows, as shown in {{HDFSErasureCoding.md}}:
> {code}
>hdfs ec [generic options]
>  [-setPolicy -policy  -path ]
>  [-getPolicy -path ]
>  [-unsetPolicy -path ]
>  
> {code}






[jira] [Commented] (HDFS-11479) Socket re-use address option should be used in SimpleUdpServer

2017-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891094#comment-15891094
 ] 

Hudson commented on HDFS-11479:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11327 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11327/])
HDFS-11479. Socket re-use address option should be used in (arp: rev 
899d5c4d49f00d90ddc3632efb2df92841867192)
* (edit) 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpServer.java


> Socket re-use address option should be used in SimpleUdpServer
> --
>
> Key: HDFS-11479
> URL: https://issues.apache.org/jira/browse/HDFS-11479
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 2.9.0
>
> Attachments: HDFS-11479.001.patch
>
>
> NFS gateway restart can fail because of a bind error in SimpleUdpServer.
> The re-use address option should be used in SimpleUdpServer so that the socket 
> bind can succeed while the address is in the TIME_WAIT state.
> {noformat}
> 2017-02-28 04:19:53,495 FATAL mount.MountdBase 
> (MountdBase.java:startUDPServer(66)) - Failed to start the UDP server.
> org.jboss.netty.channel.ChannelException: Failed to bind to: 
> 0.0.0.0/0.0.0.0:4242
> at 
> org.jboss.netty.bootstrap.ConnectionlessBootstrap.bind(ConnectionlessBootstrap.java:204)
> at 
> org.apache.hadoop.oncrpc.SimpleUdpServer.run(SimpleUdpServer.java:68)
> at 
> org.apache.hadoop.mount.MountdBase.startUDPServer(MountdBase.java:64)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:97)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:691)
> at 
> sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
> at 
> org.jboss.netty.channel.socket.nio.NioDatagramPipelineSink.bind(NioDatagramPipelineSink.java:129)
> at 
> org.jboss.netty.channel.socket.nio.NioDatagramPipelineSink.eventSunk(NioDatagramPipelineSink.java:77)
> at org.jboss.netty.channel.Channels.bind(Channels.java:561)
> at 
> org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:189)
> at 
> org.jboss.netty.bootstrap.ConnectionlessBootstrap.bind(ConnectionlessBootstrap.java:198)
> ... 11 more
> {noformat}






[jira] [Commented] (HDFS-11450) HDFS specific network topology classes with storage type info included

2017-03-01 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891061#comment-15891061
 ] 

Arpit Agarwal commented on HDFS-11450:
--

The unit test also looks good to me. We can add the following check at the end 
of testAddAndRemoveTopology.
{code}
assertNull(cluster.getNode("/l1/d3"));
{code}

> HDFS specific network topology classes with storage type info included
> --
>
> Key: HDFS-11450
> URL: https://issues.apache.org/jira/browse/HDFS-11450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11450.001.patch, HDFS-11450.002.patch
>
>
> This JIRA adds storage type info into network topology.
> More specifically, this JIRA adds a storage type map by extending 
> {{InnerNodeImpl}} to describe the available storages under the current node's 
> subtree. This map is updated when a node is added/removed from the subtree.
> With this info, when choosing a random node with a storage type requirement, 
> the search can then decide whether or not to go deeper into a subtree by 
> examining the available storage types first.
> One remaining to-do item is that we might still need to separately handle the 
> cases where a DataNode restarts or a disk is hot-swapped; another JIRA will be 
> filed for that.






[jira] [Commented] (HDFS-11196) Ozone: Improve logging and error handling in the container layer

2017-03-01 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891056#comment-15891056
 ] 

Chen Liang commented on HDFS-11196:
---

Thanks [~anu] for working on this. The v001 patch LGTM, +1

> Ozone: Improve logging and error handling in the container layer
> 
>
> Key: HDFS-11196
> URL: https://issues.apache.org/jira/browse/HDFS-11196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11196-HDFS-7240.001.patch
>
>
> Improve logging and error handling in the container layer.
>  * With this change, Storage Containers return StorageContainerException.
>  * Precondition checks fail with a human-readable error.
>  * All failed requests are logged with traceID in the dispatcher.
>  * Proper error codes are returned for the corresponding failures.






[jira] [Updated] (HDFS-11416) Refactor out system default erasure coding policy

2017-03-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11416:
---
Status: Patch Available  (was: Open)

> Refactor out system default erasure coding policy
> -
>
> Key: HDFS-11416
> URL: https://issues.apache.org/jira/browse/HDFS-11416
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11416.001.patch
>
>
> As discussed on HDFS-7859, the system default EC policy is mostly a relic 
> from development when the system only supported a single global policy. Now, 
> we support multiple policies, and the system default policy is mostly used by 
> tests.
> We should refactor to remove this concept.






[jira] [Updated] (HDFS-11479) Socket re-use address option should be used in SimpleUdpServer

2017-03-01 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11479:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks for the contribution [~msingh].
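
For reference, enabling the option on a Netty 3 {{ConnectionlessBootstrap}} looks 
roughly like this (a sketch of the idea, not necessarily the exact committed 
change):
{code}
// Sketch: let the UDP bind succeed while the address is still in TIME_WAIT.
ConnectionlessBootstrap b = new ConnectionlessBootstrap(
    new NioDatagramChannelFactory(Executors.newCachedThreadPool()));
b.setOption("reuseAddress", true);
b.bind(new InetSocketAddress(port));
{code}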

> Socket re-use address option should be used in SimpleUdpServer
> --
>
> Key: HDFS-11479
> URL: https://issues.apache.org/jira/browse/HDFS-11479
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 2.9.0
>
> Attachments: HDFS-11479.001.patch
>
>
> NFS gateway restart can fail because of a bind error in SimpleUdpServer.
> The re-use address option should be used in SimpleUdpServer so that the socket 
> bind can succeed while the address is in the TIME_WAIT state.
> {noformat}
> 2017-02-28 04:19:53,495 FATAL mount.MountdBase 
> (MountdBase.java:startUDPServer(66)) - Failed to start the UDP server.
> org.jboss.netty.channel.ChannelException: Failed to bind to: 
> 0.0.0.0/0.0.0.0:4242
> at 
> org.jboss.netty.bootstrap.ConnectionlessBootstrap.bind(ConnectionlessBootstrap.java:204)
> at 
> org.apache.hadoop.oncrpc.SimpleUdpServer.run(SimpleUdpServer.java:68)
> at 
> org.apache.hadoop.mount.MountdBase.startUDPServer(MountdBase.java:64)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:97)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:691)
> at 
> sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
> at 
> org.jboss.netty.channel.socket.nio.NioDatagramPipelineSink.bind(NioDatagramPipelineSink.java:129)
> at 
> org.jboss.netty.channel.socket.nio.NioDatagramPipelineSink.eventSunk(NioDatagramPipelineSink.java:77)
> at org.jboss.netty.channel.Channels.bind(Channels.java:561)
> at 
> org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:189)
> at 
> org.jboss.netty.bootstrap.ConnectionlessBootstrap.bind(ConnectionlessBootstrap.java:198)
> ... 11 more
> {noformat}






[jira] [Updated] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2017-03-01 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HDFS-11481:

Priority: Minor  (was: Major)

> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Mavin Martin
>Priority: Minor
>
> Successful command:
> {code}
> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with the raw path should run successfully and return the same output.






[jira] [Updated] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2017-03-01 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HDFS-11481:

Description: 
Successful command:
{code}
#> hdfs snapshotDiff /tmp/dir s1 s2
Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
M   .
+   ./file1.txt
{code}

Unsuccessful command:
{code}
#> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
{code}

Prefixing with the raw path should run successfully and return the same output.

  was:
Successful command:
{code}
hdfs snapshotDiff /tmp/dir s1 s2
Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
M   .
+   ./file1.txt
{code}

Unsuccessful command:
{code}
hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
{code}

Prefixing with the raw path should run successfully and return the same output.


> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Mavin Martin
>Priority: Minor
>
> Successful command:
> {code}
> #> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> #> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with the raw path should run successfully and return the same output.






[jira] [Created] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2017-03-01 Thread Mavin Martin (JIRA)
Mavin Martin created HDFS-11481:
---

 Summary: hdfs snapshotDiff /.reserved/raw/... fails on 
snapshottable directories
 Key: HDFS-11481
 URL: https://issues.apache.org/jira/browse/HDFS-11481
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Mavin Martin


Successful command:
{code}
hdfs snapshotDiff /tmp/dir s1 s2
Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
M   .
+   ./file1.txt
{code}

Unsuccessful command:
{code}
hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
{code}

Prefixing with the raw path should run successfully and return the same output.






[jira] [Commented] (HDFS-11425) Ozone : add client-facing container APIs and container references

2017-03-01 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890985#comment-15890985
 ] 

Anu Engineer commented on HDFS-11425:
-

+1, pending Jenkins for v6 patch.

> Ozone : add client-facing container APIs and container references
> -
>
> Key: HDFS-11425
> URL: https://issues.apache.org/jira/browse/HDFS-11425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11425-HDFS-7240.001.patch, 
> HDFS-11425-HDFS-7240.002.patch, HDFS-11425-HDFS-7240.003.patch, 
> HDFS-11425-HDFS-7240.004.patch, HDFS-11425-HDFS-7240.005.patch, 
> HDFS-11425-HDFS-7240.006.patch
>
>
> This JIRA adds the container APIs exposed to users, such as create container, 
> delete container, etc.






[jira] [Commented] (HDFS-11447) Ozone: SCM: Send node report to SCM with heartbeat

2017-03-01 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890977#comment-15890977
 ] 

Xiaoyu Yao commented on HDFS-11447:
---

Thanks [~anu] for the review and commit.

> Ozone: SCM: Send node report to SCM with heartbeat
> --
>
> Key: HDFS-11447
> URL: https://issues.apache.org/jira/browse/HDFS-11447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-11447-HDFS-7240.001.patch, 
> HDFS-11447-HDFS-7240.002.patch, HDFS-11447-HDFS-7240.003.patch
>
>
> The storage utilization information on datanodes should be reported to SCM to 
> help decide container allocation.






[jira] [Assigned] (HDFS-11480) Ozone: TestEndpoint task failure

2017-03-01 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-11480:
-

Assignee: Xiaoyu Yao

> Ozone: TestEndpoint task failure
> 
>
> Key: HDFS-11480
> URL: https://issues.apache.org/jira/browse/HDFS-11480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
>
> During a test run, it seems that the TestEndPoint test failed with a null 
> pointer access:
> {code}
> Running org.apache.hadoop.ozone.container.common.TestEndPoint
> Tests run: 13, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 4.413 sec 
> <<< FAILURE! - in org.apache.hadoop.ozone.container.common.TestEndPoint
> testHeartbeatTaskToInvalidNode(org.apache.hadoop.ozone.container.common.TestEndPoint)
>   Time elapsed: 0.029 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.HeartbeatEndpointTask.call(HeartbeatEndpointTask.java:93)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:262)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:270)
>   at 
> org.apache.hadoop.ozone.container.common.TestEndPoint.testHeartbeatTaskToInvalidNode(TestEndPoint.java:284)
> {code}






[jira] [Updated] (HDFS-11425) Ozone : add client-facing container APIs and container references

2017-03-01 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11425:
--
Attachment: HDFS-11425-HDFS-7240.006.patch

The v006 patch fixes the javadoc and checkstyle warnings. The license warnings 
are due to files generated in tests, and the findbugs warnings are due to 
protobuf-generated files. The failed tests are unrelated.

> Ozone : add client-facing container APIs and container references
> -
>
> Key: HDFS-11425
> URL: https://issues.apache.org/jira/browse/HDFS-11425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11425-HDFS-7240.001.patch, 
> HDFS-11425-HDFS-7240.002.patch, HDFS-11425-HDFS-7240.003.patch, 
> HDFS-11425-HDFS-7240.004.patch, HDFS-11425-HDFS-7240.005.patch, 
> HDFS-11425-HDFS-7240.006.patch
>
>
> This JIRA adds the container APIs exposed to users, such as create container, 
> delete container, etc.






[jira] [Commented] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-03-01 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890957#comment-15890957
 ] 

Benoy Antony commented on HDFS-11384:
-

Sleeping inside the *synchronized* block should be avoided, as it will prevent 
other threads from obtaining the lock while the thread is sleeping. 
One tradeoff of sleeping for a fixed versus a variable time is that the code 
gets complicated. Since the delay is not applied by default, it is okay to sleep 
for a fixed interval after getBlocks(). 
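
In other words, something along these lines (a sketch of the pattern only; the 
method and option names are assumed, not the actual patch):
{code}
// Hold the lock only for the guarded work; apply the fixed delay after
// releasing it so other threads can acquire the lock in the meantime.
BlocksWithLocations blocks;
synchronized (this) {
  blocks = nnc.getBlocks(datanode, size);
}
if (getBlocksDelayMs > 0) {        // hypothetical option, 0 (off) by default
  Thread.sleep(getBlocksDelayMs);  // outside the synchronized block;
}                                  // InterruptedException handled by the caller
{code}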

> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> -
>
> Key: HDFS-11384
> URL: https://issues.apache.org/jira/browse/HDFS-11384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.3
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: balancer.day.png, balancer.week.png, HDFS-11384.001.patch
>
>
> Running the balancer on a Hadoop cluster which has more than 3000 DataNodes 
> can cause the NameNode's rpc.CallQueueLength to spike. We observed that this 
> situation could cause HBase cluster failures due to RegionServer WAL timeouts.






[jira] [Comment Edited] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-03-01 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890957#comment-15890957
 ] 

Benoy Antony edited comment on HDFS-11384 at 3/1/17 8:17 PM:
-

Sleeping inside the *synchronized* block should be avoided, as it will prevent 
other threads from obtaining the lock while the thread is sleeping. 
One tradeoff of sleeping for a fixed versus a variable time is that the code 
gets complicated. Since the delay is not applied by default, it is okay to sleep 
for a fixed interval after getBlocks(). 


was (Author: benoyantony):
Sleeping inside the *Synchronized* block should be avoided as it will lock 
prevent other threads from obtaining the lock while the thread is sleeping. 
One tradeoff in sleeping fixed vs variable time is that code gets complicated. 
Since by default, the delay is not applied, it is okay to sleep for a fixed 
interval after getBlocks(). 

> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> -
>
> Key: HDFS-11384
> URL: https://issues.apache.org/jira/browse/HDFS-11384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.3
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: balancer.day.png, balancer.week.png, HDFS-11384.001.patch
>
>
> Running the balancer on a Hadoop cluster which has more than 3000 DataNodes 
> can cause the NameNode's rpc.CallQueueLength to spike. We observed that this 
> situation could cause HBase cluster failures due to RegionServer WAL timeouts.






[jira] [Updated] (HDFS-11447) Ozone: SCM: Send node report to SCM with heartbeat

2017-03-01 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11447:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~xyao] Thank you for the contribution. I have committed this to the feature 
branch. For one of the test failures, which is not related to this patch, I have 
filed HDFS-11480.

> Ozone: SCM: Send node report to SCM with heartbeat
> --
>
> Key: HDFS-11447
> URL: https://issues.apache.org/jira/browse/HDFS-11447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-11447-HDFS-7240.001.patch, 
> HDFS-11447-HDFS-7240.002.patch, HDFS-11447-HDFS-7240.003.patch
>
>
> The storage utilization information on datanodes should be reported to SCM to 
> help decide container allocation.






[jira] [Created] (HDFS-11480) Ozone: TestEndpoint task failure

2017-03-01 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11480:
---

 Summary: Ozone: TestEndpoint task failure
 Key: HDFS-11480
 URL: https://issues.apache.org/jira/browse/HDFS-11480
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
 Fix For: HDFS-7240


During a test run, it seems that the TestEndPoint test failed with a null 
pointer access:
{code}
Running org.apache.hadoop.ozone.container.common.TestEndPoint
Tests run: 13, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 4.413 sec <<< 
FAILURE! - in org.apache.hadoop.ozone.container.common.TestEndPoint
testHeartbeatTaskToInvalidNode(org.apache.hadoop.ozone.container.common.TestEndPoint)
  Time elapsed: 0.029 sec  <<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.hadoop.ozone.container.common.states.endpoint.HeartbeatEndpointTask.call(HeartbeatEndpointTask.java:93)
at 
org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:262)
at 
org.apache.hadoop.ozone.container.common.TestEndPoint.heartbeatTaskHelper(TestEndPoint.java:270)
at 
org.apache.hadoop.ozone.container.common.TestEndPoint.testHeartbeatTaskToInvalidNode(TestEndPoint.java:284)
{code}






[jira] [Updated] (HDFS-11474) Ozone: TestContainerMapping needs to cleanup levelDB files

2017-03-01 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11474:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~xyao] Thank you for the contribution.  I have committed this to the feature 
branch.
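
For anyone hitting similar asflicense failures elsewhere, the usual fix is a 
teardown that removes the generated DB directories, along these lines (a sketch; 
{{testDir}} stands for whatever directory the test actually used):
{code}
// Sketch: delete the LevelDB files the test created so the asflicense
// check does not flag the generated files.
@After
public void cleanup() throws IOException {
  FileUtils.deleteDirectory(new File(testDir));  // org.apache.commons.io
}
{code}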

> Ozone: TestContainerMapping needs to cleanup levelDB files
> --
>
> Key: HDFS-11474
> URL: https://issues.apache.org/jira/browse/HDFS-11474
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-11474-HDFS-7240.001.patch
>
>
> Currently some tests don't delete the LevelDB database after the tests are 
> done. This causes an asflicense check failure. This JIRA tracks that issue so 
> that we clean up the DB directories properly during exit.






[jira] [Commented] (HDFS-9807) Add an optional StorageID to writes

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890895#comment-15890895
 ] 

Hadoop QA commented on HDFS-9807:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 15s{color} | {color:orange} root: The patch generated 40 new + 1766 
unchanged - 27 fixed = 1806 total (was 1793) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 5 new + 7 
unchanged - 0 fixed = 12 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.server.datanode.DataXceiver.replaceBlock(ExtendedBlock, 
StorageType, Token, String, DatanodeInfo, String) may fail to close stream  At 
DataXceiver.java:String) may fail to close stream  At DataXceiver.java:[line 
1165] |
|  |  Dead store to storageType in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.getNextTransientVolume(long)
  At 
FsVolumeList.java:org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.getNextTransientVolume(long)
  At FsVolumeList.java:[line 135] |
| Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | 

[jira] [Commented] (HDFS-11447) Ozone: SCM: Send node report to SCM with heartbeat

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890887#comment-15890887
 ] 

Hadoop QA commented on HDFS-11447:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 9 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 54s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 103 unchanged - 
1 fixed = 104 total (was 104) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.ozone.web.TestOzoneVolumes |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.ozone.container.common.TestEndPoint |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.TestAllocateContainer |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.TestOzoneWebAccess |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11447 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855418/HDFS-11447-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux f89d5233a917 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Comment Edited] (HDFS-11395) RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the Exception thrown from NameNode

2017-03-01 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890867#comment-15890867
 ] 

Jing Zhao edited comment on HDFS-11395 at 3/1/17 7:27 PM:
--

Thanks for working on this, [~nandakumar131]. I agree we should not directly 
throw a MultiException. But I have a similar concern to Arpit's, i.e., we should 
not simply throw the first exception. I think we should:
# Not mix detailed exception handling logic into 
{{RequestHedgingProxyProvider}}. In {{RequestHedgingProxyProvider}}, we only 
need to get the RemoteException from the {{ExecutionException}} and put all the 
exceptions into {{badResults}}. There is no need for special handling of 
StandbyException etc. there; those should be handled by 
{{RetryInvocationHandler#newRetryInfo}}.
# Then, in {{RetryInvocationHandler#newRetryInfo}}, we should let this method 
return both the RetryInfo and the exception to throw from the MultiException. 
These two pieces of information should come from the same internal exception 
inside the MultiException.


was (Author: jingzhao):
Thanks for working on this, [~nandakumar131]. I agree we should not directly 
throw a MultiException. But I have a similar concern to Arpit's, i.e., we should 
not simply throw the first exception. I think we should:
# Not mix detailed exception handling logic into 
{{RequestHedgingProxyProvider}}. In {{RequestHedgingProxyProvider}}, we only 
need to get the RemoteException from the {{ExecutionException}} and put all the 
exceptions into {{badResults}}. There is no need for special handling of 
StandbyException etc. there; those should be handled by 
{{RetryInvocationHandler#newRetryInfo}}.
# Then, in {{RetryInvocationHandler#newRetryInfo}}, we should let this method 
return both the RetryInfo and the exception to throw from the MultiException. 
These two pieces of information should come from the same internal exception 
inside the MultiException.

> RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the 
> Exception thrown from NameNode
> 
>
> Key: HDFS-11395
> URL: https://issues.apache.org/jira/browse/HDFS-11395
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-11395.000.patch, HDFS-11395.001.patch
>
>
> When using RequestHedgingProxyProvider, in case of an exception (like 
> FileNotFoundException) from the active NameNode, 
> {{RequestHedgingProxyProvider#RequestHedgingInvocationHandler.invoke}} 
> receives an {{ExecutionException}}, since we use {{CompletionService}} for the 
> call. The ExecutionException is put into a map and wrapped with 
> {{MultiException}}.
> So for a FileNotFoundException, the client receives 
> {{MultiException(Map(ExecutionException(InvocationTargetException(RemoteException(FileNotFoundException)}}
> This causes problems for clients that handle RemoteExceptions.






[jira] [Commented] (HDFS-11395) RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the Exception thrown from NameNode

2017-03-01 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890867#comment-15890867
 ] 

Jing Zhao commented on HDFS-11395:
--

Thanks for working on this, [~nandakumar131]. I agree we should not directly 
throw a MultiException. But I have a similar concern to Arpit's, i.e., we should 
not simply throw the first exception. I think we should:
# Not mix detailed exception handling logic into 
{{RequestHedgingProxyProvider}}. In {{RequestHedgingProxyProvider}}, we only 
need to get the RemoteException from the {{ExecutionException}} and put all the 
exceptions into {{badResults}}. There is no need for special handling of 
StandbyException etc. there; those should be handled by 
{{RetryInvocationHandler#newRetryInfo}}.
# Then, in {{RetryInvocationHandler#newRetryInfo}}, we should let this method 
return both the RetryInfo and the exception to throw from the MultiException. 
These two pieces of information should come from the same internal exception 
inside the MultiException.
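
Concretely, the unwrapping in {{RequestHedgingProxyProvider}} could look roughly 
like this (a sketch of the suggestion with assumed variable names, not a 
committed change):
{code}
try {
  return resultFuture.get();
} catch (ExecutionException ee) {
  // Unwrap down to the RemoteException before recording it, leaving the
  // retry decision to RetryInvocationHandler.
  Throwable cause = ee.getCause();          // the InvocationTargetException
  if (cause instanceof InvocationTargetException) {
    cause = cause.getCause();               // typically the RemoteException
  }
  badResults.put(proxyName, (Exception) cause);
}
{code}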

> RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the 
> Exception thrown from NameNode
> 
>
> Key: HDFS-11395
> URL: https://issues.apache.org/jira/browse/HDFS-11395
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-11395.000.patch, HDFS-11395.001.patch
>
>
> When using RequestHedgingProxyProvider, in case of an exception (like 
> FileNotFoundException) from the active NameNode, 
> {{RequestHedgingProxyProvider#RequestHedgingInvocationHandler.invoke}} 
> receives an {{ExecutionException}}, since we use {{CompletionService}} for the 
> call. The ExecutionException is put into a map and wrapped with 
> {{MultiException}}.
> So for a FileNotFoundException, the client receives 
> {{MultiException(Map(ExecutionException(InvocationTargetException(RemoteException(FileNotFoundException)}}
> This causes problems for clients that handle RemoteExceptions.






[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2017-03-01 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890866#comment-15890866
 ] 

Wei-Chiu Chuang commented on HDFS-7285:
---

The Hadoop EC framework is designed to be pluggable; however, the native EC 
codec loader assumes the ISA-L library (ErasureCodeNative.java), so that would 
probably need to be changed.
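
For example, the gating today looks roughly like this (assuming the current 
{{ErasureCodeNative}} API on trunk):
{code}
// Sketch: the native coder path is gated on the ISA-L-backed loader.
if (ErasureCodeNative.isNativeCodeLoaded()) {
  // use the native (ISA-L) Reed-Solomon coder
} else {
  // fall back to the pure-Java coder implementation
}
{code}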

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> fsimage-analysis-20150105.pdf, HDFS-7285-Consolidated-20150911.patch, 
> HDFS-7285-initial-PoC.patch, HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, HDFS-bistriped.patch, 
> HDFS-EC-merge-consolidated-01.patch, HDFS-EC-Merge-PoC-20150624.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without 
> sacrificing data reliability, compared to the existing HDFS 3-replica 
> approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
> the loss of 4 blocks, with the storage overhead being only 40%. This makes EC 
> a quite attractive alternative for big data storage, particularly for cold 
> data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. The drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that are not intended to be appended anymore; 3) the pure Java EC 
> coding implementation is extremely slow in practical use. Due to these, it 
> might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design lays the EC feature on top of the 
> storage type support and is designed to be compatible with existing HDFS 
> features like caching, snapshots, encryption, and high availability. This 
> design will also support different EC coding schemes, implementations, and 
> policies for different deployment scenarios. By utilizing advanced libraries 
> (e.g. the Intel ISA-L library), an implementation can greatly improve the 
> performance of EC encoding/decoding and make the EC solution even more 
> attractive. We will post the design document soon. 






[jira] [Commented] (HDFS-11479) Socket re-use address option should be used in SimpleUdpServer

2017-03-01 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890864#comment-15890864
 ] 

Jitendra Nath Pandey commented on HDFS-11479:
-

+1

> Socket re-use address option should be used in SimpleUdpServer
> --
>
> Key: HDFS-11479
> URL: https://issues.apache.org/jira/browse/HDFS-11479
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11479.001.patch
>
>
> Nfs gateway restart can fail because of bind error in SimpleUdpServer.
> re-use address option should be used in SimpleUdpServer to so that socket 
> bind can happen when it is in TIME_WAIT state
> {noformat}
> 2017-02-28 04:19:53,495 FATAL mount.MountdBase 
> (MountdBase.java:startUDPServer(66)) - Failed to start the UDP server.
> org.jboss.netty.channel.ChannelException: Failed to bind to: 
> 0.0.0.0/0.0.0.0:4242
> at 
> org.jboss.netty.bootstrap.ConnectionlessBootstrap.bind(ConnectionlessBootstrap.java:204)
> at 
> org.apache.hadoop.oncrpc.SimpleUdpServer.run(SimpleUdpServer.java:68)
> at 
> org.apache.hadoop.mount.MountdBase.startUDPServer(MountdBase.java:64)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:97)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:691)
> at 
> sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
> at 
> org.jboss.netty.channel.socket.nio.NioDatagramPipelineSink.bind(NioDatagramPipelineSink.java:129)
> at 
> org.jboss.netty.channel.socket.nio.NioDatagramPipelineSink.eventSunk(NioDatagramPipelineSink.java:77)
> at org.jboss.netty.channel.Channels.bind(Channels.java:561)
> at 
> org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:189)
> at 
> org.jboss.netty.bootstrap.ConnectionlessBootstrap.bind(ConnectionlessBootstrap.java:198)
> ... 11 more
> {noformat}






[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2017-03-01 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890862#comment-15890862
 ] 

Zhe Zhang commented on HDFS-7285:
-

Thanks [~sw0rdf1sh]. In addition to what Chris pointed out, you might also want 
to take a look at the discussions under HADOOP-11264; since your library is at 
the RS layer, it sounds like it could be plugged into our coder framework.

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> fsimage-analysis-20150105.pdf, HDFS-7285-Consolidated-20150911.patch, 
> HDFS-7285-initial-PoC.patch, HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, HDFS-bistriped.patch, 
> HDFS-EC-merge-consolidated-01.patch, HDFS-EC-Merge-PoC-20150624.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without 
> sacrificing data reliability, compared to the existing HDFS 3-replica 
> approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
> the loss of 4 blocks, with the storage overhead being only 40%. This makes EC 
> a quite attractive alternative for big data storage, particularly for cold 
> data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. The drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that are not intended to be appended anymore; 3) the pure Java EC 
> coding implementation is extremely slow in practical use. Due to these, it 
> might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design lays the EC feature on top of the 
> storage type support and is designed to be compatible with existing HDFS 
> features like caching, snapshots, encryption, and high availability. This 
> design will also support different EC coding schemes, implementations, and 
> policies for different deployment scenarios. By utilizing advanced libraries 
> (e.g. the Intel ISA-L library), an implementation can greatly improve the 
> performance of EC encoding/decoding and make the EC solution even more 
> attractive. We will post the design document soon. 






[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2017-03-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890845#comment-15890845
 ] 

Chris Douglas commented on HDFS-7285:
-

That sounds exciting, [~sw0rdf1sh]. There's a wiki 
[here|https://wiki.apache.org/hadoop/HowToContribute] that describes the patch 
process. If you need help, please don't hesitate to reach out on 
[hdfs-dev|http://hadoop.apache.org/mailing_lists.html#HDFS].

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> fsimage-analysis-20150105.pdf, HDFS-7285-Consolidated-20150911.patch, 
> HDFS-7285-initial-PoC.patch, HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, HDFS-bistriped.patch, 
> HDFS-EC-merge-consolidated-01.patch, HDFS-EC-Merge-PoC-20150624.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 
> any 4 blocks with a storage overhead of only 40%. This makes EC a very 
> attractive alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but was removed as of Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to run encoding and decoding tasks; 2) it can only be used for 
> cold files that will not be appended to anymore; 3) its pure-Java EC coding 
> implementation is extremely slow in practical use. Given these drawbacks, it 
> might not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design that builds EC into HDFS, 
> free of external dependencies, self-contained, and independently maintained. 
> The design layers the EC feature on top of the storage type support and is 
> intended to be compatible with existing HDFS features such as caching, 
> snapshots, encryption, and high availability. It will also support different 
> EC coding schemes, implementations, and policies for different deployment 
> scenarios. By utilizing advanced libraries (e.g. the Intel ISA-L library), an 
> implementation can greatly improve the performance of EC encoding/decoding, 
> making the EC solution even more attractive. We will post the design document 
> soon. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8132) Namenode Startup Failing When we add Jcarder.jar in class Path

2017-03-01 Thread vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890833#comment-15890833
 ] 

vijay commented on HDFS-8132:
-

[~brahma], is this issue resolved? Where can I access the latest JCarder build 
that works with Java 7?

> Namenode Startup Failing When we add Jcarder.jar in class Path
> --
>
> Key: HDFS-8132
> URL: https://issues.apache.org/jira/browse/HDFS-8132
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
>  *{color:blue}NameNode Startup Args{color}*   (just added the JCarder args)
> exec /home/hdfs/jdk1.7.0_72/bin/java -Dproc_namenode -Xmx1000m 
> -Djava.net.preferIPv4Stack=true 
> -Dhadoop.log.dir=/opt/ClusterSetup/Hadoop2.7/install/hadoop/namenode/logs 
> -Dhadoop.log.file=hadoop.log 
> -Dhadoop.home.dir=/opt/ClusterSetup/Hadoop2.7/install/hadoop/namenode 
> -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console 
> -Djava.library.path=/opt/ClusterSetup/Hadoop2.7/install/hadoop/namenode/lib/native
>  -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
> -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender 
> {color:red}-javaagent:/opt/Jcarder/jcarder.jar=outputdir=/opt/Jcarder/Output/nn-jcarder{color}
>  -Dhadoop.security.logger=INFO,NullAppender 
> org.apache.hadoop.hdfs.server.namenode.NameNode
> Setting outputdir to /opt/Jcarder/Output/nn-jcarder
> Starting JCarder (2.0.0/6) agent
> Opening for writing: /opt/Jcarder/Output/nn-jcarder/jcarder_events.db
> Opening for writing: /opt/Jcarder/Output/nn-jcarder/jcarder_contexts.db
> Not instrumenting standard library classes (AWT, Swing, etc.)
> JCarder agent initialized
>  *{color:red}ERROR{color}* 
> {noformat}
> Exception in thread "main" java.lang.VerifyError: Expecting a stackmap frame 
> at branch target 21
> Exception Details:
>   Location:
> 
> org/apache/hadoop/hdfs/server/namenode/NameNode.createHAState(Lorg/apache/hadoop/hdfs/server/common/HdfsServerConstants$StartupOption;)Lorg/apache/hadoop/hdfs/server/namenode/ha/HAState;
>  @4: ifeq
>   Reason:
> Expected stackmap frame at this location.
>   Bytecode:
> 000: 2ab4 02d2 9900 112b b203 08a5 000a 2bb2
> 010: 030b a600 07b2 030d b0b2 030f b0   
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
>   at java.lang.Class.getMethod0(Class.java:2856)
>   at java.lang.Class.getMethod(Class.java:1668)
>   at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
> {noformat}
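
For background on the VerifyError above: Java 7 introduced the split bytecode 
verifier, which requires StackMapTable frames in class files, and bytecode 
rewritten by agents built on pre-Java-7 instrumentation libraries (as this 
JCarder build appears to be) does not emit them. Assuming no rebuilt agent is 
available, a common workaround is to relax verification at JVM startup:

{noformat}
# Java 7 only: fall back to the old type-inferencing verifier
-XX:-UseSplitVerifier
# or, as a last resort, disable bytecode verification entirely
-noverify
{noformat}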



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11474) Ozone: TestContainerMapping needs to cleanup levelDB files

2017-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890797#comment-15890797
 ] 

Hadoop QA commented on HDFS-11474:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m  
1s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 9 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.scm.TestAllocateContainer |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.TestOzoneWebAccess |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11474 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855407/HDFS-11474-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 637b635e204b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 00684d6 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18490/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18490/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18490/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-11416) Refactor out system default erasure coding policy

2017-03-01 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890757#comment-15890757
 ] 

Wei-Chiu Chuang commented on HDFS-11416:


This patch still applies to trunk with no conflicts after HDFS-11428, so I 
re-triggered Jenkins to see how it goes.

> Refactor out system default erasure coding policy
> -
>
> Key: HDFS-11416
> URL: https://issues.apache.org/jira/browse/HDFS-11416
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11416.001.patch
>
>
> As discussed on HDFS-7859, the system default EC policy is mostly a relic 
> from development when the system only supported a single global policy. Now, 
> we support multiple policies, and the system default policy is mostly used by 
> tests.
> We should refactor to remove this concept.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11412) Maintenance minimum replication config value allowable range should be {0 - DefaultReplication}

2017-03-01 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890705#comment-15890705
 ] 

Ming Ma commented on HDFS-11412:


bq. this particular range can have adverse effects as it can force replication 
to a larger number of blocks (to honor minReplicationToBeInMaintenance) even 
for files that aren't created with a higher replication factor.
It can choose to only force replication up to the replication factor of the 
file. So for most files, which have the default replication factor, the lesser 
of {default replication factor, minReplicationToBeInMaintenance} will be used 
as the min replication value during maintenance. The impact should therefore 
be similar to setting minReplicationToBeInMaintenance to the default 
replication factor. This is also consistent with how the following case would 
be handled: set minReplicationToBeInMaintenance to the default replication 
factor; for files with a replication factor of 2, the value 2 will be used as 
the min replication value.

bq. Maybe we need to return the max or min value based on how the block 
replication is set compared to the default replication
Maybe we can modify getMinReplicationToBeInMaintenance to return the lesser of 
{file replication factor, minReplicationToBeInMaintenance}, as sketched below.
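
A minimal sketch of that suggestion (parameter names and the signature are 
assumptions for illustration, not the actual patch):

{noformat}
// Effective minimum replication while a replica is in maintenance: the
// lesser of the file's replication factor and the configured
// dfs.namenode.maintenance.replication.min value.
short getMinReplicationToBeInMaintenance(short fileReplication,
    short configuredMinMaintenanceR) {
  return (short) Math.min(fileReplication, configuredMinMaintenanceR);
}
{noformat}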

> Maintenance minimum replication config value allowable range should be {0 - 
> DefaultReplication}
> ---
>
> Key: HDFS-11412
> URL: https://issues.apache.org/jira/browse/HDFS-11412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11412.01.patch
>
>
> Currently the allowed value range for Maintenance Min Replication 
> {{dfs.namenode.maintenance.replication.min}} is 0 to 
> {{dfs.namenode.replication.min}} (default=1). Users who do not want to 
> affect cluster performance would wish to set Maintenance Min Replication to 
> a number greater than 1, say 2. In the current design this is possible, but 
> only after raising the NameNode-level Block Min Replication to 2, which 
> could increase the overall latency of client writes.
> Technically speaking, we should allow Maintenance Min Replication to be in 
> the range 0 to dfs.replication.max.  
> * There is always the config value of 0 for users not wanting any 
> availability/performance guarantees during maintenance. 
> * And performance-centric workloads can still get maintenance done without 
> major disruption by using a larger Maintenance Min Replication. Setting the 
> upper limit to dfs.replication.max could be overkill, as it could trigger 
> re-replication, which Maintenance State is trying to avoid. So we could 
> allow {{dfs.namenode.maintenance.replication.min}} in the range {{0 to 
> dfs.replication}}
> {noformat}
> if (minMaintenanceR < 0) {
>   throw new IOException("Unexpected configuration parameters: "
>   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>   + " = " + minMaintenanceR + " < 0");
> }
> if (minMaintenanceR > minR) {
>   throw new IOException("Unexpected configuration parameters: "
>   + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
>   + " = " + minMaintenanceR + " > "
>   + DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY
>   + " = " + minR);
> }
> {noformat}
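
A sketch of the relaxed check that the description proposes, validating 
against the default replication factor instead of 
{{dfs.namenode.replication.min}} ({{defaultR}} is an assumed local variable 
holding the configured {{dfs.replication}} value):

{noformat}
if (minMaintenanceR < 0) {
  throw new IOException("Unexpected configuration parameters: "
      + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
      + " = " + minMaintenanceR + " < 0");
}
if (minMaintenanceR > defaultR) {
  throw new IOException("Unexpected configuration parameters: "
      + DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY
      + " = " + minMaintenanceR + " > "
      + DFSConfigKeys.DFS_REPLICATION_KEY + " = " + defaultR);
}
{noformat}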



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


