[jira] [Commented] (HDFS-12308) Erasure Coding: Provide DistributedFileSystem & DFSClient API to return the effective EC policy on a directory or file, including the replication policy

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288849#comment-16288849
 ] 

genericqa commented on HDFS-12308:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 13 new 
+ 396 unchanged - 0 fixed = 409 total (was 396) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m  
5s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 1 
unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Possible null pointer dereference of ecpi in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager.getByID(byte) 
 Dereferenced at ErasureCodingPolicyManager.java:ecpi in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager.getByID(byte) 
 Derefer

[jira] [Commented] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288834#comment-16288834
 ] 

genericqa commented on HDFS-9806:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 28 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 13s{color} 
| {color:red} root generated 1 new + 1236 unchanged - 0 fixed = 1237 total (was 
1236) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 32s{color} | {color:orange} root: The patch generated 90 new + 2122 
unchanged - 15 fixed = 2212 total (was 2137) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {c

[jira] [Comment Edited] (HDFS-12918) NameNode fails to start after upgrade - Missing state in ECPolicy Proto

2017-12-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288591#comment-16288591
 ] 

Xiao Chen edited comment on HDFS-12918 at 12/13/17 6:58 AM:


We have an upgrade-incompatible fix landed in 3.0 at 
e565b5277d5b890dad107fe85e295a3907e4bfc1. The fix is necessary: it verifies 
the EC policy state when loading the FSImage. This issue has nothing to do with 
the default value of the ECPolicyState field in the ErasureCodingPolicyProto. 
While the ECPolicyState field is optional in the ECPolicyProto message for 
over-the-wire communication, it is mandatory in the FSImage for EC files. I hope 
the upgrade-incompatible changes before the 3.0 GA are ok. Please let me know if 
you have other thoughts. 


was (Author: manojg):
We have an upgrade-incompatible fix landed in 3.0 at 
e565b5277d5b890dad107fe85e295a3907e4bfc1. The fix is necessary: it verifies 
the EC policy state when loading the FSImage. This issue has nothing to do with 
the default value of the ECPolicyState field in the ErasureCodingPolicyProto. 
While the ECPolicyState field is optional in the ECPolicyProto message for 
over-the-wire communication, it is mandatory in the FSImage for EC files. I hope 
the upgrade-incompatible changes before the C6 GA are ok. Please let me know if 
you have other thoughts. 

> NameNode fails to start after upgrade - Missing state in ECPolicy Proto 
> 
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional: it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will force existing HDFS 
> installations that store metadata in protobufs to reformat.
> It looks like a simple mistake that was overlooked in code review.
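For illustration only, here is a minimal sketch (not the committed fix) of how an FSImage loader could guard against the proto-level ENABLED default when the state field is absent. It assumes the standard protobuf-java accessors generated for the optional field ({{hasState()}}/{{getState()}}); the generated class and helper names are placeholders.
{code:java}
// Hypothetical conversion helper; ErasureCodingPolicyProto stands for the
// generated protobuf class, ErasureCodingPolicyState for the HDFS enum.
static ErasureCodingPolicyState stateFromProto(ErasureCodingPolicyProto proto) {
  if (!proto.hasState()) {
    // Images written before HDFS-12258 carry no state field; fall back to the
    // documented default (DISABLED) instead of trusting [default = ENABLED].
    return ErasureCodingPolicyState.DISABLED;
  }
  // The enum constant names match, so map the proto enum by name.
  return ErasureCodingPolicyState.valueOf(proto.getState().name());
}
{code}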



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12921) DFS.setReplication should throw exception on EC files

2017-12-12 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HDFS-12921.
--
   Resolution: Won't Fix
Fix Version/s: 3.0.0

It was a no-op in {{FSDirAttrOp#unprotectedSetReplication()}}

> DFS.setReplication should throw exception on EC files
> -
>
> Key: HDFS-12921
> URL: https://issues.apache.org/jira/browse/HDFS-12921
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
> Fix For: 3.0.0
>
>
> This was checked in {{o.a.h.fs.shell.SetReplication#processPath}}; however, 
> since {{DistributedFileSystem#setReplication()}} is also a public API, we 
> should move the check into {{DistributedFileSystem}} to prevent calling 
> this API directly on an EC file.
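As a rough illustration of a client-side guard of this kind (a sketch only, not the eventual patch; it assumes {{DistributedFileSystem#getErasureCodingPolicy(Path)}} returns null for replicated files):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

final class SetReplicationGuard {
  // Hypothetical wrapper: fail fast on EC files instead of letting
  // FSDirAttrOp#unprotectedSetReplication() silently no-op on the NameNode.
  static boolean setReplicationChecked(DistributedFileSystem dfs, Path src,
      short replication) throws IOException {
    if (dfs.getErasureCodingPolicy(src) != null) {
      throw new IOException("Cannot set replication on erasure-coded file " + src);
    }
    return dfs.setReplication(src, replication);
  }
}
{code}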



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288769#comment-16288769
 ] 

genericqa commented on HDFS-12917:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12917 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901605/HADOOP-12917.patch |
| Optional Tests |  asflicense  unit  xml  |
| uname | Linux cff9478a5368 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7efc4f7 |
| maven | version: Apache Maven 3.3.9 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22376/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22376/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
> Attachments: HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, there are two cases whose descriptions should be 
> "getPolicy : get EC policy information at specified path, which have an EC 
> Policy".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12918) NameNode fails to start after upgrade - Missing state in ECPolicy Proto

2017-12-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288591#comment-16288591
 ] 

Manoj Govindassamy edited comment on HDFS-12918 at 12/13/17 5:46 AM:
-

We have an upgrade-incompatible fix landed in 3.0 at 
e565b5277d5b890dad107fe85e295a3907e4bfc1. The fix is necessary: it verifies 
the EC policy state when loading the FSImage. This issue has nothing to do with 
the default value of the ECPolicyState field in the ErasureCodingPolicyProto. 
While the ECPolicyState field is optional in the ECPolicyProto message for 
over-the-wire communication, it is mandatory in the FSImage for EC files. I hope 
the upgrade-incompatible changes before the C6 GA are ok. Please let me know if 
you have other thoughts. 


was (Author: manojg):
We have an upgrade-incompatible fix landed in C6 at 
e565b5277d5b890dad107fe85e295a3907e4bfc1. The fix is necessary: it verifies 
the EC policy state when loading the FSImage. This issue has nothing to do with 
the default value of the ECPolicyState field in the ErasureCodingPolicyProto. 
While the ECPolicyState field is optional in the ECPolicyProto message for 
over-the-wire communication, it is mandatory in the FSImage for EC files. I hope 
the upgrade-incompatible changes before the C6 GA are ok. Please let me know if 
you have other thoughts. 

> NameNode fails to start after upgrade - Missing state in ECPolicy Proto 
> 
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional: it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will force existing HDFS 
> installations that store metadata in protobufs to reformat.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12923) DFS.concat should throw exception if files have different EC policies.

2017-12-12 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12923:
-
Component/s: erasure-coding

> DFS.concat should throw exception if files have different EC policies. 
> ---
>
> Key: HDFS-12923
> URL: https://issues.apache.org/jira/browse/HDFS-12923
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Priority: Critical
>
> {{DFS#concat}} appends blocks from different files to a single file. However, 
> if these files have different EC policies, or are a mix of replicated and EC 
> files, the resulting file would be problematic to read, because the EC codec 
> is defined on the INode instead of on a block. 
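To make the failure mode concrete, here is a hedged client-side sketch of such a check (illustrative only; the real check would presumably live in the NameNode, and {{getErasureCodingPolicy(Path)}} returning null for replicated files is assumed):
{code:java}
import java.io.IOException;
import java.util.Objects;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

final class ConcatGuard {
  // Refuse to concat when the target and sources do not share one EC policy
  // (null meaning plain replication), since the codec is tracked per INode.
  static void concatSamePolicy(DistributedFileSystem dfs, Path trg, Path[] srcs)
      throws IOException {
    ErasureCodingPolicy target = dfs.getErasureCodingPolicy(trg);
    for (Path src : srcs) {
      if (!Objects.equals(target, dfs.getErasureCodingPolicy(src))) {
        throw new IOException("concat: EC policy of " + src
            + " differs from that of target " + trg);
      }
    }
    dfs.concat(trg, srcs);
  }
}
{code}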



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12923) DFS.concat should throw exception if files have different EC policies.

2017-12-12 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12923:


 Summary: DFS.concat should throw exception if files have different 
EC policies. 
 Key: HDFS-12923
 URL: https://issues.apache.org/jira/browse/HDFS-12923
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Lei (Eddy) Xu
Priority: Critical


{{DFS#concat}} appends blocks from different files to a single file. However, 
if these files have different EC policies, or are a mix of replicated and EC 
files, the resulting file would be problematic to read, because the EC codec is 
defined on the INode instead of on a block. 





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-12 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288728#comment-16288728
 ] 

Surendra Singh Lilhore edited comment on HDFS-12833 at 12/13/17 5:29 AM:
-

Committed to branch-2, branch-2.9 and branch-2.8. Thanks [~usharani] for the 
contribution.


was (Author: surendrasingh):
Committed to branch-2 and branch-2.8. Thanks [~usharani] for the contribution.

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 2.8.0
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HDFS-12833-branch-2.001.patch, 
> HDFS-12833-branch-2.committed.patch, HDFS-12833.001.patch, HDFS-12833.patch
>
>
> Basically, the -delete option is applicable only with the -update or -overwrite 
> options. I tried it as per the usage message and am getting the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB, accepts
>bandwidth as a fraction.
>  -blocksperchunk  If set to a positive value, fileswith more
>blocks than this value will be split into
>chunks of  blocks to be
>transferred in parallel, and reassembled on
>the destination. By default,
> is 0 and the files will be
>transmitted in their entirety without
>splitting. This switch is only applicable
>when the source file system implements
>getBlockLocations method and the target
>file system implements concat method
>  -copybuffersize  Size of the copy buffer to use. By default
> is 8192B.
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.
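For reference, a small hedged sketch of the behavior described above, driven through {{org.apache.hadoop.tools.OptionsParser#parse(String[])}} (the entry point visible in the stack trace); the paths are placeholders:
{code:java}
import org.apache.hadoop.tools.DistCpOptions;
import org.apache.hadoop.tools.OptionsParser;

final class DistCpDeleteUsage {
  public static void main(String[] args) {
    try {
      // -delete on its own is rejected during option validation.
      OptionsParser.parse(
          new String[] {"-delete", "/Dir1/distcpdir", "/Dir/distcpdir5"});
    } catch (IllegalArgumentException e) {
      System.out.println("Rejected as expected: " + e.getMessage());
    }
    // Combined with -update (or -overwrite) the same arguments parse fine.
    DistCpOptions options = OptionsParser.parse(
        new String[] {"-update", "-delete", "/Dir1/distcpdir", "/Dir/distcpdir5"});
    System.out.println("Parsed: " + options);
  }
}
{code}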



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12917) Fix description errors in testErasureCodingConf.xml

2017-12-12 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-12917:
---
Status: Patch Available  (was: Open)

> Fix description errors in testErasureCodingConf.xml
> ---
>
> Key: HDFS-12917
> URL: https://issues.apache.org/jira/browse/HDFS-12917
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
> Attachments: HADOOP-12917.patch
>
>
> In testErasureCodingConf.xml, there are two cases whose descriptions should be 
> "getPolicy : get EC policy information at specified path, which have an EC 
> Policy".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12308) Erasure Coding: Provide DistributedFileSystem & DFSClient API to return the effective EC policy on a directory or file, including the replication policy

2017-12-12 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-12308:
---
Status: Patch Available  (was: Open)

> Erasure Coding: Provide DistributedFileSystem &  DFSClient API to return the 
> effective EC policy on a directory or file, including the replication policy
> -
>
> Key: HDFS-12308
> URL: https://issues.apache.org/jira/browse/HDFS-12308
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
> Environment: Provide DistributedFileSystem &  DFSClient API to return 
> the effective EC policy on a directory or file, including the replication 
> policy. The API names will be like {{getNominalErasureCodingPolicy(PATH)}} and 
> {{getAllNominalErasureCodingPolicies}}. 
>Reporter: SammiChen
>Assignee: chencan
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HADOOP-12308.patch
>
>
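A hypothetical sketch of the API shape implied by that description (the method names come from the description; the return types and placement on {{DistributedFileSystem}}/{{DFSClient}} are assumptions):
{code:java}
import java.io.IOException;
import java.util.Collection;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

interface NominalErasureCodingPolicies {
  // Effective (explicit or inherited) policy on a file or directory; the
  // replication "policy" is also reported rather than returning null.
  ErasureCodingPolicy getNominalErasureCodingPolicy(Path path) throws IOException;

  // All policies that can be in effect, including the replication policy.
  Collection<ErasureCodingPolicy> getAllNominalErasureCodingPolicies()
      throws IOException;
}
{code}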




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-12 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12833:
--
Affects Version/s: (was: 3.0.0-alpha1)
   2.8.0
Fix Version/s: 2.9.1

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 2.8.0
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HDFS-12833-branch-2.001.patch, 
> HDFS-12833-branch-2.committed.patch, HDFS-12833.001.patch, HDFS-12833.patch
>
>
> Basically, the -delete option is applicable only with the -update or -overwrite 
> options. I tried it as per the usage message and am getting the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB, accepts
>bandwidth as a fraction.
>  -blocksperchunk  If set to a positive value, fileswith more
>blocks than this value will be split into
>chunks of  blocks to be
>transferred in parallel, and reassembled on
>the destination. By default,
> is 0 and the files will be
>transmitted in their entirety without
>splitting. This switch is only applicable
>when the source file system implements
>getBlockLocations method and the target
>file system implements concat method
>  -copybuffersize  Size of the copy buffer to use. By default
> is 8192B.
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288725#comment-16288725
 ] 

Xiao Chen edited comment on HDFS-12910 at 12/13/17 5:15 AM:


Thanks for moving this forward, Nanda and Stephen! Latest patch looks pretty 
good to me. Also thanks Nanda for the understanding and Stephen for the new 
revs.

- Checkstyle warnings seem relevant, please fix those. Findbugs and unit test 
failures look unrelated here.

- bq. factor this out into one reusable method
Let's please do that. For readability it may be better to still do this on 2 
lines, so whoever reads the code here doesn't have to check the extracted method 
to figure out it's just rethrowing a BindException (as opposed to, say, throwing 
an IOE or RTE).
{code}
BindException newBe = appendAddressToBindException();
throw newBe;
{code}

- In the test, I think we should be good to just verify the port number is 
contained in the message, in case some funny DNS mapping happens on the 
Jenkins server running the tests.

- Also saw one other minor thing not introduced by this patch, but it would be 
great if we could fix that too.
{code}
  if (localAddr.getPort() != infoSocAddr.getPort()) {
    throw new RuntimeException("Unable to bind on specified info port in secure " +
        "context. Needed " + streamingAddr.getPort() + ", got " + ss.getLocalPort());
  }
{code}
The message should say {{... Needed " + infoSocAddr.getPort() ...}}.

To follow up a bit on the discussion:
Technically we could use a local var to temporarily save the current address 
being bound, and just log that in the finally to have just one finally block. 
Don't think that's necessary here though. Looking at the {{getSecureResources}} 
method, I think it really should have been broken down into 2 methods: 
openRpcPort which returns the {{ServerSocket ss}}, and openWebServerPort which 
returns the {{ServerSocketChannel httpChannel}}. Since it's already written 
this way, patch 4 should be good as-is so we don't unnecessarily change too 
many lines solely for a log message improvement. :)
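For concreteness, a minimal sketch of the reusable rethrow helper being discussed (the name {{appendAddressToBindException}} comes from this review thread; the exact signature is an assumption):
{code:java}
import java.net.BindException;
import java.net.InetSocketAddress;

final class BindExceptionUtil {
  // Wrap the original BindException so the message carries the address (and
  // therefore the port) while the original stack trace is kept as the cause.
  static BindException appendAddressToBindException(BindException e,
      InetSocketAddress addr) {
    BindException newBe = new BindException(e.getMessage() + " for address " + addr);
    newBe.initCause(e);
    return newBe;
  }
}
{code}
A caller would then do something like {{throw appendAddressToBindException(e, streamingAddr);}} in each catch block.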


was (Author: xiaochen):
Thanks for moving this forward, Nanda and Stephen! Latest patch looks pretty 
good to me. Also thanks Nanda for the understanding and Stephen for the new 
revs.

Checkstyle warnings seem relevant, please fix those. Findbugs and unit test 
failures look unrelated here.

bq. factor this out into one reusable method
Let's please do that. For readability it may be better to still do this on 2 
lines, so whoever reads the code here doesn't have to check the extracted method 
to figure out it's just rethrowing a BindException (as opposed to, say, throwing 
an IOE or RTE).
{code}
BindException newBe = appendAddressToBindException();
throw newBe;
{code}

Also saw one other minor thing not introduced by this patch, but it would be 
great if we could fix that too.
{code}
  if (localAddr.getPort() != infoSocAddr.getPort()) {
    throw new RuntimeException("Unable to bind on specified info port in secure " +
        "context. Needed " + streamingAddr.getPort() + ", got " + ss.getLocalPort());
  }
{code}
The message should say {{... Needed " + infoSocAddr.getPort() ...}}.

To follow up a bit on the discussion:
Technically we could use a local var to temporarily save the current address 
being bound, and just log that in the finally to have just one finally block. 
Don't think that's necessary here though. Looking at the {{getSecureResources}} 
method, I think it really should have been broken down into 2 methods: 
openRpcPort which returns the {{ServerSocket ss}}, and openWebServerPort which 
returns the {{ServerSocketChannel httpChannel}}. Since it's already written 
this way, patch 4 should be good as-is so we don't unnecessarily change too 
many lines solely for a log message improvement. :)

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch, HDFS-12910.004.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (eg the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.

[jira] [Updated] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-12 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12833:
--
   Resolution: Fixed
Fix Version/s: 2.8.4
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and branch-2.8. Thanks [~usharani] for the contribution.

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.8.4
>
> Attachments: HDFS-12833-branch-2.001.patch, 
> HDFS-12833-branch-2.committed.patch, HDFS-12833.001.patch, HDFS-12833.patch
>
>
> Basically, the -delete option is applicable only with the -update or -overwrite 
> options. I tried it as per the usage message and am getting the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB, accepts
>bandwidth as a fraction.
>  -blocksperchunk  If set to a positive value, fileswith more
>blocks than this value will be split into
>chunks of  blocks to be
>transferred in parallel, and reassembled on
>the destination. By default,
> is 0 and the files will be
>transmitted in their entirety without
>splitting. This switch is only applicable
>when the source file system implements
>getBlockLocations method and the target
>file system implements concat method
>  -copybuffersize  Size of the copy buffer to use. By default
> is 8192B.
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option

2017-12-12 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12833:
--
Attachment: HDFS-12833-branch-2.committed.patch

> Distcp : Update the usage of delete option for dependency with update and 
> overwrite option
> --
>
> Key: HDFS-12833
> URL: https://issues.apache.org/jira/browse/HDFS-12833
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.8.4
>
> Attachments: HDFS-12833-branch-2.001.patch, 
> HDFS-12833-branch-2.committed.patch, HDFS-12833.001.patch, HDFS-12833.patch
>
>
> Basically, the -delete option is applicable only with the -update or -overwrite 
> options. I tried it as per the usage message and am getting the below exception.
> {noformat}
> bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
> 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
> java.lang.IllegalArgumentException: Delete missing is applicable only with 
> update or overwrite options
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
> at 
> org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
> at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> Invalid arguments: Delete missing is applicable only with update or overwrite 
> options
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB, accepts
>bandwidth as a fraction.
>  -blocksperchunk  If set to a positive value, fileswith more
>blocks than this value will be split into
>chunks of  blocks to be
>transferred in parallel, and reassembled on
>the destination. By default,
> is 0 and the files will be
>transmitted in their entirety without
>splitting. This switch is only applicable
>when the source file system implements
>getBlockLocations method and the target
>file system implements concat method
>  -copybuffersize  Size of the copy buffer to use. By default
> is 8192B.
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
> {noformat}
> Even the documentation does not describe the proper usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288725#comment-16288725
 ] 

Xiao Chen commented on HDFS-12910:
--

Thanks for moving this forward, Nanda and Stephen! Latest patch looks pretty 
good to me. Also thanks Nanda for the understanding and Stephen for the new 
revs.

Checkstyle warnings seem relevant, please fix those. Findbugs and unit test 
failures look unrelated here.

bq. factor this out into one reusable method
Let's please do that. For readability it may be better to still do this on 2 
lines, so whoever reads the code here doesn't have to check the extracted method 
to figure out it's just rethrowing a BindException (as opposed to, say, throwing 
an IOE or RTE).
{code}
BindException newBe = appendAddressToBindException();
throw newBe;
{code}

Also saw one other minor thing not introduced by this patch, but it would be 
great if we could fix that too.
{code}
  if (localAddr.getPort() != infoSocAddr.getPort()) {
    throw new RuntimeException("Unable to bind on specified info port in secure " +
        "context. Needed " + streamingAddr.getPort() + ", got " + ss.getLocalPort());
  }
{code}
The message should say {{... Needed " + infoSocAddr.getPort() ...}}.

To follow up a bit on the discussion:
Technically we could use a local var to temporarily save the current address 
being bound, and just log that in the finally to have just one finally block. 
Don't think that's necessary here though. Looking at the {{getSecureResources}} 
method, I think it really should have been broken down into 2 methods: 
openRpcPort which returns the {{ServerSocket ss}}, and openWebServerPort which 
returns the {{ServerSocketChannel httpChannel}}. Since it's already written 
this way, patch 4 should be good as-is so we don't unnecessarily change too 
many lines solely for a log message improvement. :)

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch, HDFS-12910.004.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (eg the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(Na

[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288698#comment-16288698
 ] 

genericqa commented on HDFS-12881:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 
394 unchanged - 4 fixed = 394 total (was 398) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  9s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12881 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901792/HDFS-12881.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ba787524e308 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22373/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warning

[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288688#comment-16288688
 ] 

genericqa commented on HDFS-12907:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 101 unchanged - 0 fixed = 109 total (was 101) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestFileChecksum |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12907 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901789/HDFS-12907.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e84c5296390b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://bu

[jira] [Commented] (HDFS-12919) RBF: support erasure coding methods in RouterRpcServer

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288645#comment-16288645
 ] 

genericqa commented on HDFS-12919:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingPolicies |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12919 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901785/HDFS-12919.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4e2cd146895 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2abab1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22372/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-w

[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-12 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288641#comment-16288641
 ] 

Chris Douglas commented on HDFS-12920:
--

/cc [~arpitagarwal]

If we're not going to [change the config 
properties|https://issues.apache.org/jira/browse/HDFS-9847?focusedCommentId=15211227&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211227],
 and the client needs to load 3.x config files during rolling upgrades, then 
this isn't worth the hassle.

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is caused by HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to add values to hdfs-site.xml with all time units 
> removed, but the right fix may be to revert HDFS-10845 (and get rid of the 
> noisy warnings).
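
To make the failure mode above concrete, here is a small, hedged illustration 
("example.interval" is a made-up key standing in for any hdfs-default.xml 
property whose default gained a time-unit suffix); 3.x-aware readers use 
{{Configuration.getTimeDuration()}}, while older readers presumably parse the 
raw string as a long and fail:
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

// Hedged illustration only; the key name is hypothetical.
public class TimeUnitSuffixSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("example.interval", "30s");

    // 3.x-aware readers parse the unit suffix:
    long secs = conf.getTimeDuration("example.interval", 30, TimeUnit.SECONDS);
    System.out.println(secs);  // prints 30

    // Old readers that call getLong() instead would fail with
    // java.lang.NumberFormatException: For input string: "30s"
    // conf.getLong("example.interval", 30);
  }
}
{code}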



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288638#comment-16288638
 ] 

Yiqun Lin commented on HDFS-12920:
--

Hi [~djp], thanks for reporting this.
bq. A quick workaround is to add values to hdfs-site.xml with all time units 
removed, but the right fix may be to revert HDFS-10845 (and get rid of the 
noisy warnings).
I think we don't need to remove all the time-unit values in the hdfs-default 
file. Support for time-unit suffixes in HDFS configurations was implemented in 
HDFS-9847, and that change was only committed to trunk, not branch-2, so the 
new settings with time-unit suffixes only make sense in 3.x.x versions. So the 
right way should be to revert HDFS-10845 and get rid of the noisy warnings, as 
you suggested.
If we all agree on this approach, I will attach a patch to make the change.
Thanks.
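
For reference, the quick workaround quoted above amounts to overriding the 
affected keys in the cluster's hdfs-site.xml with unit-less values. A hedged 
sketch (the property below is only one example of a key whose 3.0 default 
gained a suffix; the exact set of affected keys is an assumption, not 
confirmed here):
{noformat}
<property>
  <!-- 3.0 default is "30s"; old MR jars need a plain number of seconds. -->
  <name>dfs.client.datanode-restart.timeout</name>
  <value>30</value>
</property>
{noformat}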

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is caused by HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to add values to hdfs-site.xml with all time units 
> removed, but the right fix may be to revert HDFS-10845 (and get rid of the 
> noisy warnings).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288636#comment-16288636
 ] 

genericqa commented on HDFS-12751:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
2 unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}151m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}225m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
|
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestExternalBlockReader |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.ozone

[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Patch Available  (was: Open)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Attachment: HDFS-9806.001.patch

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Open  (was: Patch Available)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-12 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Attachment: (was: HDFS-9806.001.patch)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288624#comment-16288624
 ] 

genericqa commented on HDFS-12818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 369 unchanged - 10 fixed = 369 total (was 379) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
|
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.tracing.TestTracingShortCircuitLocalRead |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.tracing.TestTracing |
|   | hadoop.net.TestNetworkTopology |
|   | hadoop.tools.TestJMXGet |
|   | hadoop.TestGenericRefresh |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestExternalBlockReader |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting

[jira] [Updated] (HDFS-12922) Arrays of length 1 cause 9.2% memory overhead

2017-12-12 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12922:
--
Attachment: screenshot-1.png

> Arrays of length 1 cause 9.2% memory overhead
> -
>
> Key: HDFS-12922
> URL: https://issues.apache.org/jira/browse/HDFS-12922
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: screenshot-1.png
>
>
> I recently obtained a big (over 60GiB) heap dump from a customer and analyzed 
> it using jxray (www.jxray.com). One source of memory waste that the tool 
> detected is arrays of length 1 that come from {{BlockInfo[] 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.blocks}} and 
> {{INode$Feature[] 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features}}. Only a small 
> fraction of these arrays (less than 10%) have a length greater than 1. 
> Collectively these arrays waste 5.5GiB, or 9.2% of the heap. See the attached 
> screenshot for more details.
> The reason why an array of length 1 is problematic is that every array in the 
> JVM has a header that takes between 16 and 20 bytes depending on the JVM 
> configuration. For a big enough array this 16-20 byte overhead is not a 
> concern, but if the array has only one element (that takes 4-8 bytes 
> depending on the JVM configuration), the overhead becomes bigger than the 
> array's "workload".
> In such a situation it makes sense to replace the array data field {{Foo[] 
> ar}} with an {{Object obj}} that would contain either a direct reference to 
> the array's single workload element, or a reference to the array if there is 
> more than one element. This change will require further code changes and type 
> casts. For example, code like {{return ar[i];}} becomes {{return (obj 
> instanceof Foo) ? (Foo) obj : ((Foo[]) obj)[i];}} and so on. This doesn't 
> look very pretty, but as far as I see, the code that deals with e.g. 
> INodeFile.blocks already contains various null checks, etc. So we will not 
> make the code much less readable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12922) Arrays of length 1 cause 9.2% memory overhead

2017-12-12 Thread Misha Dmitriev (JIRA)
Misha Dmitriev created HDFS-12922:
-

 Summary: Arrays of length 1 cause 9.2% memory overhead
 Key: HDFS-12922
 URL: https://issues.apache.org/jira/browse/HDFS-12922
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Misha Dmitriev
Assignee: Misha Dmitriev


I recently obtained a big (over 60GiB) heap dump from a customer and analyzed 
it using jxray (www.jxray.com). One source of memory waste that the tool 
detected is arrays of length 1 that come from {{BlockInfo[] 
org.apache.hadoop.hdfs.server.namenode.INodeFile.blocks}} and {{INode$Feature[] 
org.apache.hadoop.hdfs.server.namenode.INodeFile.features}}. Only a small 
fraction of these arrays (less than 10%) have a length greater than 1. 
Collectively these arrays waste 5.5GiB, or 9.2% of the heap. See the attached 
screenshot for more details.

The reason why an array of length 1 is problematic is that every array in the 
JVM has a header that takes between 16 and 20 bytes depending on the JVM 
configuration. For a big enough array this 16-20 byte overhead is not a 
concern, but if the array has only one element (that takes 4-8 bytes depending 
on the JVM configuration), the overhead becomes bigger than the array's 
"workload".

In such a situation it makes sense to replace the array data field {{Foo[] ar}} 
with an {{Object obj}} that would contain either a direct reference to the 
array's single workload element, or a reference to the array if there is more 
than one element. This change will require further code changes and type casts. 
For example, code like {{return ar[i];}} becomes {{return (obj instanceof Foo) 
? (Foo) obj : ((Foo[]) obj)[i];}} and so on. This doesn't look very pretty, but 
as far as I see, the code that deals with e.g. INodeFile.blocks already 
contains various null checks, etc. So we will not make the code much less 
readable.
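
A minimal, self-contained sketch of the single-element compaction idea 
described above; the class and field names here are illustrative, not the 
proposed patch:
{code:java}
/** Illustrative only: stores one Foo directly, or a Foo[] when there are several. */
final class CompactFooStore {
  private Object obj;  // either a Foo or a Foo[]

  void set(Foo[] ar) {
    // Keep a bare reference in the common single-element case to avoid the
    // 16-20 byte array header; keep the array itself otherwise.
    this.obj = (ar != null && ar.length == 1) ? ar[0] : ar;
  }

  Foo get(int i) {
    // Mirrors the cast pattern from the description above.
    return (obj instanceof Foo) ? (Foo) obj : ((Foo[]) obj)[i];
  }
}

final class Foo {}  // stand-in for BlockInfo / INode$Feature
{code}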



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288594#comment-16288594
 ] 

Íñigo Goiri commented on HDFS-12895:


[~huanbang1993], the commented code is in our internal PR, not in this patch.
The javadoc comment applies, though.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch
>
>
> Adding ACL support for Mount Table management. The following is the initial 
> design of ACL control for mount table management.
> Each mount table has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: This won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination> 
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds that the given mount table 
> already exists.
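
A small, hedged sketch of how the octal mode above maps onto {{FsPermission}} 
checks; the class name and printed checks are illustrative, not part of the 
patch:
{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

// Illustrative only: shows what READ/WRITE mean for a 0755 mount-table entry.
public class MountTableAclSketch {
  public static void main(String[] args) {
    FsPermission mode = new FsPermission((short) 0755);  // proposed default

    // Owner may add/remove/update the entry (WRITE) and list it (READ);
    // group and others may only list it (READ).
    System.out.println(mode.getUserAction().implies(FsAction.WRITE));   // true
    System.out.println(mode.getOtherAction().implies(FsAction.READ));   // true
    System.out.println(mode.getOtherAction().implies(FsAction.WRITE));  // false
  }
}
{code}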



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12918) NameNode fails to start after upgrade - Missing state in ECPolicy Proto

2017-12-12 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy resolved HDFS-12918.
---
Resolution: Won't Fix

We have an upgrade-incompatible fix landed in C6 at 
e565b5277d5b890dad107fe85e295a3907e4bfc1. The fix is necessary, and it verifies 
the EC policy state when loading the FSImage. This issue has nothing to do with 
the default value for the ECPolicyState field in the ErasureCodingPolicyProto. 
While the ECPolicyState field is optional in the ErasureCodingPolicyProto 
message for over-the-wire communications, it is mandatory in the FSImage for EC 
files. I hope the upgrade-incompatible changes before the C6 GA are OK. Please 
let me know if you have other thoughts. 
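
A hedged sketch (not the committed fix) of the kind of FSImage-loading check 
described above; it assumes the standard protobuf-generated accessors 
{{hasState()}}/{{getState()}} on {{ErasureCodingPolicyProto}}:
{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ErasureCodingPolicyProto;

// Illustrative only: require the state field to be present when it comes from a
// persisted FSImage, instead of silently falling back to the proto default.
public final class EcStateCheckSketch {
  public static void requireExplicitState(ErasureCodingPolicyProto p)
      throws IOException {
    if (!p.hasState()) {
      // An absent field would otherwise read back as the declared default
      // (ENABLED), which is the mis-interpretation this issue is about.
      throw new IOException("EC policy proto has no explicit state in FSImage");
    }
  }
}
{code}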

> NameNode fails to start after upgrade - Missing state in ECPolicy Proto 
> 
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional: it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12918) NameNode fails to start after upgrade - Missing state in ECPolicy Proto

2017-12-12 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12918:
--
Affects Version/s: 3.0.0-beta1
  Component/s: hdfs

> NameNode fails to start after upgrade - Missing state in ECPolicy Proto 
> 
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional: it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12918) NameNode fails to start after upgrade - Missing state in ECPolicy Proto

2017-12-12 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12918:
--
Summary: NameNode fails to start after upgrade - Missing state in ECPolicy 
Proto   (was: EC Policy defaults incorrectly to enabled in protobufs)

> NameNode fails to start after upgrade - Missing state in ECPolicy Proto 
> 
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional: it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12921) DFS.setReplication should throw exception on EC files

2017-12-12 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12921:
-
Summary: DFS.setReplication should throw exception on EC files  (was: 
DFS.setReplication should throw IOE on EC files)

> DFS.setReplication should throw exception on EC files
> -
>
> Key: HDFS-12921
> URL: https://issues.apache.org/jira/browse/HDFS-12921
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>
> This was checked in {{o.a.h.fs.shell.SetReplication#processPath}}; however, 
> {{DistributedFileSystem#setReplication()}} is also a public API, so we should 
> move the check into {{DistributedFileSystem}} to prevent calling this API 
> directly on an EC file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12921) DFS.setReplication should throw IOE on EC files

2017-12-12 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12921:


 Summary: DFS.setReplication should throw IOE on EC files
 Key: HDFS-12921
 URL: https://issues.apache.org/jira/browse/HDFS-12921
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-beta1
Reporter: Lei (Eddy) Xu


This was checked in {{o.a.h.fs.shell.SetReplication#processPath}}; however, 
{{DistributedFileSystem#setReplication()}} is also a public API, so we should 
move the check into {{DistributedFileSystem}} to prevent calling this API 
directly on an EC file.
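
A hedged sketch of the kind of guard described above, expressed at the public 
{{FileSystem}} API level rather than as the actual HDFS patch:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: reject setReplication on erasure-coded files up front.
public final class SetReplicationGuardSketch {
  public static boolean setReplicationChecked(FileSystem fs, Path p,
      short replication) throws IOException {
    FileStatus st = fs.getFileStatus(p);
    if (st.isErasureCoded()) {
      // Replication factor has no meaning for striped (EC) files, so fail fast
      // instead of silently accepting or ignoring the call.
      throw new IOException("Cannot set replication on erasure-coded file " + p);
    }
    return fs.setReplication(p, replication);
  }
}
{code}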



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10453) ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster.

2017-12-12 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288578#comment-16288578
 ] 

Xiang Li commented on HDFS-10453:
-

Agree. Thanks xiaoqiao!

> ReplicationMonitor thread could stuck for long time due to the race between 
> replication and delete of same file in a large cluster.
> ---
>
> Key: HDFS-10453
> URL: https://issues.apache.org/jira/browse/HDFS-10453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.1, 2.5.2, 2.7.1, 2.6.4
>Reporter: He Xiaoqiao
> Attachments: HDFS-10453-branch-2.001.patch, 
> HDFS-10453-branch-2.003.patch, HDFS-10453-branch-2.7.004.patch, 
> HDFS-10453-branch-2.7.005.patch, HDFS-10453.001.patch
>
>
> The ReplicationMonitor thread could get stuck for a long time and, with low 
> probability, lose data. Consider the typical scenario:
> (1) create and close a file with the default replicas(3);
> (2) increase replication (to 10) of the file.
> (3) delete the file while the ReplicationMonitor is scheduling blocks 
> belonging to that file for replication.
> If the ReplicationMonitor stall reappears, the NameNode will print logs such as:
> {code:xml}
> 2016-04-19 10:20:48,083 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> ..
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
> replicas: expected size is 7 but only 0 storage types can be selected 
> (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, 
> DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) All required storage types are unavailable:  
> unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> This happens because two threads (#NameNodeRpcServer and #ReplicationMonitor) 
> process the same block at the same moment:
> (1) ReplicationMonitor#computeReplicationWorkForBlocks gets blocks to 
> replicate and leaves the global lock.
> (2) FSNamesystem#delete is invoked to delete blocks and then clears the 
> references in the blocks map, neededReplications, etc. The block's NumBytes is 
> set to NO_ACK (Long.MAX_VALUE), which is used to indicate that the block 
> deletion does not need an explicit ACK from the node. 
> (3) ReplicationMonitor#computeReplicationWorkForBlocks continues to 
> chooseTargets for the same blocks, and no node will be selected after 
> traversing the whole cluster because no node choice satisfies the goodness 
> criteria (remaining space must reach the required size Long.MAX_VALUE). 
> During stage (3) the ReplicationMonitor is stuck for a long time, especially 
> in a large cluster. invalidateBlocks & neededReplications keep growing with no 
> consumers; at worst, data will be lost.
> This can mostly be avoided by skipping chooseTarget for BlockCommand.NO_ACK 
> blocks and removing them from neededReplications.
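
A tiny, hedged sketch of the skip condition proposed above (the constant merely 
mirrors {{BlockCommand.NO_ACK}}; this is not the actual patch):
{code:java}
// Illustrative only.
public final class NoAckSkipSketch {
  /** Mirrors BlockCommand.NO_ACK, i.e. Long.MAX_VALUE. */
  public static final long NO_ACK = Long.MAX_VALUE;

  /**
   * A block whose length was set to NO_ACK by a concurrent delete can never be
   * placed (no node has Long.MAX_VALUE of free space), so the ReplicationMonitor
   * should drop it from neededReplications instead of calling chooseTarget.
   */
  public static boolean shouldSkipReplication(long blockNumBytes) {
    return blockNumBytes == NO_ACK;
  }
}
{code}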



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-12 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288576#comment-16288576
 ] 

Ajay Kumar commented on HDFS-12881:
---

[~jlowe], removed yarn and hadoop-common related changes in patch v4. Created 
[HADOOP-15114] to add {{IOUtils.closeStreams(...)}}.

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch, 
> HDFS-12881.003.patch, HDFS-12881.004.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.
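
A short sketch of the try-with-resources alternative described above (the 
stream creation and method name are illustrative):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: close() failures now propagate instead of being logged away.
public final class CloseStreamSketch {
  public static void writeAll(FileSystem fs, Path path, byte[] data)
      throws IOException {
    try (FSDataOutputStream out = fs.create(path)) {
      out.write(data);  // IOExceptions from write() propagate as before
    }                   // close() runs automatically; its IOException propagates too
  }
}
{code}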



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-12 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12920:
---
Target Version/s: 3.0.1  (was: 3.0.0)

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is caused by HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to add values to hdfs-site.xml with all time units 
> removed, but the right fix may be to revert HDFS-10845 (and get rid of the 
> noisy warnings).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288574#comment-16288574
 ] 

Andrew Wang commented on HDFS-12920:


Particularly since there's a workaround, let's bump this to 3.0.1.

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is caused by HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to add values to hdfs-site.xml with all time units 
> removed, but the right fix may be to revert HDFS-10845 (and get rid of the 
> noisy warnings).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-12773) RBF: Improve State Store FS implementation

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12773 stopped by Íñigo Goiri.
--
> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12773.000.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12773:
---
Status: Patch Available  (was: Open)

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12773.000.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12773:
---
Status: In Progress  (was: Patch Available)

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12773.000.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-12 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288568#comment-16288568
 ] 

Junping Du commented on HDFS-12920:
---

CC [~andrew.wang], [~linyiqun], [~chris.douglas].

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to add the values to hdfs-site.xml with all time units 
> removed, but the right fix may be to revert HDFS-10845 (and get rid of the 
> noisy warnings).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-12 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12920:
--
Description: 
After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
3.0.0 RC1 and ran a job, which failed with the following errors:
{noformat}
2017-12-12 13:29:06,824 INFO [main] org.apache.hadoop.service.AbstractService: 
Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; 
cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
{noformat}
This is because of HDFS-10845: we added time units to hdfs-default.xml, but 
they cannot be recognized by old-version MR jars. 
This breaks our rolling-upgrade story, so it should be marked as a blocker.
A quick workaround is to add the values to hdfs-site.xml with all time units 
removed, but the right fix may be to revert HDFS-10845 (and get rid of the 
noisy warnings).

  was:
After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
3.0.0 RC1 and ran a job, which failed with the following errors:
{noformat}
2017-12-12 13:29:06,824 INFO [main] org.apache.hadoop.service.AbstractService: 
Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; 
cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
{noformat}
This is because of HDFS-9821: we added time units to hdfs-default.xml, but 
they cannot be recognized by old-version MR jars. 
This breaks our rolling-upgrade story, so it should be marked as a blocker.
A quick workaround is to add the values to hdfs-site.xml with all time units 
removed, but the right fix may be to revert HDFS-9821.


> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lan

[jira] [Updated] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-12 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12920:
--
Description: 
After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
3.0.0 RC1 and ran a job, which failed with the following errors:
{noformat}
2017-12-12 13:29:06,824 INFO [main] org.apache.hadoop.service.AbstractService: 
Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; 
cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
{noformat}
This is because of HDFS-9821: we added time units to hdfs-default.xml, but 
they cannot be recognized by old-version MR jars. 
This breaks our rolling-upgrade story, so it should be marked as a blocker.
A quick workaround is to add the values to hdfs-site.xml with all time units 
removed, but the right fix may be to revert HDFS-9821.

  was:
I tried to deploy the 2.9.0 tarball with 3.0.0 RC1 and ran a job, which failed 
with the following errors:
{noformat}
2017-12-12 13:29:06,824 INFO [main] org.apache.hadoop.service.AbstractService: 
Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; 
cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
{noformat}
This is because of HDFS-9821: we added time units to hdfs-default.xml, but 
they cannot be recognized by old-version MR jars. 
This breaks our rolling-upgrade story, so it should be marked as a blocker.
A quick workaround is to add the values to hdfs-site.xml with all time units 
removed, but the right fix may be to revert HDFS-9821.


> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop

[jira] [Created] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-12 Thread Junping Du (JIRA)
Junping Du created HDFS-12920:
-

 Summary: HDFS default value change (with adding time unit) breaks 
old version MR tarball work with new version (3.0) of hadoop
 Key: HDFS-12920
 URL: https://issues.apache.org/jira/browse/HDFS-12920
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Junping Du
Priority: Blocker


I tried to deploy the 2.9.0 tarball with 3.0.0 RC1 and ran a job, which failed 
with the following errors:
{noformat}
2017-12-12 13:29:06,824 INFO [main] org.apache.hadoop.service.AbstractService: 
Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; 
cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NumberFormatException: For input string: "30s"
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
{noformat}
This is because of HDFS-9821: we added time units to hdfs-default.xml, but 
they cannot be recognized by old-version MR jars. 
This breaks our rolling-upgrade story, so it should be marked as a blocker.
A quick workaround is to add the values to hdfs-site.xml with all time units 
removed, but the right fix may be to revert HDFS-9821.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-12 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288550#comment-16288550
 ] 

Anbang Hu edited comment on HDFS-12895 at 12/13/17 12:56 AM:
-

Thanks for Yiqun's patch. Per Inigo's request, a few comments are listed:
* Comments in RouterAdminServer.java have an unnecessary "that"
{code:java}
  /**
   * Permission related info that used for constructing new router permission
   * checker instance.
   */
  private static String routerOwner;
  private static String superGroup;
  private static boolean isPermissionEnabled;
...
  /**
   * Get a new permission checker that used for making mount table access
   * control. This method will be invoked during each RPC call in router
   * admin server.
   *
   * @return
   * @throws AccessControlException
   */
  public static RouterPermissionChecker getPermissionChecker()
  throws AccessControlException {
if (!isPermissionEnabled) {
  return null;
}
{code}
* There is a piece of commented-out code in 
{{MountTableStoreImpl.RemoveMountTableEntryResponse}}


was (Author: huanbang1993):
Thanks for Yiqun's patch. Per Inigo's request, a few comments are listed:
* Comments in RouterAdminServer.java have an unnecessary "that"
{code:java}
  /**
   * Permission related info that used for constructing new router permission
   * checker instance.
   */
  private static String routerOwner;
  private static String superGroup;
  private static boolean isPermissionEnabled;
...
  /**
   * Get a new permission checker that used for making mount table access
   * control. This method will be invoked during each RPC call in router
   * admin server.
   *
   * @return
   * @throws AccessControlException
   */
  public static RouterPermissionChecker getPermissionChecker()
  throws AccessControlException {
if (!isPermissionEnabled) {
  return null;
}
{code}
* There is a piece of commented-out code in 
MountTableStoreImpl.RemoveMountTableEntryResponse

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch
>
>
> Adding ACL support for Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command for the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table entry, just 
> execute the add command again. This command not only adds a new mount table 
> entry but also updates an existing entry once it finds that the given mount 
> table entry already exists. 
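
As a rough illustration of the octal-mode check sketched above, the following
uses {{org.apache.hadoop.fs.permission.FsPermission}} directly. The
{{canModify}} helper and the hard-coded user/group names are hypothetical; they
only stand in for however the router actually resolves the caller and the mount
entry's owner.

{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class MountTableAclSketch {

  /** Hypothetical check: may this caller modify a mount table entry? */
  static boolean canModify(String user, String[] groups,
      String owner, String group, FsPermission mode) {
    if (user.equals(owner)) {
      return mode.getUserAction().implies(FsAction.WRITE);
    }
    for (String g : groups) {
      if (g.equals(group)) {
        return mode.getGroupAction().implies(FsAction.WRITE);
      }
    }
    return mode.getOtherAction().implies(FsAction.WRITE);
  }

  public static void main(String[] args) {
    // 0755 is the default mode proposed in the description above.
    FsPermission mode = new FsPermission((short) 0755);
    System.out.println(
        canModify("hdfs", new String[] {"hadoop"}, "hdfs", "hadoop", mode));   // true (owner)
    System.out.println(
        canModify("alice", new String[] {"users"}, "hdfs", "hadoop", mode));   // false (other)
  }
}
{code}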



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-12 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288550#comment-16288550
 ] 

Anbang Hu commented on HDFS-12895:
--

Thanks for Yiqun's patch. Per Inigo's request, a few comments are listed:
* Comments in RouterAdminServer.java have an unnecessary "that"
{code:java}
  /**
   * Permission related info that used for constructing new router permission
   * checker instance.
   */
  private static String routerOwner;
  private static String superGroup;
  private static boolean isPermissionEnabled;
...
  /**
   * Get a new permission checker that used for making mount table access
   * control. This method will be invoked during each RPC call in router
   * admin server.
   *
   * @return
   * @throws AccessControlException
   */
  public static RouterPermissionChecker getPermissionChecker()
  throws AccessControlException {
if (!isPermissionEnabled) {
  return null;
}
{code}
* There is a piece of commented-out code in 
MountTableStoreImpl.RemoveMountTableEntryResponse

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch
>
>
> Adding ACL support for Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command for the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table entry, just 
> execute the add command again. This command not only adds a new mount table 
> entry but also updates an existing entry once it finds that the given mount 
> table entry already exists. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-12 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288550#comment-16288550
 ] 

Anbang Hu edited comment on HDFS-12895 at 12/13/17 12:57 AM:
-

Thanks for Yiqun's patch. Per Inigo's request, a few comments are listed:
* Comments in {{RouterAdminServer.java}} have an unnecessary "that"
{code:java}
  /**
   * Permission related info that used for constructing new router permission
   * checker instance.
   */
  private static String routerOwner;
  private static String superGroup;
  private static boolean isPermissionEnabled;
...
  /**
   * Get a new permission checker that used for making mount table access
   * control. This method will be invoked during each RPC call in router
   * admin server.
   *
   * @return
   * @throws AccessControlException
   */
  public static RouterPermissionChecker getPermissionChecker()
  throws AccessControlException {
if (!isPermissionEnabled) {
  return null;
}
{code}
* There is a piece of commented-out code in 
{{MountTableStoreImpl.RemoveMountTableEntryResponse}}


was (Author: huanbang1993):
Thanks for Yiqun's patch. Per Inigo's request, a few comments are listed:
* Comments in RouterAdminServer.java have an unnecessary "that"
{code:java}
  /**
   * Permission related info that used for constructing new router permission
   * checker instance.
   */
  private static String routerOwner;
  private static String superGroup;
  private static boolean isPermissionEnabled;
...
  /**
   * Get a new permission checker that used for making mount table access
   * control. This method will be invoked during each RPC call in router
   * admin server.
   *
   * @return
   * @throws AccessControlException
   */
  public static RouterPermissionChecker getPermissionChecker()
  throws AccessControlException {
if (!isPermissionEnabled) {
  return null;
}
{code}
* There is a piece of commented-out code in 
{{MountTableStoreImpl.RemoveMountTableEntryResponse}}

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch
>
>
> Adding ACL support for Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command for the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table entry, just 
> execute the add command again. This command not only adds a new mount table 
> entry but also updates an existing entry once it finds that the given mount 
> table entry already exists. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-12 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12881:
--
Attachment: HDFS-12881.004.patch

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch, 
> HDFS-12881.003.patch, HDFS-12881.004.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.
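
A minimal before/after sketch of the pattern described above; the stream, path
and data here are illustrative and not taken from the patch.

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CloseErrorDemo {

  // Problematic shape (as in the snippet above): an IOException thrown by
  // close() inside the finally block is only logged by the cleanup helper,
  // so the caller never learns the output may be partial or corrupt.
  //
  //   try {
  //     out.write(data);
  //   } finally {
  //     IOUtils.cleanupWithLogger(LOG, out);   // swallows close() failures
  //   }

  // Preferred shape: try-with-resources closes the stream and propagates an
  // IOException from either write() or close() to the caller.
  static void writeAndClose(String path, byte[] data) throws IOException {
    try (OutputStream out = Files.newOutputStream(Paths.get(path))) {
      out.write(data);
    } // close() happens here; any IOException it throws reaches the caller
  }

  public static void main(String[] args) throws IOException {
    writeAndClose("/tmp/close-error-demo.txt",
        "hello".getBytes(StandardCharsets.UTF_8));
  }
}
{code}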



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12918) EC Policy defaults incorrectly to enabled in protobufs

2017-12-12 Thread Zach Amsden (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288537#comment-16288537
 ] 

Zach Amsden commented on HDFS-12918:


Maybe that is the real bug then.  I got this exception when upgrading an 
existing HDFS cluster - reformatting was required:


{noformat}
Failed to load image from 
FSImageFile(file=/data/2/dfs/nn/current/fsimage_8728887, 
cpktTxId=8728887)
java.lang.IllegalArgumentException: Missing state field in ErasureCodingPolicy 
proto
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.convertErasureCodingPolicyInfo(PBHelperClient.java:2973)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadErasureCodingSection(FSImageFormatProtobuf.java:386)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:298)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:188)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:928)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:912)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:785)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:719)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1072)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:704)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
{noformat}


> EC Policy defaults incorrectly to enabled in protobufs
> --
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional: it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-12 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12907:
--
Attachment: HDFS-12907.003.patch

Attaching a new patch.
The following comments are addressed:
1. Allow users to see raw xattrs if they have read access.
2. Added a test to verify that users who don't have access are not allowed to 
call getattr.
3. Fixed the switch statement indentation.

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.003.patch, HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This allows tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.
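
For readers unfamiliar with the prefix, a short client-side sketch of a raw,
read-only access; the path is made up, and under the current restriction
discussed here the call only succeeds for a superuser.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReservedRawReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Reading through /.reserved/raw returns the raw (still encrypted) bytes
    // of an EZ file, since no FE info is returned for client-side decryption.
    Path raw = new Path("/.reserved/raw/ez/data.bin");   // illustrative path
    try (FSDataInputStream in = fs.open(raw)) {
      IOUtils.copyBytes(in, System.out, conf, false);
    }
  }
}
{code}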



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12918) EC Policy defaults incorrectly to enabled in protobufs

2017-12-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288521#comment-16288521
 ] 

Manoj Govindassamy commented on HDFS-12918:
---


A new check added in the converter seems to not be backward compatible. It is 
going to break upgrades from the previous image format, where 
ErasureCodingPolicyProto didn't have the state field. It is supposed to be an 
optional field, and the check below needs to be relaxed as well. [~xiaochen], 
your thoughts please?

{noformat}
  /**
   * Convert the protobuf to a {@link ErasureCodingPolicyInfo}. This should only
   * be needed when the caller is interested in the state of the policy.
   */
  public static ErasureCodingPolicyInfo convertErasureCodingPolicyInfo(
  ErasureCodingPolicyProto proto) {
ErasureCodingPolicy policy = convertErasureCodingPolicy(proto);
ErasureCodingPolicyInfo info = new ErasureCodingPolicyInfo(policy);
Preconditions.checkArgument(proto.hasState(),<==
"Missing state field in ErasureCodingPolicy proto");
info.setState(convertECState(proto.getState()));
return info;
  }
{noformat}
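
A hedged sketch of the relaxation being suggested, reusing the names visible in
the snippet above (it is a fragment of that method, not standalone code);
whether DISABLED is the right fallback for old images is exactly the open
question in this thread.

{code:java}
  public static ErasureCodingPolicyInfo convertErasureCodingPolicyInfo(
      ErasureCodingPolicyProto proto) {
    ErasureCodingPolicy policy = convertErasureCodingPolicy(proto);
    ErasureCodingPolicyInfo info = new ErasureCodingPolicyInfo(policy);
    if (proto.hasState()) {
      info.setState(convertECState(proto.getState()));
    } else {
      // Old fsimages carry no state field; fall back to the documented
      // default instead of failing a Preconditions check.
      info.setState(ErasureCodingPolicyState.DISABLED);
    }
    return info;
  }
{code}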


> EC Policy defaults incorrectly to enabled in protobufs
> --
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional: it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288500#comment-16288500
 ] 

Chen Liang commented on HDFS-12000:
---

Thanks [~xyao] for checking the tests! I ran the tests you mentioned and the 
failed tests from the latest Jenkins run locally. All tests passed except for 
{{TestOzoneRpcClient.testPutKeyRatisThreeNodes}}, which fails even without the 
patch.

> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> HDFS-12000-HDFS-7240.009.patch, HDFS-12000-HDFS-7240.010.patch, 
> HDFS-12000-HDFS-7240.011.patch, OzoneVersion.001.pdf
>
>
> The REST interface of Ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. I will post a detailed design doc so that we can 
> talk about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12626) Ozone : delete open key entries that will no longer be closed

2017-12-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12626:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~vagarychen] for the contribution. I've committed the patch to the 
feature branch. 

> Ozone : delete open key entries that will no longer be closed
> -
>
> Key: HDFS-12626
> URL: https://issues.apache.org/jira/browse/HDFS-12626
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: HDFS-7240
>
> Attachments: HDFS-12626-HDFS-7240.001.patch, 
> HDFS-12626-HDFS-7240.002.patch, HDFS-12626-HDFS-7240.003.patch, 
> HDFS-12626-HDFS-7240.004.patch, HDFS-12626-HDFS-7240.005.patch, 
> HDFS-12626-HDFS-7240.006.patch
>
>
> HDFS-12543 introduced the notion of an "open key": when a key is opened, an 
> open-key entry gets persisted, and only after the client calls close is this 
> entry made visible. One issue is that if the client never calls close 
> (e.g. because it failed), that open-key entry will never be deleted from the 
> metadata. This JIRA tracks this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12918) EC Policy defaults incorrectly to enabled in protobufs

2017-12-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288485#comment-16288485
 ] 

Xiao Chen commented on HDFS-12918:
--

Thanks Zach for reporting this and Manoj for investigating.

As Manoj pointed out, I found this too while fixing HDFS-12682. Although it is 
a mistake there, it was not changed, for fear of incompatible behavior, when we 
discussed whether it should be set to 'DISABLED'. 

According to [protobuf 
doc|https://developers.google.com/protocol-buffers/docs/proto#optional]: {{If 
the default value is not specified for an optional element, a type-specific 
default value is used instead...For enums, the default value is the first value 
listed in the enum's type definition}}.

I think we check whether a state is set in the protobuf via 
{{proto.hasState()}}, so this wrong default shouldn't be visible downstream. 
So, also echoing Manoj: where is this observed?

> EC Policy defaults incorrectly to enabled in protobufs
> --
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional: it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12919) RBF: support erasure coding methods in RouterRpcServer

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Summary: RBF: support erasure coding methods in RouterRpcServer  (was: RBF: 
MR sets erasure coding by default)

> RBF: support erasure coding methods in RouterRpcServer
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12919.000.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288479#comment-16288479
 ] 

Íñigo Goiri commented on HDFS-12919:


I added a very quick implementation of the EC methods.
I don't have a cluster to test this, so I'll try to do a unit test to cover 
these basic operations.

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12919.000.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Status: Patch Available  (was: Open)

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12919.000.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Attachment: HDFS-12919.000.patch

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12919.000.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-12 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12818:
---
Attachment: HDFS-12818.007.patch

Argh, {{TestBalancerWithMultipleNameNodes}} snuck by me because it was timing 
out instead of failing. Thanks for the catch [~shv]. Confirmed that all of the 
{{TestBalancer*}} tests actually pass locally now with v007 patch.

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch, HDFS-12818.007.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12626) Ozone : delete open key entries that will no longer be closed

2017-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288466#comment-16288466
 ] 

Xiaoyu Yao commented on HDFS-12626:
---

+1 for the latest patch. I will commit it shortly.

> Ozone : delete open key entries that will no longer be closed
> -
>
> Key: HDFS-12626
> URL: https://issues.apache.org/jira/browse/HDFS-12626
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12626-HDFS-7240.001.patch, 
> HDFS-12626-HDFS-7240.002.patch, HDFS-12626-HDFS-7240.003.patch, 
> HDFS-12626-HDFS-7240.004.patch, HDFS-12626-HDFS-7240.005.patch, 
> HDFS-12626-HDFS-7240.006.patch
>
>
> HDFS-12543 introduced the notion of an "open key": when a key is opened, an 
> open-key entry gets persisted, and only after the client calls close is this 
> entry made visible. One issue is that if the client never calls close 
> (e.g. because it failed), that open-key entry will never be deleted from the 
> metadata. This JIRA tracks this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288437#comment-16288437
 ] 

genericqa commented on HDFS-12000:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}211m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12000 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901753/HDFS-12000-HDFS-7240.011.patch
 |
| Optional 

[jira] [Commented] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288418#comment-16288418
 ] 

Íñigo Goiri commented on HDFS-12919:


Correct, the {{Router}} implements the same interfaces as the Namenode (i.e., 
{{ClientProtocol}}) and is accessed using {{DistributedFileSystem}}.
Right now, it leaves a few RPC calls unimplemented (they throw 
{{UnsupportedOperationException}}).
I can implement {{setErasureCodingPolicy()}} for now.
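
For reference, a minimal sketch of what that could look like on the Router side (this is not an actual patch; the method name and {{checkOperation}} come from the stack trace in the description, while the parameter list and {{forwardToSubcluster}} are assumptions/placeholders for the Router's usual mount-table resolution and forwarding):
{code:java}
// Sketch only, not the committed fix. The exact checkOperation() signature is
// assumed; forwardToSubcluster() is a hypothetical placeholder for the
// mount-table resolution + RPC forwarding that the other Router calls perform.
@Override // ClientProtocol
public void setErasureCodingPolicy(String src, String ecPolicyName)
    throws IOException {
  checkOperation(OperationCategory.WRITE);   // accept the call instead of rejecting it
  // Resolve src against the mount table and relay the call to the NameNode of
  // the subcluster that owns the path.
  forwardToSubcluster("setErasureCodingPolicy", src, ecPolicyName);
}
{code}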

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Affects Version/s: 3.0.0

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Labels: RBF  (was: )

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-12-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12751:
--
Status: Patch Available  (was: In Progress)

> Ozone: SCM: update container allocated size to container db for all the open 
> containers in ContainerStateManager#close
> --
>
> Key: HDFS-12751
> URL: https://issues.apache.org/jira/browse/HDFS-12751
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Chen Liang
> Attachments: HDFS-12751-HDFS-7240.001.patch
>
>
> Container allocated size is maintained in memory by 
> {{ContainerStateManager}}; this has to be updated in the container db when we 
> shut down SCM. {{ContainerStateManager#close}} will be called during SCM 
> shutdown, so updating the allocated size for all the open containers should be 
> done here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-12-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12751:
--
Attachment: HDFS-12751-HDFS-7240.001.patch

Post v001 patch.

> Ozone: SCM: update container allocated size to container db for all the open 
> containers in ContainerStateManager#close
> --
>
> Key: HDFS-12751
> URL: https://issues.apache.org/jira/browse/HDFS-12751
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Chen Liang
> Attachments: HDFS-12751-HDFS-7240.001.patch
>
>
> Container allocated size is maintained in memory by 
> {{ContainerStateManager}}; this has to be updated in the container db when we 
> shut down SCM. {{ContainerStateManager#close}} will be called during SCM 
> shutdown, so updating the allocated size for all the open containers should be 
> done here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288411#comment-16288411
 ] 

Robert Kanter commented on HDFS-12919:
--

I don't know enough about the {{RouterRpcServer}}, but MAPREDUCE-6954 does a 
check for {{instanceof DistributedFileSystem}} before casting the 
{{FileSystem}} to a {{DistributedFileSystem}} and calling 
{{setErasureCodingPolicy}}.  Is the {{RouterRpcServer}} a subclass of 
{{DistributedFileSystem}}?  If so, then it should implement everything that 
{{DistributedFileSystem}} does, including {{setErasureCodingPolicy}} (so, #3).  If 
not, then it shouldn't be possible to hit this.
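
For context, the guard being described looks roughly like this (a sketch, not the actual MAPREDUCE-6954 code; method and parameter names are illustrative):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class StagingDirEcGuard {
  // Only attempt the EC call when the FileSystem really is HDFS. Note that an
  // RBF Router is also reached through DistributedFileSystem, so this check
  // does not keep the call from reaching RouterRpcServer.
  static void setPolicyIfHdfs(FileSystem fs, Path stagingDir, String ecPolicyName)
      throws IOException {
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      dfs.setErasureCodingPolicy(stagingDir, ecPolicyName);
    }
  }

  private StagingDirEcGuard() {
  }
}
{code}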

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288406#comment-16288406
 ] 

Íñigo Goiri commented on HDFS-12919:


[~pbacsko] and [~rkanter], you guys were involved in MAPREDUCE-6954, what is 
your preference for the fix?

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288404#comment-16288404
 ] 

Íñigo Goiri commented on HDFS-12919:


We have a few options:
# Make the Router log the error and not throw an exception.
# Capture the exception in 
{{JobResourceUploader#disableErasureCodingForPath()}}.
# Implement {{setErasureCodingPolicy()}} in {{RouterRpcServer}}.
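
A sketch of what option 2 could look like (illustrative only; not the actual {{JobResourceUploader}} code, and the logging is simplified):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.ipc.RemoteException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class DisableEcBestEffort {
  private static final Logger LOG =
      LoggerFactory.getLogger(DisableEcBestEffort.class);

  // Tolerate file systems (like the Router today) that reject the EC call,
  // instead of failing the whole job submission.
  static void disableErasureCodingForPath(DistributedFileSystem dfs,
      Path stagingDir, String ecPolicyName) throws IOException {
    try {
      dfs.setErasureCodingPolicy(stagingDir, ecPolicyName);
    } catch (RemoteException e) {
      if (UnsupportedOperationException.class.getName().equals(e.getClassName())) {
        // e.g. an RBF Router that has not implemented setErasureCodingPolicy()
        LOG.warn("Erasure coding not supported on {}, skipping", stagingDir, e);
      } else {
        throw e;
      }
    }
  }

  private DisableEcBestEffort() {
  }
}
{code}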

> RBF: MR sets erasure coding by default
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12919) RBF: MR sets erasure coding by default

2017-12-12 Thread JIRA
Íñigo Goiri created HDFS-12919:
--

 Summary: RBF: MR sets erasure coding by default
 Key: HDFS-12919
 URL: https://issues.apache.org/jira/browse/HDFS-12919
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
However, the {{Router}} does not support this operation and throws:
{code}
17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
/tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): 
Operation "setErasureCodingPolicy" is not supported
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-5926) Documentation should clarify dfs.datanode.du.reserved impact from reserved disk capacity

2017-12-12 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-5926:
-
Summary: Documentation should clarify dfs.datanode.du.reserved impact from 
reserved disk capacity  (was: documation should clarify 
dfs.datanode.du.reserved wrt reserved disk capacity)

> Documentation should clarify dfs.datanode.du.reserved impact from reserved 
> disk capacity
> 
>
> Key: HDFS-5926
> URL: https://issues.apache.org/jira/browse/HDFS-5926
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.20.2
>Reporter: Alexander Fahlke
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-5926-1.patch
>
>
> I'm using hadoop-0.20.2 on Debian Squeeze and ran into the same confusion as 
> many others with the dfs.datanode.du.reserved parameter. One day some 
> data nodes hit out-of-disk errors although there was space left on the disks.
> The following values are rounded to make the problem clearer:
> - the disk for the DFS data has 1000GB and only one partition (ext3) for DFS 
> data
> - you plan to set dfs.datanode.du.reserved to 20GB
> - the reserved-blocks-percentage set by tune2fs is 5% (the default)
> That gives all users except root 5% less capacity than they can use, although 
> the system reports the total of 1000GB as usable for all users via df. The 
> Hadoop daemons are not running as root.
> If I read it right, Hadoop gets the free capacity via df.
>  
> Starting in 
> {{/src/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java}} on line 
> 350: {{return usage.getCapacity()-reserved;}}
> going to {{/src/core/org/apache/hadoop/fs/DF.java}}, which says:
> {{"Filesystem disk space usage statistics. Uses the unix 'df' program"}}
> When you have 5% reserved by tune2fs (in our case 50GB) and you give 
> dfs.datanode.du.reserved only 20GB, you can run into out-of-disk errors 
> that Hadoop can't handle.
> In this case you must add the planned 20GB of du.reserved to the capacity 
> reserved by tune2fs. This results in (at least) 70GB for 
> dfs.datanode.du.reserved in my case.
> Two ideas:
> # The documentation must be clear at this point to avoid this problem.
> # Hadoop could check for the space reserved by tune2fs (or other tools) and add 
> this value to the dfs.datanode.du.reserved parameter.
> This ticket is a follow-up from the mailing list: 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-user/201312.mbox/%3CCAHodO=Kbv=13T=2otz+s8nsodbs1icnzqyxt_0wdfxy5gks...@mail.gmail.com%3E
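
To make the numbers above concrete, here is the same arithmetic as a small, self-contained illustration (it mirrors the {{usage.getCapacity()-reserved}} computation quoted from FSDataset; it is not Hadoop code):
{code:java}
public class DuReservedExample {
  public static void main(String[] args) {
    long gb = 1024L * 1024 * 1024;
    long diskCapacity   = 1000 * gb;              // one 1000GB ext3 partition
    long tune2fsReserve = diskCapacity * 5 / 100; // 5% reserved for root = 50GB
    long duReserved     = 20 * gb;                // dfs.datanode.du.reserved

    long dnThinksUsable  = diskCapacity - duReserved;      // 980GB (capacity - reserved)
    long nonRootWritable = diskCapacity - tune2fsReserve;  // 950GB really writable

    // The DataNode (not running as root) believes it can use 30GB more than a
    // non-root process can actually write, so it may hit out-of-disk errors
    // while df still shows free space. Raising dfs.datanode.du.reserved to
    // 20GB + 50GB = 70GB keeps its estimate below the real limit.
    System.out.println("Overestimate: " + (dnThinksUsable - nonRootWritable) / gb + " GB");
  }
}
{code}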



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12626) Ozone : delete open key entries that will no longer be closed

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288380#comment-16288380
 ] 

genericqa commented on HDFS-12626:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}137m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}213m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12626 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901735/HDFS-12626-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  compile  j

[jira] [Commented] (HDFS-12918) EC Policy defaults incorrectly to enabled in protobufs

2017-12-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288378#comment-16288378
 ] 

Manoj Govindassamy commented on HDFS-12918:
---

[~zamsden],
  There is an addendum patch, HDFS-12682, after HDFS-12258 that makes the policy 
immutable by pulling the EC state into {{ErasureCodingPolicyInfo}}. As you 
pointed out, the {{hdfs.proto}} default value looks wrong to me as well. But in the 
PBHelperClient code there is explicit handling for this, both while saving 
the ECPolicy and while retrieving it. So an ECPI saved to and retrieved from the 
FSImage should be right. 

{{PBHelperClient}}
{noformat}
  /**
   * Convert the protobuf to a {@link ErasureCodingPolicyInfo}. This should only
   * be needed when the caller is interested in the state of the policy.
   */
  public static ErasureCodingPolicyInfo convertErasureCodingPolicyInfo(
  ErasureCodingPolicyProto proto) {
ErasureCodingPolicy policy = convertErasureCodingPolicy(proto);
ErasureCodingPolicyInfo info = new ErasureCodingPolicyInfo(policy);
Preconditions.checkArgument(proto.hasState(),
"Missing state field in ErasureCodingPolicy proto");
info.setState(convertECState(proto.getState()));  <===
return info;
  }

  /**
   * Convert a {@link ErasureCodingPolicyInfo} to protobuf.
   * The protobuf will have the policy, and state. State is relevant when:
   * 1. Persisting a policy to fsimage
   * 2. Returning the policy to the RPC call
   * {@link DistributedFileSystem#getAllErasureCodingPolicies()}
   */
  public static ErasureCodingPolicyProto convertErasureCodingPolicy(
  ErasureCodingPolicyInfo info) {
final ErasureCodingPolicyProto.Builder builder =
createECPolicyProtoBuilder(info.getPolicy());
builder.setState(convertECState(info.getState()));  <===
return builder.build();
  }

{noformat}

Listing Policies:
{noformat}
$ hdfs ec -listPolicies
Erasure Coding Policies:
ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5], State=DISABLED
ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2], State=ENABLED
ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1], State=ENABLED
ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
CellSize=1048576, Id=3], State=DISABLED
ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4], State=DISABLED
{noformat}

But there is another version of {{convertErasureCodingPolicy}} which takes in 
only an {{ErasureCodingPolicy}}; there the state is missing and the default state 
from {{ErasureCodingPolicyProto}} will be used.

{noformat}
  /**
   * Convert a {@link ErasureCodingPolicy} to protobuf.
   * This means no state of the policy will be set on the protobuf.
   */
  public static ErasureCodingPolicyProto convertErasureCodingPolicy(
  ErasureCodingPolicy policy) {
return createECPolicyProtoBuilder(policy).build();
  }
{noformat}

Probably you are seeing the default value of the EC state from the callers 
(like ListStatus, BlockRecovery, BlockGroupChecksum, etc.) of the above convert 
util. Can you please confirm where you are seeing the inconsistent EC state? 
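
To make that concrete, a small sketch of the state-less path (the import paths, the lookup helper and the chosen policy id are assumptions for illustration; the two converter overloads are the ones quoted above):
{code:java}
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.protocol.SystemErasureCodingPolicies;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ErasureCodingPolicyProto;
import org.apache.hadoop.hdfs.protocolPB.PBHelperClient;

public class EcStateDefaultSketch {
  public static void main(String[] args) {
    // Take a built-in policy (id 5 = RS-10-4-1024k in the listing above, which
    // is DISABLED on that cluster); the policy object itself carries no state.
    ErasureCodingPolicy policy = SystemErasureCodingPolicies.getByID((byte) 5);

    // The ErasureCodingPolicy overload never calls setState(), so the proto
    // falls back to its declared default ...
    ErasureCodingPolicyProto proto =
        PBHelperClient.convertErasureCodingPolicy(policy);

    // ... which is ENABLED per hdfs.proto, regardless of the real state kept
    // in ErasureCodingPolicyInfo.
    System.out.println(proto.getState());
  }
}
{code}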

> EC Policy defaults incorrectly to enabled in protobufs
> --
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional; it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by At

[jira] [Resolved] (HDFS-4411) Asserts are disabled in unit tests

2017-12-12 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-4411.
--
Resolution: Invalid

> Asserts are disabled in unit tests
> --
>
> Key: HDFS-4411
> URL: https://issues.apache.org/jira/browse/HDFS-4411
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>
> Unlike 23, asserts are disabled for tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288320#comment-16288320
 ] 

Jason Lowe edited comment on HDFS-12881 at 12/12/17 9:58 PM:
-

Thanks for updating the patch!

The patch looks much better, but it is modifying more places than intended.  
The changes in hadoop-common should be under HADOOP-15085 and the changes in 
YARN are already covered in YARN-7595.  Also, one minor nit: it's cleaner to 
call {{IOUtils.closeStream\(x)}} rather than {{IOUtils.cleanupWithLogger(null, 
x)}} when there's only one stream to close.  It would be nice if there were an 
{{IOUtils.closeStreams(...)}} method, but that's not part of this JIRA.



was (Author: jlowe):
Thanks for updating the patch!

The patch looks much better, but it is modifying more places than intended.  
The changes in hadoop-common should be under HADOOP-15085 and the changes in 
YARN are already covered in YARN-7595.  Also one minor nit, it's cleaner to 
call IOUtils.closeStream(x) rather than IOUtils.cleanupWithLogger(null, x) when 
there's only one stream to close.  Would be nice if there was an 
IOUtils.closeStreams(...) method, but that's not part of this JIRA.


> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch, 
> HDFS-12881.003.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288320#comment-16288320
 ] 

Jason Lowe commented on HDFS-12881:
---

Thanks for updating the patch!

The patch looks much better, but it is modifying more places than intended.  
The changes in hadoop-common should be under HADOOP-15085 and the changes in 
YARN are already covered in YARN-7595.  Also, one minor nit: it's cleaner to 
call IOUtils.closeStream(x) rather than IOUtils.cleanupWithLogger(null, x) when 
there's only one stream to close.  It would be nice if there were an 
IOUtils.closeStreams(...) method, but that's not part of this JIRA.
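
For illustration, the two shapes being contrasted (a sketch, not a specific HDFS call site):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public final class CloseStreamSketch {

  // Write path: close inside try-with-resources so a failed close() propagates
  // to the caller instead of being swallowed by a finally-block cleanup.
  static void write(FileSystem fs, Path path) throws IOException {
    try (FSDataOutputStream out = fs.create(path)) {
      out.writeBytes("payload");
    } // an IOException from close() is thrown here, just like write errors
  }

  // Best-effort cleanup of a single stream: IOUtils.closeStream(in) reads
  // cleaner than IOUtils.cleanupWithLogger(null, in) for the same behavior.
  static void read(FileSystem fs, Path path) throws IOException {
    FSDataInputStream in = null;
    try {
      in = fs.open(path);
      in.readByte();
    } finally {
      IOUtils.closeStream(in);
    }
  }

  private CloseStreamSketch() {
  }
}
{code}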


> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch, 
> HDFS-12881.003.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288265#comment-16288265
 ] 

Xiaoyu Yao commented on HDFS-12000:
---

Thanks [~vagarychen] for the update. Can you take a look at the test failures 
below from the latest Jenkins and confirm?

{code}
org.apache.hadoop.cblock.TestBufferManager
org.apache.hadoop.ozone.ozShell.TestOzoneShell.testPutKey
org.apache.hadoop.cblock.TestCBlockReadWrite
org.apache.hadoop.ozone.web.client.TestKeysRatis.testPutAndListKey
{code}

> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> HDFS-12000-HDFS-7240.009.patch, HDFS-12000-HDFS-7240.010.patch, 
> HDFS-12000-HDFS-7240.011.patch, OzoneVersion.001.pdf
>
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288226#comment-16288226
 ] 

Íñigo Goiri commented on HDFS-12895:


[~linyiqun], thanks for [^HDFS-12895.003.patch]; I tested in our cluster and I 
have a few comments:
* The javadoc in {{RouterPermissionChecker}} has a typo: ferdertion
* {{MountTableStoreImpl#updateMountTableEntry()}} is repeating 
{{request.getEntry()}}
* Could {{MountTableStoreImpl#removeMountTableEntry()}} still use the {{Query}} 
to do the get?
* {{MountTableStoreImpl#getMountTableEntries()}} could use a different if 
structure:
{code}
} else if (pc != null) {
  // do the READ permission check
  try {
pc.checkPermission(record, FsAction.READ);
  } catch(AccessControlException ignored) {
// Remove this mount table entry if it cannot
// be accessed by current user.
it.remove();
  }
}
{code}
* In {{MountTable#toString()}}, we should use {{append}} instead of {{+}}.
* {{RouterAdmin#printUsage}} should open/close the brackets.
* In {{RouterAdmin}}, the new breaks across {{if}} are a little too much (this 
is more a personal taste thing, ignore if so).
* The headers for the output in {{RouterAdmin}} have inconsistent capitalization 
(e.g., "Destinations", "owner").
* The entries that were created before show all the fields as null. Should we 
do some better default? In addition, they don't show for other users when they 
are default; if null, we should assume 755 or so (see the sketch below).
* When listing the entries, they show in no particular order; they should 
be sorted by source, I'd say.
* We should add these permissions to the Web UI.
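
For the null-permission point, a tiny sketch of the suggested fallback (the accessor and class names are hypothetical; only the 755 default comes from the comment above):
{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

public final class MountTableDefaults {
  // Entries created before ACL support have no permission persisted; fall back
  // to 755 as suggested above. The caller would pass the stored (possibly null)
  // permission of the mount table record.
  static FsPermission effectiveMode(FsPermission stored) {
    return stored != null ? stored : new FsPermission((short) 0755);
  }

  private MountTableDefaults() {
  }
}
{code}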

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> The mount table permissions (FsPermission), here we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add remove or update this mount table info.
> # EXECUTE permission: This won't be used.
> The add command of mount table will be extended like this
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination> 
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table, just execute the add 
> command again. This command not only adds a new mount table entry but also 
> updates the mount table once it finds the given mount table already exists. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288226#comment-16288226
 ] 

Íñigo Goiri edited comment on HDFS-12895 at 12/12/17 8:35 PM:
--

[~linyiqun], thanks for [^HDFS-12895.003.patch]; I tested in our cluster and I 
have a few comments:
* The javadoc in {{RouterPermissionChecker}} has a typo: ferdertion
* {{MountTableStoreImpl#updateMountTableEntry()}} is repeating 
{{request.getEntry()}}
* Could {{MountTableStoreImpl#removeMountTableEntry()}} still use the {{Query}} 
to do the get?
* {{MountTableStoreImpl#getMountTableEntries()}} could use a different if 
structure:
{code}
} else if (pc != null) {
  // do the READ permission check
  try {
pc.checkPermission(record, FsAction.READ);
  } catch(AccessControlException ignored) {
// Remove this mount table entry if it cannot
// be accessed by current user.
it.remove();
  }
}
{code}
* In {{MountTable#toString()}}, we should use {{append}} instead of {{+}}.
* {{RouterAdmin#printUsage}} should open/close the brackets.
* In {{RouterAdmin}}, the new breaks across {{if}} are a little too much (this 
is more a personal taste thing, ignore if so).
* The headers for the output in {{RouterAdmin}} have inconsistent capitalization 
(e.g., "Destinations", "owner").
* The entries that were created before show all the fields as null. Should we 
do some better default? In addition, they don't show for other users when they 
are default; if null, we should assume 755 or so.
* When listing the entries, they show in no particular order; they should 
be sorted by source, I'd say.
* We should add these permissions to the Web UI.
* Changing the order of the ORDER in the proto messes with previously created 
entries; I'd prefer to keep the order and add the permissions starting in field 
10 or so.


was (Author: elgoiri):
[~linyiqun], thanks for [^HDFS-12895.003.patch]; I tested in our cluster and I 
have a few comments:
* The javadoc in {{RouterPermissionChecker}} has a typo: ferdertion
* {{MountTableStoreImpl#updateMountTableEntry()}} is repeating 
{{request.getEntry()}}
* Could {{MountTableStoreImpl#removeMountTableEntry()}} still use the {{Query}} 
to do the get?
* {{MountTableStoreImpl#getMountTableEntries()}} could use a different if 
structure:
{code}
} else if (pc != null) {
  // do the READ permission check
  try {
pc.checkPermission(record, FsAction.READ);
  } catch(AccessControlException ignored) {
// Remove this mount table entry if it cannot
// be accessed by current user.
it.remove();
  }
}
{code}
* In {{MountTable#toString()}}, we should use {{append}} instead of {{+}}.
* {{RouterAdmin#printUsage}} should open/close the brackets.
* In {{RouterAdmin}}, the new breaks across {{if}} are a little too much (this 
is more a personal taste thing, ignore if so).
* The headers for the output in {{RouterAdmin}} have inconsitent capitalization 
(e.g., "Destinations", "owner").
* The entries that where created before show all the fields as null. Should we 
do some better default? In addition, they don't show for other users when they 
are default, if null, we should assume 755 or so.
* When listing the entries, they show without a particular order; they should 
be sorted by source I'd say.
* We should add these permissions to the Web UI.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> The mount table permissions (FsPermission), here we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add remove or update this mount table info.
> # EXECUTE permission: This won't be used.
> The add command of mount table will be extended like this
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination> 
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table, just execute the add 
> command again. This command not only adds a new mou

[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288198#comment-16288198
 ] 

genericqa commented on HDFS-12910:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 13 unchanged - 0 fixed = 15 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12910 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901727/HDFS-12910.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e01f5536743d 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8bb83a8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22367/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.

[jira] [Commented] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-12-12 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288193#comment-16288193
 ] 

Chen Liang commented on HDFS-12751:
---

Thanks [~nandakumar131] for the clarification! I totally missed {{usedBytes}}. 
Then this makes sense. I will upload a patch soon.

> Ozone: SCM: update container allocated size to container db for all the open 
> containers in ContainerStateManager#close
> --
>
> Key: HDFS-12751
> URL: https://issues.apache.org/jira/browse/HDFS-12751
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Chen Liang
>
> Container allocated size is maintained in memory by 
> {{ContainerStateManager}}, this has to be updated in container db when we 
> shutdown SCM. {{ContainerStateManager#close}} will be called during SCM 
> shutdown, so updating allocated size for all the open containers should be 
> done here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12918) EC Policy defaults incorrectly to enabled in protobufs

2017-12-12 Thread Zach Amsden (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288158#comment-16288158
 ] 

Zach Amsden commented on HDFS-12918:


[~manojg] thanks, we encountered this during testing.  I wasn't planning on 
creating a patch, as we're not blocked, but it looks like a simple enough fix.

> EC Policy defaults incorrectly to enabled in protobufs
> --
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional; it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12000:
--
Attachment: HDFS-12000-HDFS-7240.011.patch

Fixed the checkstyle warning. The javadoc and the ASF license issues seem 
unrelated, and the failed tests also seem unrelated. All passed locally except for 
the two consistently failing tests {{TestUnbuffer}} and {{TestBalancerRPCDelay}}.

> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> HDFS-12000-HDFS-7240.009.patch, HDFS-12000-HDFS-7240.010.patch, 
> HDFS-12000-HDFS-7240.011.patch, OzoneVersion.001.pdf
>
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12918) EC Policy defaults incorrectly to enabled in protobufs

2017-12-12 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy reassigned HDFS-12918:
-

Assignee: Manoj Govindassamy

I can take a look at this if you haven't already started to work on the patch. 
Please let me know.

> EC Policy defaults incorrectly to enabled in protobufs
> --
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zach Amsden
>Assignee: Manoj Govindassamy
>Priority: Critical
>
> According to documentation and code comments, the default setting for erasure 
> coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
>  DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the 
> protobuf to enabled:
> {code:java}
>  message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
>  + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
>   }
> {code}
> This means the parameter can't actually be optional; it must always be 
> included, and existing serialized data without this optional field will be 
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will require existing HDFS 
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-12 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12881:
--
Status: Patch Available  (was: Open)

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch, 
> HDFS-12881.003.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-12 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12881:
--
Status: Open  (was: Patch Available)

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch, 
> HDFS-12881.003.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method, which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so that an exception thrown during close() is 
> properly propagated, just as exceptions during write operations are.
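
And a sketch of the second alternative mentioned above (explicitly closing inside the try block), again with invented names rather than anything from the patches; the logger-based cleanup is kept only as a best-effort fallback on the error path:

{code:java}
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ExplicitCloseSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ExplicitCloseSketch.class);

  // Sketch only: close() is called inside the try so its IOException
  // propagates; cleanupWithLogger only runs when something already failed.
  static void write(OutputStream out, byte[] data) throws IOException {
    boolean succeeded = false;
    try {
      out.write(data);
      out.close();          // a failure here now reaches the caller
      succeeded = true;
    } finally {
      if (!succeeded) {
        IOUtils.cleanupWithLogger(LOG, out);  // best-effort cleanup on error
      }
    }
  }
}
{code}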






[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288121#comment-16288121
 ] 

genericqa commented on HDFS-12574:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 27s{color} | {color:orange} root: The patch generated 13 new + 698 unchanged 
- 1 fixed = 711 total (was 699) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
1s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 49s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}264m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(Path, int) may fail to 
close stream  At WebHdfsFileSystem.java:close stream  At 
WebHdfsFileSystem.java:[line 1433] |
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsContentLength |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailur

[jira] [Created] (HDFS-12918) EC Policy defaults incorrectly to enabled in protobufs

2017-12-12 Thread Zach Amsden (JIRA)
Zach Amsden created HDFS-12918:
--

 Summary: EC Policy defaults incorrectly to enabled in protobufs
 Key: HDFS-12918
 URL: https://issues.apache.org/jira/browse/HDFS-12918
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zach Amsden
Priority: Critical


According to documentation and code comments, the default setting for erasure 
coding policy is disabled:

{code:java}
/** Policy is disabled. It's policy default state. */
 DISABLED(1),
{code}

However, HDFS-12258 appears to have incorrectly set the policy state in the 
protobuf to enabled:

{code:java}
 message ErasureCodingPolicyProto {
optional string name = 1;
optional ECSchemaProto schema = 2;
optional uint32 cellSize = 3;
required uint32 id = 4; // Actually a byte - only 8 bits used
 + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
  }
{code}

This means the parameter can't actually be optional; it must always be
included, and existing serialized data without this optional field will be
incorrectly interpreted as having erasure coding enabled.

This unnecessarily breaks compatibility and will force existing HDFS
installations that store metadata in protobufs to be reformatted.

It looks like a simple mistake that was overlooked in code review.
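
To make the compatibility concern concrete, here is a hedged sketch of how the reader side could avoid trusting the proto-level default; the method names (toPolicyState, convertState) are illustrative, not the actual HDFS converter code:

{code:java}
// Sketch only: proto2 generates hasState() for an optional field, so a
// converter can distinguish "field absent in old data" from an explicit
// ENABLED value instead of relying on [default = ENABLED].
static ErasureCodingPolicyState toPolicyState(ErasureCodingPolicyProto proto) {
  if (!proto.hasState()) {
    // Data serialized before the field existed: use the documented default.
    return ErasureCodingPolicyState.DISABLED;
  }
  return convertState(proto.getState());  // convertState(...) is hypothetical
}
{code}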






[jira] [Updated] (HDFS-12626) Ozone : delete open key entries that will no longer be closed

2017-12-12 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12626:
--
Attachment: HDFS-12626-HDFS-7240.006.patch

Thanks [~xyao] for the review! Posted the v006 patch to fix the test.

> Ozone : delete open key entries that will no longer be closed
> -
>
> Key: HDFS-12626
> URL: https://issues.apache.org/jira/browse/HDFS-12626
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12626-HDFS-7240.001.patch, 
> HDFS-12626-HDFS-7240.002.patch, HDFS-12626-HDFS-7240.003.patch, 
> HDFS-12626-HDFS-7240.004.patch, HDFS-12626-HDFS-7240.005.patch, 
> HDFS-12626-HDFS-7240.006.patch
>
>
> HDFS-12543 introduced the notion of an "open key": when a key is opened, an 
> open key entry gets persisted, and only after the client calls close will this 
> entry be made visible. One issue is that if the client does not call close 
> (e.g. it failed), then that open key entry will never be deleted from the 
> metadata. This JIRA tracks this issue.
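
As a rough sketch of the kind of cleanup this JIRA is after (all names below, such as OpenKeyEntry and the expiry policy, are hypothetical stand-ins rather than the actual Ozone/KSM metadata API or the attached patches): a periodic task could drop open-key entries whose client never called close within some expiry window.

{code:java}
import java.util.Map;

// Hypothetical sketch only: "OpenKeyEntry" and the expiry window are
// illustrative, not the real KSM metadata structures.
class OpenKeyCleaner {
  interface OpenKeyEntry {
    long getCreationTimeMs();
  }

  // Remove entries that were opened but never closed within expiryMs.
  static void expireAbandonedOpenKeys(Map<String, OpenKeyEntry> openKeyTable,
                                      long nowMs, long expiryMs) {
    openKeyTable.entrySet().removeIf(
        e -> nowMs - e.getValue().getCreationTimeMs() > expiryMs);
  }
}
{code}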






[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-12 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288000#comment-16288000
 ] 

genericqa commented on HDFS-12910:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 13 unchanged - 0 fixed = 15 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}192m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12910 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901691/HDFS-12910.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4cc32a00e51b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8bb83a8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22365/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyl

[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-12 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287940#comment-16287940
 ] 

Stephen O'Donnell commented on HDFS-12910:
--

One minor change from patch 003 to 004 - I removed the java.net prefix from 
'java.net.BindException' in the catch clause, as I have imported 
java.net.BindException, making the prefix unnecessary.

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch, HDFS-12910.004.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (eg the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException exception and log out the problem 
> address:port and then re-throw the exception to make the problem more clear.
> I will upload a patch for this.






[jira] [Updated] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-12 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-12910:
-
Attachment: HDFS-12910.004.patch

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch, HDFS-12910.004.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (eg the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException exception and log out the problem 
> address:port and then re-throw the exception to make the problem more clear.
> I will upload a patch for this.






[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-12 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287892#comment-16287892
 ] 

Erik Krogen commented on HDFS-12818:


Looks like the test failures on the last run were infrastructure-related (OOM 
due to thread issues), and all of them passed successfully on my local machine. 
[~shv], can you review?

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.






[jira] [Commented] (HDFS-12626) Ozone : delete open key entries that will no longer be closed

2017-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287887#comment-16287887
 ] 

Xiaoyu Yao commented on HDFS-12626:
---

Thanks [~vagarychen] for the update. It looks good to me now. +1 after the 
following unit test failure is fixed.

{{org.apache.hadoop.ozone.TestOzoneConfigurationFields.testCompareConfigurationClassAgainstXml}}



> Ozone : delete open key entries that will no longer be closed
> -
>
> Key: HDFS-12626
> URL: https://issues.apache.org/jira/browse/HDFS-12626
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12626-HDFS-7240.001.patch, 
> HDFS-12626-HDFS-7240.002.patch, HDFS-12626-HDFS-7240.003.patch, 
> HDFS-12626-HDFS-7240.004.patch, HDFS-12626-HDFS-7240.005.patch
>
>
> HDFS-12543 introduced the notion of an "open key": when a key is opened, an 
> open key entry gets persisted, and only after the client calls close will this 
> entry be made visible. One issue is that if the client does not call close 
> (e.g. it failed), then that open key entry will never be deleted from the 
> metadata. This JIRA tracks this issue.






[jira] [Commented] (HDFS-12891) Do not invalidate blocks if toInvalidate is empty

2017-12-12 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287773#comment-16287773
 ] 

Sean Mackrory commented on HDFS-12891:
--

I see I'm late to the party, but I started testing this the other day and can 
confirm I was seeing approximately 1 in 30 runs fail and that this fixes it. +1 
to the patch as committed.

> Do not invalidate blocks if toInvalidate is empty
> -
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12891.01.patch, HDFS-12891.02.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}
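
A minimal sketch of the guard the summary describes (the names only approximate the block-management code; this is not the committed patch): skip the invalidation call entirely when nothing was selected, which is what trips the assertion in the trace above.

{code:java}
import java.util.List;

// Sketch only: returning early for an empty list avoids calling
// addBlocksToBeInvalidated with nothing to invalidate.
class InvalidateGuardSketch {
  interface DatanodeLike {
    void addBlocksToBeInvalidated(List<?> blocks);
  }

  static int invalidateWorkForOneNode(List<?> toInvalidate, DatanodeLike dn) {
    if (toInvalidate == null || toInvalidate.isEmpty()) {
      return 0;  // nothing to invalidate for this node in this round
    }
    dn.addBlocksToBeInvalidated(toInvalidate);
    return toInvalidate.size();
  }
}
{code}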






[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-12 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287722#comment-16287722
 ] 

Stephen O'Donnell commented on HDFS-12910:
--

I think I have addressed the review comments above from [~xiaochen]. I took the 
good idea from [~nandakumar131] to alter the original exception, but kept it 
simple with code like the following:

{code}
try {
  ss.bind(streamingAddr, backlogLength);
} catch (java.net.BindException e) {
  BindException newBe = new BindException(e.getMessage() + " " + 
streamingAddr);
  newBe.initCause(e.getCause());
  newBe.setStackTrace(e.getStackTrace());
  throw newBe;
}
{code}

I.e., the old bind code stays as it was, and I removed the reflection - I think 
this is the simplest way to address the requirement here. For the v003 patch, I 
have repeated the pattern above in both try-catch blocks. I could factor this 
out into one reusable method `private BindException 
appendMessageToBindException(BindException e, String msg)` to avoid the 
duplication and call it in each exception handler. It does not save much, but I 
am happy to make the change if others want to see it that way, e.g. something 
like:

{code}
try {
  ss.bind(streamingAddr, backlogLength);
} catch (java.net.BindException e) {
  throw appendMessageToBindException(e, streamingAddr.toString());
}
{code}
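
For reference, the factored-out helper mentioned above would presumably look something like the following sketch; it follows the signature quoted in the comment and is not part of the attached patches:

{code:java}
// Sketch of the described helper: rebuild the BindException with the address
// appended so the failing port is visible, preserving cause and stack trace.
private BindException appendMessageToBindException(BindException e,
                                                   String msg) {
  BindException newBe = new BindException(e.getMessage() + " " + msg);
  newBe.initCause(e.getCause());
  newBe.setStackTrace(e.getStackTrace());
  return newBe;
}
{code}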


> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (eg the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException exception and log out the problem 
> address:port and then re-throw the exception to make the problem more clear.
> I will upload a patch for this.

[jira] [Updated] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-12 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-12910:
-
Attachment: HDFS-12910.003.patch

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (eg the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException exception and log out the problem 
> address:port and then re-throw the exception to make the problem more clear.
> I will upload a patch for this.






[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-12 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12574:
--
Attachment: HDFS-12574.003.patch

Attaching another patch which includes the following:
1. Added a test case verifying that the datanode decrypts in the case of an old client.
2. Fixed many of the test failures from the previous Jenkins run.
3. Addressed checkstyle and findbugs warnings.

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, 
> HDFS-12574.003.patch
>
>






