[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-08 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284638#comment-16284638
 ] 

Ajay Kumar commented on HDFS-12881:
---

Updating patch for output streams only.

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch
>
>
> There are a few places in the HDFS code that close an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method, which 
> can lead to partial or corrupted output without a corresponding exception 
> being thrown. The code should either use try-with-resources or explicitly 
> close the stream within the try block, so that an exception thrown during 
> close() is propagated just as exceptions during write operations are.
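A minimal sketch of the try-with-resources alternative ({{createOutputStream()}} 
is a placeholder for however the affected code obtains the stream, not an 
actual method in the patch):
{code}
  // close() runs automatically at the end of the block, and an IOException
  // thrown by close() propagates to the caller instead of only being logged.
  try (OutputStream outStream = createOutputStream()) {
    // ...write to outStream...
  }
{code}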






[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12881:
--
Attachment: HDFS-12881.002.patch

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch






[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12881:
--
Attachment: (was: HDFS-12881.002patch)

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch






[jira] [Updated] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12881:
--
Attachment: HDFS-12881.002patch

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch, HDFS-12881.002patch






[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284631#comment-16284631
 ] 

Xiao Chen commented on HDFS-12910:
--

When you think the patch is ready, please feel free to hit 'Submit Patch' to 
trigger a pre-commit run. Thanks.

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch
>
>
> When running a secure datanode, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports, causing the DN to fail 
> to start (e.g. the NFS service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing because it 
> does not tell you which port it failed to bind to. For example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException, log the problem address:port, and then 
> re-throw the exception to make the problem clearer.
> I will upload a patch for this.
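A hypothetical sketch of the suggested catch-and-rethrow ({{streamingAddr}} and 
the surrounding setup are illustrative assumptions, not the actual 
SecureDataNodeStarter code):
{code}
ServerSocket ss = new ServerSocket();
try {
  ss.bind(streamingAddr, 0);  // streamingAddr: the DN's configured address:port
} catch (BindException e) {
  // Include the address:port in the message, then re-throw keeping the cause.
  BindException newBe = new BindException(
      "Problem binding to " + streamingAddr + " : " + e.getMessage());
  newBe.initCause(e);
  throw newBe;
}
{code}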






[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284630#comment-16284630
 ] 

Xiao Chen commented on HDFS-12910:
--

Thanks for working on this [~sodonnell]! I have added you to the contributors 
list (so you can assign JIRAs to yourself now) and assigned this to you.

Agree this would be very useful for supportability. A few small comments:
- We usually try to log messages to the DN log. I guess this one is special 
because it happens during DN start. In this case, can we still log to the DN 
log additionally? I think having the ports in the log would be helpful too, in 
case stderr isn't directly on someone's console and they look at the log 
first. (We could construct a string, and use it for both the print and the log.)
- We can just {{throw e}}, instead of {{throw (e)}}.

A unit test is not required for logging and supportability changes. The output 
you pasted looks good to me.
If you're interested, I think a unit test can technically be done in 
{{TestStartSecureDataNode}}: create a test similar to {{testSecureNameNode}}, 
manually bind to the port first, then start the minicluster and try-catch the 
IOException. Verify that the IOE's message contains the port you want to see, 
using {{GenericTestUtils#assertExceptionContains}} (see the sketch below).
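A minimal, hypothetical sketch of that test (the port, {{conf}}, and cluster 
setup are illustrative assumptions, not the actual patch):
{code}
// Occupy the DN's default streaming port first, then expect DN startup to fail.
ServerSocket blocker = new ServerSocket();
blocker.bind(new InetSocketAddress("localhost", 1004));
try {
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
  cluster.shutdown();
  fail("Expected an IOException because port 1004 is already in use");
} catch (IOException ioe) {
  // The improved message should name the address:port that failed to bind.
  GenericTestUtils.assertExceptionContains("1004", ioe);
} finally {
  blocker.close();
}
{code}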

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch




[jira] [Assigned] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HDFS-12910:


Assignee: Stephen O'Donnell

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch






[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284586#comment-16284586
 ] 

genericqa commented on HDFS-12818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 52 unchanged - 8 fixed = 52 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS 
|
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901334/HDFS-12818.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 920ac91db25c 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 670e8d4 |
| maven | version: Apache

[jira] [Commented] (HDFS-12911) [SPS]: Fix review comments from discussions in HDFS-10285

2017-12-08 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284572#comment-16284572
 ] 

Anu Engineer commented on HDFS-12911:
-

bq. 1. The lock should not be kept while executing the placement policy.

If we are pulling this out of the NN, wouldn't this be a no-op?

> [SPS]: Fix review comments from discussions in HDFS-10285
> -
>
> Key: HDFS-12911
> URL: https://issues.apache.org/jira/browse/HDFS-12911
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R






[jira] [Commented] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284545#comment-16284545
 ] 

Virajith Jalaparti commented on HDFS-12912:
---

The attached patch fixes the mentioned issues.

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from a leveldb-based alias map 
> created by {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these alias maps must be specified using local 
> paths, and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues.






[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Attachment: HDFS-12912-HDFS-9806.001.patch

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch






[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Attachment: (was: HDFS-12912-HDFS-9806.001.patch)

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti






[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Attachment: HDFS-12912-HDFS-9806.001.patch

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch






[jira] [Created] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-08 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12912:
-

 Summary: [READ] Fix configuration and implementation of 
LevelDB-based alias maps
 Key: HDFS-12912
 URL: https://issues.apache.org/jira/browse/HDFS-12912
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti


{{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
directory is absent.
{{InMemoryAliasMap}} does not support reading from a leveldb-based alias map 
created by {{LevelDBFileRegionAliasMap}} with the block id configured. 
Further, the configuration for these alias maps must be specified using local 
paths, and not as URIs as currently shown in the documentation 
({{HdfsProvidedStorage.md}}).

This JIRA is to fix these issues.
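A minimal sketch of the missing-directory fix, assuming the leveldbjni API used 
elsewhere in Hadoop ({{levelDBPath}} and the surrounding names are illustrative 
assumptions):
{code}
// Create the store directory if it is absent, then open the leveldb store.
File dbDir = new File(levelDBPath);
if (!dbDir.exists() && !dbDir.mkdirs()) {
  throw new IOException("Unable to create " + dbDir);
}
Options options = new Options();
options.createIfMissing(true);
DB db = JniDBFactory.factory.open(dbDir, options);
{code}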






[jira] [Commented] (HDFS-12891) TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError

2017-12-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284498#comment-16284498
 ] 

Wei-Chiu Chuang commented on HDFS-12891:


[~zvenczel] thanks for working on the patch.
The fix looks good to me. Would you please not change FSNamesystem.setOwner()? 
That change seems unrelated to the test failure. +1 after that.

> TestDNFencingWithReplication.testFencingStress: java.lang.AssertionError
> 
>
> Key: HDFS-12891
> URL: https://issues.apache.org/jira/browse/HDFS-12891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>  Labels: flaky-test
> Attachments: HDFS-12891.01.patch
>
>
> {code:java}
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:147)
> :
> :
> 2017-10-19 21:39:40,068 [main] INFO  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
> 2017-10-19 21:39:40,068 [main] FATAL hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
> 1: java.lang.AssertionError
>   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4437)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlocksToBeInvalidated(DatanodeDescriptor.java:641)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.InvalidateBlocks.invalidateWork(InvalidateBlocks.java:299)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.invalidateWorkForOneNode(BlockManager.java:4246)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeInvalidateWork(BlockManager.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4418)
>   ... 1 more
> {code}






[jira] [Updated] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-08 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12818:
---
Attachment: HDFS-12818.004.patch

{quote}
I had to make a minor modification to DataNode to support this change. If a 
DataStorage is passed in to the SimulatedFSDataset factory, that storage's 
volume count will be used. Previously, DataNode would pass in a DataStorage to 
the factory, but it does not call recoverTransitionRead() on that storage if 
the FSDataset is simulated, so the storage that gets passed in always has 0 
volumes. I added a check to just pass a null storage down if the FSDataset is 
simulated. This works fine but I am also open to other suggestions on the best 
way to handle this.
{quote}

Hm, it turns out some tests don't like how I changed {{DataNode}}. I came up 
with a cleaner solution to the originally described problem, one that doesn't 
touch any code outside of {{SimulatedFSDataset}}. Adding the v004 patch.

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications for the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.






[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284473#comment-16284473
 ] 

genericqa commented on HDFS-12000:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 29s{color} | 
{color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 29s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12000 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901325/HDFS-12000-HDFS-7240.009.patch
 |
| Optional Tests |  asflicen

[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284457#comment-16284457
 ] 

genericqa commented on HDFS-12818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 206 unchanged - 8 fixed = 206 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.datanode.TestDataNodeInitStorage |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901304/HDFS-12818.003.patch |
| Optiona

[jira] [Created] (HDFS-12911) [SPS]: Fix review comments from discussions in HDFS-10285

2017-12-08 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-12911:
--

 Summary: [SPS]: Fix review comments from discussions in HDFS-10285
 Key: HDFS-12911
 URL: https://issues.apache.org/jira/browse/HDFS-12911
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Uma Maheswara Rao G
Assignee: Rakesh R


This is the JIRA for tracking the possible improvements and issues discussed in 
the main JIRA.

So far, from Daryn:
  1. The lock should not be kept while executing the placement policy.
  2. While starting up the NN, the SPS XAttr checks happen even if the feature 
is disabled. This could potentially impact startup speed.

I am adding one more possible improvement, to reduce XAttr objects 
significantly: the SPS XAttr is a constant object, so we can create one 
deduplicated XAttr object once, statically, and reuse the same object reference 
whenever the SPS XAttr needs to be added to an inode. The additional bytes 
required for storing the SPS XAttr would then be the same as a single object 
reference (i.e. 4 bytes on 32-bit). So the XAttr overhead should come down 
significantly, IMO. Let's explore the feasibility of this option.

The XAttr list Feature will not be specially created for SPS; that list would 
already have been created by SetStoragePolicy on the same directory, so there 
is no extra Feature creation because of SPS alone.
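A minimal sketch of the proposed deduplication (the helper and constant names 
are assumptions for illustration, not the actual patch):
{code}
// Build the SPS XAttr once; every inode then stores the same reference, so a
// marked inode costs one object reference instead of a fresh XAttr object.
private static final XAttr SPS_XATTR =
    XAttrHelper.buildXAttr(HdfsServerConstants.XATTR_SATISFY_STORAGE_POLICY);
{code}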






[jira] [Commented] (HDFS-12882) Support full open(PathHandle) contract in HDFS

2017-12-08 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284435#comment-16284435
 ] 

Íñigo Goiri commented on HDFS-12882:


The {{needBlockToken}} is clearer now.
Minor comment: some of the calls take the buffer size while it is actually 
ignored; not sure there is a point in keeping that.
Other than this, it looks good.

> Support full open(PathHandle) contract in HDFS
> --
>
> Key: HDFS-12882
> URL: https://issues.apache.org/jira/browse/HDFS-12882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HDFS-12882.00.patch, HDFS-12882.00.salient.txt, 
> HDFS-12882.01.patch, HDFS-12882.02.patch, HDFS-12882.03.patch, 
> HDFS-12882.04.patch, HDFS-12882.05.patch
>
>
> HDFS-7878 added support for {{open(PathHandle)}}, but it only partially 
> implemented the semantics specified in the contract (i.e., open-by-inodeID). 
> HDFS should implement all permutations of the default options for 
> {{PathHandle}}.






[jira] [Commented] (HDFS-12626) Ozone : delete open key entries that will no longer be closed

2017-12-08 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284415#comment-16284415
 ] 

Xiaoyu Yao commented on HDFS-12626:
---

Thanks [~vagarychen] for working on this. The patch looks pretty good to me. I 
just have a few minor comments:

OzoneConfigKeys.java
Line 139: suggest having two separate keys, one for the check interval of the 
periodic background service thread:
ozone.open.key.deleting.service.interval.seconds => 
ozone.open.key.cleanup.service.interval.seconds

The other for the open-key expire/stale threshold that will be used in 
KSMMetadataManagerImpl.java, and document them separately:

ozone.open.key.expire.threshold.seconds

(A sketch of the two keys follows below.)

KeyManager.java
Line 154: NIT: getHangingOpenKeys -> getExpiredOpenKeys

Line 164: NIT: deleteExpiredOpenKey

KeyManagerImpl.java
Line 201: Do we update the keyInfo with the modification time when the block is 
written to the container as well?

OpenKeyCleanupService.java
Line 81-88: some of the info logs can be changed into debug logs.

TestKeySpaceManager.java
Line 1094-1095: can we wrap the OutputStream with try/finally to ensure it is 
closed?
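A hypothetical sketch of the two keys as OzoneConfigKeys.java constants (key 
names are from the comment above; the default values are illustrative 
assumptions):
{code}
public static final String OZONE_OPEN_KEY_CLEANUP_SERVICE_INTERVAL_SECONDS =
    "ozone.open.key.cleanup.service.interval.seconds";
// How often the background cleanup service runs (illustrative default).
public static final int
    OZONE_OPEN_KEY_CLEANUP_SERVICE_INTERVAL_SECONDS_DEFAULT = 600;

public static final String OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS =
    "ozone.open.key.expire.threshold.seconds";
// How long a key may stay open before it is considered expired/stale.
public static final int OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS_DEFAULT = 86400;
{code}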

> Ozone : delete open key entries that will no longer be closed
> -
>
> Key: HDFS-12626
> URL: https://issues.apache.org/jira/browse/HDFS-12626
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12626-HDFS-7240.001.patch, 
> HDFS-12626-HDFS-7240.002.patch, HDFS-12626-HDFS-7240.003.patch, 
> HDFS-12626-HDFS-7240.004.patch
>
>
> HDFS-12543 introduced the notion of an "open key": when a key is opened, an 
> open-key entry gets persisted, and only after the client calls close is this 
> entry made visible. One issue is that if the client never calls close 
> (e.g. because it failed), that open-key entry will never be deleted from the 
> metadata. This JIRA tracks this issue.






[jira] [Updated] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12000:
--
Attachment: HDFS-12000-HDFS-7240.009.patch

Thanks [~xyao] for the review and the comments! Updated the v009 patch.

bq. I think it will work when we only have one version, i.e., latest version. 
Correct me if I'm wrong, say we have K1 (B1V1, B2V2), with 
getBlocksLatestVersionOnly, are we going to get B2V2 instead of (B1V1, B2V2)?

Exactly right: for now, getBlocksLatestVersionOnly() will only return the 
blocks from the most recent version, B2V2 only in your case. In the next few 
steps, my plan for multiple versions is to add APIs that specify which 
historical version to read. For example, with an API call like read(K1, 
version=1), it would ignore B2V2 and only look at B1V1.

bq. Line 173: NIT: the comment is not valid any more

Fixed

Additionally, I have one major open question right now that I'm looking for 
advice on: I believe the dominating majority of reads will read only the most 
recent version, and in that case always iterating over all the blocks, 
including old versions, can become inefficient. Any comments on this are 
appreciated.

bq. Line 268: should this openVersion be part of the request so that the client 
can request opening a certain version? It is OK to assume opening the latest 
version for now. Maybe add a TODO for a next JIRA on this feature.

Yes, as mentioned above, in a follow-up JIRA there will be an API that allows 
requesting a specific version, old or recent. It will likely use this field. 
Added a TODO note to the comments; will follow up in the next JIRAs.

bq. Line 111: Is there a reason why the KsmKeyLocationInfo#Builder does not 
support setCreateVersion? Do we expect it to be set directly on the 
KsmKeyLocationInfo afterwards?

I found that when a block is allocated, it gets allocated first and then the 
version is set, based on whether it is appended to the current version or added 
as a new version. I think that conceptually a block itself does not have the 
notion of a version. So yes, I leave the block builder not setting the version; 
the caller should set it after creating the block.

bq. Line 22: NIT: unused imports

Fixed

bq. Line 58: if the version starts from 0, the special handling for 
currentVersion==-1 is not needed. Can you confirm?

You are right. Thanks for the catch. Fixed

bq. Line 30: can the open version be committed without close, something like 
hsync, to populate the write without closing the file?

We don't need the open version, as it is only used when opening a key to 
disambiguate preallocated blocks.

More specifically, when a key is opened, depending on whether a size is 
specified, KeyManager *may or may not* have pre-allocated some blocks and 
returned them in the open session. If pre-allocation does happen, the returned 
latest version is the version to write. But if pre-allocation did not happen, 
the returned latest version is actually an old, already-committed version that 
should not be written to. The only purpose of this open-version field is to 
distinguish these two cases. The value gets checked once on the client, when 
loading pre-allocated blocks, and is never used afterwards. So I think we don't 
need to commit it.
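A minimal sketch of the versioned-read idea described above (the method and 
accessor names are assumptions for illustration, not the actual patch):
{code}
// Return the blocks visible at a given version: every block whose
// createVersion is at or below the requested version.
static List<KsmKeyLocationInfo> blocksAtVersion(
    List<KsmKeyLocationInfo> allBlocks, long version) {
  List<KsmKeyLocationInfo> visible = new ArrayList<>();
  for (KsmKeyLocationInfo block : allBlocks) {
    if (block.getCreateVersion() <= version) {
      visible.add(block);
    }
  }
  return visible;
}
{code}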


> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> HDFS-12000-HDFS-7240.009.patch, OzoneVersion.001.pdf
>
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.






[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-08 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284397#comment-16284397
 ] 

Ajay Kumar commented on HDFS-12881:
---

[~jlowe], thanks for the review. Will update the patch for output streams only.

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12893:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284362#comment-16284362
 ] 

Virajith Jalaparti commented on HDFS-12893:
---

The failed tests are not related to this patch. Will commit patch v4 to the 
feature branch soon.

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12855) Fsck violates namesystem locking

2017-12-08 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy reassigned HDFS-12855:
-

Assignee: Manoj Govindassamy

> Fsck violates namesystem locking 
> -
>
> Key: HDFS-12855
> URL: https://issues.apache.org/jira/browse/HDFS-12855
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Konstantin Shvachko
>Assignee: Manoj Govindassamy
>
> {{NamenodeFsck}} access {{FSNamesystem}} structures, such as INodes, 
> BlockInfo without holding a lock. See e.g. {{NamenodeFsck.blockIdCK()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284350#comment-16284350
 ] 

Daryn Sharp commented on HDFS-12907:


Minor things:
# The case statements should be indented within the switch block.  It's like 
writing an if or while w/o indenting the body.
# Should change the test {{testAdminAccessOnly}} to something like 
{{testUserReadAccessOnly}} to reflect what it's now testing.
# There's no need to try-catch just to call Assert.fail.  All it does is 
swallow the exception that caused the unexpected test failure.  Get rid of the 
try blocks and just let the exception itself fail the test.
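
A hedged before/after sketch of point 3 (the call and path are illustrative; the 
test method is assumed to declare {{throws Exception}}):

{code}
// Before: the catch swallows the real cause and reports a bare failure.
try {
  fs.getXAttrs(rawPath);
} catch (IOException e) {
  Assert.fail("unexpected exception");
}

// After: let the exception itself fail the test with its full stack trace.
fs.getXAttrs(rawPath);
{code}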

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284319#comment-16284319
 ] 

genericqa commented on HDFS-12893:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
32s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 42s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
2s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901287/HDFS-12893-HDFS-9806.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 05eaeda4edf1 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_6

[jira] [Resolved] (HDFS-10262) Change HdfsFileStatus::fileId to an opaque identifier

2017-12-08 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas resolved HDFS-10262.
--
Resolution: Duplicate

Fixed in HDFS-7878

> Change HdfsFileStatus::fileId to an opaque identifier
> -
>
> Key: HDFS-10262
> URL: https://issues.apache.org/jira/browse/HDFS-10262
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, webhdfs
>Reporter: Chris Douglas
>
> HDFS exposes the INode ID as a long via HdfsFileStatus::getFileId. Since 
> equality is the only valid client operation (sequential/monotonically 
> increasing ids are not guaranteed in any spec; leases do not rely on any 
> other property), this identifier can be opaque instead of assigning it a 
> primitive type in HdfsFileStatus.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284261#comment-16284261
 ] 

genericqa commented on HDFS-12574:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 11s{color} | {color:orange} root: The patch generated 16 new + 623 unchanged 
- 1 fixed = 639 total (was 624) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
36s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 44s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}227m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  org.apache.hadoop.fs.FileEncryptionInfo defines equals and uses 
Object.hashCode()  At FileEncryptionInfo.java:Object.hashCode()  At 
FileEncryptionInfo.java:[lines 153-178] |
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  org.apache.hadoop.h

[jira] [Updated] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-08 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12818:
---
Attachment: HDFS-12818.003.patch

I stand corrected; the checkstyle warning was legitimate. I also checked the test 
failures; most were due to OOM errors. {{TestDataNodeVolumeFailure}} and 
{{TestDNFencing}} seemed potentially legitimate but passed fine locally. 
{{TestDataNodeVolumeMetrics}}, however, was a legitimate failure, and the v003 patch 
also addresses this issue.

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-08 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284051#comment-16284051
 ] 

Rushabh S Shah edited comment on HDFS-12574 at 12/8/17 8:39 PM:


Attaching a patch for jenkins to run and point out silly mistakes/checkstyle 
issues.

{quote}
This is bad: CryptoProtocolVersion.values(). The values method always allocates 
a new garbage array for every invocation. I forget where else I made a change 
to have a static array assignment of the values and created a static valueOf to 
return the item from the static array. I can't find it, looks like it might 
have been undone... Note that protobufs actually do this.
{quote}
Addressed in v2 of patch.
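
For reference, a hedged sketch of the caching pattern being described (the enum is 
real; the field and accessor names are assumptions):

{code}
// Hedged sketch: cache values() once so lookups stop allocating a fresh
// array on every invocation.
private static final CryptoProtocolVersion[] CRYPTO_VERSIONS =
    CryptoProtocolVersion.values();

public static CryptoProtocolVersion[] supportedVersions() {
  return CRYPTO_VERSIONS; // callers must not mutate the shared array
}
{code}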

{quote}
WebHdfsFileSystem#open contains a copy-n-paste of the same code in 
DFSClient#createWrappedInputStream. CryptoInputStream can work with any general 
stream so let's make a general wrapping method. Maybe create an interface 
something like EncryptableInputStream for the getFileEncryptionInfo which 
DFSInputStream and WebHdfsInputStream implements. Pass an encryptable stream 
and it returns a wrapped stream if necessary.
{quote}
Addressed in latest patch.
But I am thinking of an alternative.
Instead of creating {{EncryptableInputStream}} and {{EncryptableOutputStream}}, 
how about creating {{EncryptableStream}} and letting {{WebHdfsFileSystem}} and 
{{DistributedFileSystem}} implement it?
Just an idea. Let me know if you see pros and cons in that approach.
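
A minimal hedged sketch of that alternative (only the interface name comes from 
this comment; its shape is an assumption):

{code}
// Hedged sketch: anything that can report encryption info implements this,
// and one generic helper wraps the raw stream in a CryptoInputStream when
// getFileEncryptionInfo() returns non-null.
public interface EncryptableStream {
  FileEncryptionInfo getFileEncryptionInfo() throws IOException;
}
{code}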

{quote}
I'm not thrilled with stream construction always calling file info but I 
understand the stream is lazily opened which creates a chicken and egg problem 
for determining whether to return a crypto stream. 
{quote}
Exactly; the client connects to the namenode only when {{InputStream#read}} is 
called, but by then it is too late to decide whether to return a crypto stream.

{quote}
Double check that failing in the ReadRunner ctor doesn't cause any retry loop 
issues or partial stream leakage. I'll scrutinize too.
{quote}
I added a try-catch block in {{WebHdfsFileSystem#open}} to close the stream in 
case of any exception.
Please let me know if I missed any case.
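
A hedged sketch of that guard (constructor arguments and the wrapper helper are 
illustrative; {{IOUtils.closeStream}} is the standard Hadoop utility):

{code}
// Hedged sketch: if anything fails after the stream is constructed, close it
// before rethrowing so the partially opened stream is not leaked.
WebHdfsInputStream in = new WebHdfsInputStream(path, bufferSize);
try {
  return wrapIfEncrypted(in);   // hypothetical crypto-wrapping helper
} catch (IOException | RuntimeException e) {
  IOUtils.closeStream(in);
  throw e;
}
{code}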


{quote}
I think using the cached file status at open in 
ReadRunner#initializeInputStream subtly changes semantics. 
{quote}
I retained the old behaviour.

{quote}
Why the change to MiniDFSCluster?
{quote}
Since {{NamenodeWebHdfsMethods#serverDefaultsResponse}} is static, 
{{MiniDfsCluster#restartNamenode}} caches the old value of the key provider 
address.

Also note that patch #002 is built on top of HDFS-12907.
Once that gets reviewed and resolved, I will create a new patch with one more 
added test case.


was (Author: shahrs87):
Attaching a patch for jenkins to run and point out silly mistakes/checkstyles 
issues.

{quote}
This is bad: CryptoProtocolVersion.values(). The values method always allocates 
a new garbage array for every invocation. I forget where else I made a change 
to have a static array assignment of the values and created a static valueOf to 
return the item from the static array. I can't find it, looks like it might 
have been undone... Note that protobufs actually do this.
{quote}
Addressed in v2 of patch.

{quote}
WebHdfsFileSystem#open contains a copy-n-paste of the same code in 
DFSClient#createWrappedInputStream. CryptoInputStream can work with any general 
stream so let's make a general wrapping method. Maybe create an interface 
something like EncryptableInputStream for the getFileEncryptionInfo which 
DFSInputStream and WebHdfsInputStream implements. Pass an encryptable stream 
and it returns a wrapped stream if necessary.
{quote}
Addressed in latest patch.
But I am thinking of alternative.
Instead of creating {{EncryptableInputStream}} and {{EncryptableOutputStream}}, 
how about creating {{EncryptableStream}} and let {{WebHdfsFileSystem}} and 
{{DistributedFileSystem}} implement it.
Just an idea. Let me know if you have pros and cons in that approach.

{quote}
I'm not thrilled with stream construction always calling file info but I 
understand the stream is lazily opened which creates a chicken and egg problem 
for determining whether to return a crypto stream. 
{quote}
Exactly client connects to namenode when {{InputStream#read}} is being called. 
But by then it is too late to determine.

{quote}
Double check that failing in the ReadRunner ctor doesn't cause any retry loop 
issues or partial stream leakage. I'll scrutinize too.
{quote}
I added try catch block in {{WebHdfsFileSystem#open}} to close the stream in 
case of any Exception.
Please let me know if I missed any case.


{quote}
I think using the cached file status at open in 
ReadRunner#initializeInputStream subtly changes semantics. 
{quote}
I retained the old behaviour.

{quote}
Why the change to MiniDFSCluster?
{quote}
Since {{NamenodeWebHdfsMethods#serverDefaultsResponse}} is static, so in{{ 
MiniDfsCluster#restartNamenode}} it cache

[jira] [Commented] (HDFS-12825) Fsck report shows config key name for min replication issues

2017-12-08 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284172#comment-16284172
 ] 

Manoj Govindassamy commented on HDFS-12825:
---

Thanks for the contribution [~gabor.bota]. Committed to trunk.

> Fsck report shows config key name for min replication issues
> 
>
> Key: HDFS-12825
> URL: https://issues.apache.org/jira/browse/HDFS-12825
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0
>
> Attachments: HDFS-12825.001.patch, error.JPG
>
>
> Scenario:
> Corrupt the block in any datanode.
> Take the *FSCK* report for that file.
> Actual Output:
> ==
> The fsck report prints the raw configuration key:
> {{dfs.namenode.replication.min}}
> Expected Output:
> 
> it should be {{MINIMAL BLOCK REPLICATION}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12825) Fsck report shows config key name for min replication issues

2017-12-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284141#comment-16284141
 ] 

Hudson commented on HDFS-12825:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13349 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13349/])
HDFS-12825. Fsck report shows config key name for min replication issues 
(manojpec: rev ef7d334d364378070880e647eaf8bac2f12561ee)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> Fsck report shows config key name for min replication issues
> 
>
> Key: HDFS-12825
> URL: https://issues.apache.org/jira/browse/HDFS-12825
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0
>
> Attachments: HDFS-12825.001.patch, error.JPG
>
>
> Scenario:
> Corrupt the block in any datanode.
> Take the *FSCK* report for that file.
> Actual Output:
> ==
> The fsck report prints the raw configuration key:
> {{dfs.namenode.replication.min}}
> Expected Output:
> 
> it should be {{MINIMAL BLOCK REPLICATION}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12825) Fsck report shows config key name for min replication issues

2017-12-08 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12825:
--
   Resolution: Fixed
 Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

> Fsck report shows config key name for min replication issues
> 
>
> Key: HDFS-12825
> URL: https://issues.apache.org/jira/browse/HDFS-12825
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0
>
> Attachments: HDFS-12825.001.patch, error.JPG
>
>
> Scenario:
> Corrupt the block in any datanode.
> Take the *FSCK* report for that file.
> Actual Output:
> ==
> The fsck report prints the raw configuration key:
> {{dfs.namenode.replication.min}}
> Expected Output:
> 
> it should be {{MINIMAL BLOCK REPLICATION}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12882) Support full open(PathHandle) contract in HDFS

2017-12-08 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12882:
-
Attachment: HDFS-12882.05.patch

Thanks for the review, [~elgoiri]. I added more comments on the use of block 
tokens, both in the javadoc around the client and in the reasoning behind the 
audit log switch based on {{needBlockToken}}. This approach does seem cleaner 
than extending {{getFileInfo}}.

The updated patch also fixes some checkstyle warnings and removes some 
unnecessary formatting changes. The test failures in v04 were not related.

> Support full open(PathHandle) contract in HDFS
> --
>
> Key: HDFS-12882
> URL: https://issues.apache.org/jira/browse/HDFS-12882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HDFS-12882.00.patch, HDFS-12882.00.salient.txt, 
> HDFS-12882.01.patch, HDFS-12882.02.patch, HDFS-12882.03.patch, 
> HDFS-12882.04.patch, HDFS-12882.05.patch
>
>
> HDFS-7878 added support for {{open(PathHandle)}}, but it only partially 
> implemented the semantics specified in the contract (i.e., open-by-inodeID). 
> HDFS should implement all permutations of the default options for 
> {{PathHandle}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12875:
---
Attachment: HDFS-12875.007.patch

> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12875.001.patch, HDFS-12875.002.patch, 
> HDFS-12875.003.patch, HDFS-12875.004.patch, HDFS-12875.005.patch, 
> HDFS-12875.006.patch, HDFS-12875.007.patch
>
>
> The dfsrouteradmin has an option for readonly mount points but this is not 
> implemented. We should add a special mount point which allows reading but 
> not writing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-08 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284078#comment-16284078
 ] 

Xiaoyu Yao commented on HDFS-12000:
---

Thanks [~vagarychen] for the update. The patch looks good to me overall. Here 
are a few comments:

ChunkGroupInputStream.java

Line 172: Can you briefly describe the plan to handle multiple versions of the 
blocks later with the following API:
{{keyInfo.getLatestVersionLocations().getBlocksLatestVersionOnly()}}. 
I think it will work when we only have one version, i.e., the latest version. 
Correct me if I'm wrong: say we have K1 (B1V1, B2V2); with 
getBlocksLatestVersionOnly, are we going to get B2V2 instead of (B1V1, B2V2)?

Line 173: NIT: the comment is not valid any more

KeySpaceManagerProtocol.proto

Line 268: should this openVersion be part of the request, so that the client can 
request opening a certain version? It is OK to assume opening the latest version 
for now. Maybe add a TODO for a follow-up JIRA on this feature.

KsmKeyLocationInfo.java
Line 111: Is there a reason why the KsmKeyLocationInfo#Builder does not support 
setCreateVersion? Do we expect it to be set directly on the KsmKeyLocationInfo 
afterwards?

KsmKeyInfo.java
Line 22: NIT: unused imports
Line 58: if the version starts from 0, the special handling for 
currentVersion==-1 is not needed. Can you confirm?
We could wrap this check into a util method for reuse.

OpenKeySession.java
Line 30: can the open version be committed without close, something like hsync, 
to persist the write without closing the file?
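
On the util-method suggestion for the Line 58 check above, a minimal hedged 
sketch (the helper name is hypothetical):

{code}
// Hedged sketch: with versions starting from 0, one helper centralizes the
// "is there any committed version yet" check instead of special-casing
// currentVersion == -1 inline.
private static boolean hasCommittedVersion(long currentVersion) {
  return currentVersion >= 0;
}
{code}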



> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> OzoneVersion.001.pdf
>
>
> The REST interface of Ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-08 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284051#comment-16284051
 ] 

Rushabh S Shah commented on HDFS-12574:
---

Attaching a patch for jenkins to run and point out silly mistakes/checkstyle 
issues.

{quote}
This is bad: CryptoProtocolVersion.values(). The values method always allocates 
a new garbage array for every invocation. I forget where else I made a change 
to have a static array assignment of the values and created a static valueOf to 
return the item from the static array. I can't find it, looks like it might 
have been undone... Note that protobufs actually do this.
{quote}
Addressed in v2 of patch.

{quote}
WebHdfsFileSystem#open contains a copy-n-paste of the same code in 
DFSClient#createWrappedInputStream. CryptoInputStream can work with any general 
stream so let's make a general wrapping method. Maybe create an interface 
something like EncryptableInputStream for the getFileEncryptionInfo which 
DFSInputStream and WebHdfsInputStream implements. Pass an encryptable stream 
and it returns a wrapped stream if necessary.
{quote}
Addressed in latest patch.
But I am thinking of an alternative.
Instead of creating {{EncryptableInputStream}} and {{EncryptableOutputStream}}, 
how about creating {{EncryptableStream}} and letting {{WebHdfsFileSystem}} and 
{{DistributedFileSystem}} implement it?
Just an idea. Let me know if you see pros and cons in that approach.

{quote}
I'm not thrilled with stream construction always calling file info but I 
understand the stream is lazily opened which creates a chicken and egg problem 
for determining whether to return a crypto stream. 
{quote}
Exactly; the client connects to the namenode only when {{InputStream#read}} is 
called, but by then it is too late to decide whether to return a crypto stream.

{quote}
Double check that failing in the ReadRunner ctor doesn't cause any retry loop 
issues or partial stream leakage. I'll scrutinize too.
{quote}
I added a try-catch block in {{WebHdfsFileSystem#open}} to close the stream in 
case of any exception.
Please let me know if I missed any case.


{quote}
I think using the cached file status at open in 
ReadRunner#initializeInputStream subtly changes semantics. 
{quote}
I retained the old behaviour.

{quote}
Why the change to MiniDFSCluster?
{quote}
Since {{NamenodeWebHdfsMethods#serverDefaultsResponse}} is static, 
{{MiniDfsCluster#restartNamenode}} caches the old value of the key provider 
address.

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12813) RequestHedgingProxyProvider can hide Exception thrown from the Namenode for proxy size of 1

2017-12-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12813:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> RequestHedgingProxyProvider can hide Exception thrown from the Namenode for 
> proxy size of 1
> ---
>
> Key: HDFS-12813
> URL: https://issues.apache.org/jira/browse/HDFS-12813
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 3.1.0, 2.10.0
>
> Attachments: HDFS-12813.001.patch, HDFS-12813.002.patch, 
> HDFS-12813.003.patch, HDFS-12813.004.patch
>
>
> HDFS-11395 fixed the problem where the MultiException thrown by 
> RequestHedgingProxyProvider was hidden. However, when the target proxy size is 
> 1, unwrapping is not done for the InvocationTargetException. For a target proxy 
> size of 1, the unwrapping should be done to the first level, whereas for 
> multiple proxies it should be done at two levels.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12881) Output streams closed with IOUtils suppressing write errors

2017-12-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284047#comment-16284047
 ] 

Jason Lowe commented on HDFS-12881:
---

Thanks for the patch!

The patch updates the handling of input streams, but this bug only applies to 
output streams.  For an input stream, once the code has read the data it needs, 
we're not interested in any errors that happen on close().  We've already read 
what we need from the stream, so anything else that happens to it after that 
point isn't very interesting, and we don't want to fail the operation if 
something does happen to that stream.  However, for output streams we need the 
close() to complete successfully, otherwise data previously written could be 
lost (e.g., due to buffering).
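
A hedged sketch of the contrast (stream construction elided; 
{{IOUtils.cleanupWithLogger}} is the utility from the issue description):

{code}
// Input stream: after a successful read, errors on close() are uninteresting,
// so suppressing them is fine.
try {
  in.readFully(buf);
} finally {
  IOUtils.cleanupWithLogger(LOG, in);
}

// Output stream: close() may flush buffered bytes; a suppressed IOException
// here can silently drop data, so let try-with-resources propagate it.
try (OutputStream out = fs.create(path)) {
  out.write(buf);
}
{code}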

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch
>
>
> There are a few places in HDFS code that are closing an output stream with 
> IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12814) Add blockId when warning slow mirror/disk in BlockReceiver

2017-12-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12814:
---
Fix Version/s: (was: 3.0.0)
   3.0.1
   3.1.0

> Add blockId when warning slow mirror/disk in BlockReceiver
> --
>
> Key: HDFS-12814
> URL: https://issues.apache.org/jira/browse/HDFS-12814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Trivial
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12814.001.patch, HDFS-12814.002.patch
>
>
> HDFS-11603 added downstream DataNodeIds and the volume path.
> To make debugging easier, those warning logs should include the blockId.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12890) Ozone: XceiverClient should have upper bound on async requests

2017-12-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284031#comment-16284031
 ] 

genericqa commented on HDFS-12890:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/ha

[jira] [Commented] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284025#comment-16284025
 ] 

genericqa commented on HDFS-12875:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
30s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12875 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901281/HDFS-12875.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dd3e88803121 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f196383 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22330/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22330/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22330/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https:/

[jira] [Updated] (HDFS-12908) Ozone: write chunk call fails because of Metrics registry exception

2017-12-08 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12908:
-
Attachment: HDFS-12908-HDFS-7240.001.patch

[~anu], this error message can be seen, in a different form, in 
{{TestContainerPersistence#testDeleteContainer}}.

The problem in this bug is that the JMX registration uses just the last 
component of the db file name. I have attached a patch where the entire 
absolute path is used for the JMX registration.
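
A hedged sketch of the change (the property plumbing and variable names are 
assumed from the stack trace below, not taken from the actual patch):

{code}
// Hedged sketch: key the MBean on the db file's absolute path instead of its
// bare name, so two stores both backed by a file named "container.db" no
// longer collide in the metrics registry.
jmxProperties.put("dbName", dbFile.getAbsolutePath()); // was: dbFile.getName()
mbeanName = MBeans.register("Ozone", "RocksDbStore", jmxProperties, statMBean);
{code}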

> Ozone: write chunk call fails because of Metrics registry exception
> ---
>
> Key: HDFS-12908
> URL: https://issues.apache.org/jira/browse/HDFS-12908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12908-HDFS-7240.001.patch
>
>
> The write chunk call fails because of a metrics registration exception.
> {code}
> 2017-12-08 04:02:19,894 WARN org.apache.hadoop.metrics2.util.MBeans: Error 
> creating MBean object name: 
> Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db
> org.apache.hadoop.metrics2.MetricsException: 
> org.apache.hadoop.metrics2.MetricsException: 
> Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:135)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newMBeanName(DefaultMetricsSystem.java:110)
> at 
> org.apache.hadoop.metrics2.util.MBeans.getMBeanName(MBeans.java:155)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:87)
> at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:77)
> at 
> org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:115)
> at 
> org.apache.hadoop.ozone.container.common.utils.ContainerCache.getDB(ContainerCache.java:138)
> at 
> org.apache.hadoop.ozone.container.common.helpers.KeyUtils.getDB(KeyUtils.java:65)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.readContainerInfo(ContainerManagerImpl.java:261)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:330)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handleCreateContainer(Dispatcher.java:399)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.containerProcessHandler(Dispatcher.java:158)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:105)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:61)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:32)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1302)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.i

[jira] [Commented] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284011#comment-16284011
 ] 

Virajith Jalaparti commented on HDFS-12893:
---

Will check once the latest jenkins run comes back. Thanks for reviewing, 
[~elgoiri].

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12893:
--
Status: Patch Available  (was: Open)

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284007#comment-16284007
 ] 

Íñigo Goiri commented on HDFS-12893:


Adding a new DN descriptor is probably overdoing it.

The unit tests seem unrelated; can you double-check?
+1 on [^HDFS-12893-HDFS-9806.004.patch].

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283994#comment-16283994
 ] 

Virajith Jalaparti commented on HDFS-12893:
---

Posting a rebased patch.

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12893:
--
Status: Open  (was: Patch Available)

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12893:
--
Attachment: HDFS-12893-HDFS-9806.004.patch

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch, 
> HDFS-12893-HDFS-9806.004.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12893) [READ] Support replication of Provided blocks with non-default topologies.

2017-12-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283984#comment-16283984
 ] 

Virajith Jalaparti commented on HDFS-12893:
---

bq. You are pretty much just copying the storage. 

Yes, that was the intention -- to ensure that going forward, this is treated as a 
regular {{DatanodeStorageInfo}} which has only one {{DatanodeDescriptor}} 
associated with it.

I don't think creating a new subtype is required just to replace this. We 
already have the {{ProvidedDatanodeStorageInfo}} that is a subclass of 
{{DatanodeStorageInfo}} but only one instance of that exists and it is shared 
by all {{DatanodeDescriptor}}s that have PROVIDED storage.
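
To make the "copying" concrete, a rough sketch of the idea (illustrative only; 
the real constructors in {{blockmanagement}} are package-private, and the names 
here are hypothetical):

{code:java}
// Replace the shared provided storage with a node-local DatanodeStorageInfo,
// so downstream code sees exactly one DatanodeDescriptor per storage.
DatanodeStorageInfo localizeProvidedStorage(DatanodeDescriptor dn,
    DatanodeStorageInfo provided) {
  DatanodeStorage copy = new DatanodeStorage(provided.getStorageID(),
      DatanodeStorage.State.NORMAL, StorageType.PROVIDED);
  return new DatanodeStorageInfo(dn, copy);
}
{code}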

> [READ] Support replication of Provided blocks with non-default topologies.
> --
>
> Key: HDFS-12893
> URL: https://issues.apache.org/jira/browse/HDFS-12893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12893-HDFS-9806.001.patch, 
> HDFS-12893-HDFS-9806.002.patch, HDFS-12893-HDFS-9806.003.patch
>
>
> {{chooseSourceDatanodes}} returns the {{ProvidedDatanodeDescriptor}} as the 
> source of Provided blocks. As this isn't a physical datanode and doesn't 
> exist in the topology, {{ReplicationWork.chooseTargets}} might fail depending on 
> the chosen {{BlockPlacementPolicy}} implementation. This JIRA aims to fix 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12905) [READ] Handle decommissioning and under-maintenance Datanodes with Provided storage.

2017-12-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12905:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] Handle decommissioning and under-maintenance Datanodes with Provided 
> storage.
> 
>
> Key: HDFS-12905
> URL: https://issues.apache.org/jira/browse/HDFS-12905
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12905-HDFS-9806.001.patch, 
> HDFS-12905-HDFS-9806.002.patch
>
>
> {{ProvidedStorageMap}} doesn't keep track of the state of the datanodes with 
> Provided storage. As a result, it can return nodes that are being 
> decommissioned or under-maintenance even when live datanodes exist. This JIRA 
> is to prefer live datanodes to datanodes in other states.
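
A minimal sketch of the intended preference (isLive() here is a stand-in for 
the real checks; maintenance states are omitted for brevity):

{code:java}
import java.util.List;
import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;

class PreferLiveNodes {
  // Stand-in for the real liveness check.
  static boolean isLive(DatanodeDescriptor dn) {
    return !dn.isDecommissionInProgress() && !dn.isDecommissioned();
  }

  static DatanodeDescriptor choose(List<DatanodeDescriptor> dns) {
    for (DatanodeDescriptor dn : dns) {
      if (isLive(dn)) {
        return dn;                               // prefer a live node
      }
    }
    return dns.isEmpty() ? null : dns.get(0);    // fall back to any node
  }
}
{code}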



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12905) [READ] Handle decommissioning and under-maintenance Datanodes with Provided storage.

2017-12-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283947#comment-16283947
 ] 

Virajith Jalaparti commented on HDFS-12905:
---

Thanks for taking a look [~elgoiri]. Will commit patch v2 after fixing the 
checkstyle issues.

> [READ] Handle decommissioning and under-maintenance Datanodes with Provided 
> storage.
> 
>
> Key: HDFS-12905
> URL: https://issues.apache.org/jira/browse/HDFS-12905
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12905-HDFS-9806.001.patch, 
> HDFS-12905-HDFS-9806.002.patch
>
>
> {{ProvidedStorageMap}} doesn't keep track of the state of the datanodes with 
> Provided storage. As a result, it can return nodes that are being 
> decommissioned or under-maintenance even when live datanodes exist. This JIRA 
> is to prefer live datanodes to datanodes in other states.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-08 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12875:
---
Attachment: HDFS-12875.006.patch

> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12875.001.patch, HDFS-12875.002.patch, 
> HDFS-12875.003.patch, HDFS-12875.004.patch, HDFS-12875.005.patch, 
> HDFS-12875.006.patch
>
>
> The dfsrouteradmin has an option for readonly mount points but this is not 
> implemented. We should add a special mount point which allows reading but 
> not writing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-08 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12875:
---
Attachment: HDFS-12875.006.patch

> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12875.001.patch, HDFS-12875.002.patch, 
> HDFS-12875.003.patch, HDFS-12875.004.patch, HDFS-12875.005.patch
>
>
> The dfsrouteradmin has an option for readonly mount points but this is not 
> implemented. We should add a special mount point which allows reading but 
> not writing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-12-08 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12875:
---
Attachment: (was: HDFS-12875.006.patch)

> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12875.001.patch, HDFS-12875.002.patch, 
> HDFS-12875.003.patch, HDFS-12875.004.patch, HDFS-12875.005.patch
>
>
> The dfsrouteradmin has an option for readonly mount points but this is not 
> implemented. We should add a special mount point which allows reading but 
> not writing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-08 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12574:
--
Status: Patch Available  (was: Open)

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-08 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12574:
--
Attachment: HDFS-12574.002.patch

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-08 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283890#comment-16283890
 ] 

Stephen O'Donnell commented on HDFS-12910:
--

After the patch, if port 1004 is in use we get this:

{code}
Failed to bind to 0.0.0.0:1004
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:106)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Cannot load daemon
Service exit with a return value of 3
{code}

And if port 1006 is in use:

{code}
Initializing secure datanode resources
Opened streaming server at /0.0.0.0:1004
Failed to bind to 0.0.0.0:1006
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:136)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Cannot load daemon
Service exit with a return value of 3
{code}

The only difference is that the address that failed to bind is now logged along 
with the exception stack trace.

I cannot think of a good way to add a unit test for this, so I tested it 
manually; I am open to suggestions.
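
The change itself is small; a minimal sketch of the pattern (the helper name is 
illustrative, not the actual patch code):

{code:java}
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BindDiagnostics {
  // Bind a server channel; on failure, name the address before re-throwing so
  // the operator can tell which of the secure DN ports (1004/1006) is in use.
  static ServerSocketChannel bindOrReport(InetSocketAddress addr)
      throws IOException {
    ServerSocketChannel channel = ServerSocketChannel.open();
    try {
      channel.socket().bind(addr);
    } catch (BindException e) {
      System.err.println("Failed to bind to " + addr);
      throw e;
    }
    return channel;
  }
}
{code}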

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports, causing the DN to fail 
> to start (e.g. the NFS service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened

[jira] [Updated] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-08 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-12910:
-
Attachment: HDFS-12910.001.patch

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports, causing the DN to fail 
> to start (e.g. the NFS service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException exception and log out the problem 
> address:port and then re-throw the exception to make the problem more clear.
> I will upload a patch for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12909) SSLConnectionConfigurator creation error should be printed only if security is enabled

2017-12-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283880#comment-16283880
 ] 

Wei-Chiu Chuang commented on HDFS-12909:


UserGroupInformation.isSecurityEnabled determines whether or not a user is 
Kerberos authenticated. It does not necessarily mean SSL is supposed to be 
enabled.

> SSLConnectionConfigurator creation error should be printed only if security 
> is enabled
> --
>
> Key: HDFS-12909
> URL: https://issues.apache.org/jira/browse/HDFS-12909
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
> Attachments: HDFS-12909.patch
>
>
> Currently URLConnectionFactory#getSSLConnectionConfiguration attempts to 
> create an SSL connection configurator even if security is not enabled. This 
> raises the below false warning in the logs.
> {code:java}
> 17/12/08 10:12:03 WARN web.URLConnectionFactory: Cannot load customized ssl 
> related configuration. Fallback to system-generic settings.
> java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
> or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:169)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:87)
>   at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:176)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:164)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:106)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:85)
>   at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:136)
>   at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:128)
>   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:396)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283859#comment-16283859
 ] 

Xiao Chen commented on HDFS-12907:
--

bq. KSMCP
FWIW I also dreamt of the KMS using Hadoop RPC instead of RESTful APIs over 
Tomcat, back when I was dealing with KMSCP auth issues and perf issues. Though 
that has its own pros and cons for sure...

At least fixes like HADOOP-12787 should work after your webhdfs jiras.

FWIW, I think the initial KMSCP design _was_ for it to work that way when they 
were created, but it didn't hold up in some cases once the client cache, proxy, 
and token handling were in the mix. HADOOP-13749 is where this was changed. 

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.
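
To make the proposed relaxation concrete, a minimal sketch of the gating 
(hypothetical names; the real check would live in the namenode's path handling):

{code:java}
import org.apache.hadoop.security.AccessControlException;

public class ReservedRawAccess {
  // Writes under /.reserved/raw stay superuser-only; read-only operations
  // fall through to the normal permission checks for any user.
  static void checkReservedRawAccess(boolean isWrite, boolean isSuperUser)
      throws AccessControlException {
    if (isWrite && !isSuperUser) {
      throw new AccessControlException(
          "Only superusers may modify /.reserved/raw paths");
    }
  }
}
{code}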



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-08 Thread Stephen O'Donnell (JIRA)
Stephen O'Donnell created HDFS-12910:


 Summary: Secure Datanode Starter should log the port when it 
 Key: HDFS-12910
 URL: https://issues.apache.org/jira/browse/HDFS-12910
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.1.0
Reporter: Stephen O'Donnell
Priority: Minor


When running a secure data node, the default ports it uses are 1004 and 1006. 
Sometimes other OS services can start on these ports, causing the DN to fail to 
start (e.g. the NFS service can use random ports under 1024).

When this happens an error is logged by jsvc, but it is confusing as it does 
not tell you which port it is having issues binding to, for example, when port 
1004 is used by another process:

{code}
Initializing secure datanode resources
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Cannot load daemon
Service exit with a return value of 3
{code}

And when port 1006 is used:

{code}
Opened streaming server at /0.0.0.0:1004
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Cannot load daemon
Service exit with a return value of 3
{code}

We should catch the BindException exception and log out the problem 
address:port and then re-throw the exception to make the problem more clear.

I will upload a patch for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-12-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283795#comment-16283795
 ] 

Daryn Sharp commented on HDFS-10285:


*Philosophical aside – please refrain from debating*
My rhetorical question was only illustrating why end-user convenience or 
support overhead isn't a valid argument for an internal service.  Don't want to 
get mired here but I'll quickly touch on the philosophical questions regarding 
current services.  For me, the most basic consideration for an internal service 
is: can hdfs effectively function w/o it?
* The repl monitor is unquestionably critical for durability.  Not running for 
a day or two would be disastrous.
* EZ already relies on an external KMS service.  The "internal service" caches 
EDEKs.  
* Balancer is non-critical.  Hdfs functions w/o it albeit with degraded 
performance on a full cluster.

By this standard: SPS is non-critical.  Admittedly the HSM feature isn't very 
useful w/o it but hdfs can function.  I don't want to specifically 
belabor/debate this point, but rather evaluate the feasibility of designs that 
should make everyone happy.

*Strictly external*
Let's discuss how/why it's difficult to separate.  Specifically, what new 
interfaces will be required?  {{listLocatedStatus}} probably returns most 
necessary data.  It's not the cheapest call but I have patches in the wing to 
heavily optimize it.  Now it feels like a different form of balancing.  What am 
I missing?

Or we can forego that evaluation and discuss:

*Hybrid design*
Let's consider how a hybrid approach might work: NN handles basic HSM duties to 
satisfy the feature, with an adjunct service that provides an optional and 
non-critical coordinator that can independently evolve and become more 
sophisticated.

Conceptually, blocks violating the HSM policy are a form of mis-replication 
that doesn't satisfy a placement policy – which would truly prevent performance 
issues if the feature isn't needed.  The NN's repl monitor ignorantly handles 
the moves as a low priority transfer (if/since it's sufficiently replicated).  
The changes to the NN are minimalistic.

DNs need to support/honor storages in transfer requests.  Transfers to itself 
become moves.  Now HSM "just works", eventually, similar to increasing the repl 
factor.

An external SPS can provide fancier policies for accelerating the processing 
for those users like hbase.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policy. These 
> policies can be set on directory/file to specify the user preference, where 
> to store the physical block. When the user sets the storage policy before 
> writing data, the blocks can take advantage of the storage policy preferences 
> and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, 
> then the blocks would have been written with the default storage policy 
> (nothing but DISK). The user has to run the ‘Mover tool’ explicitly by 
> specifying all such file names as a list. In some distributed system 
> scenarios (ex: HBase) it would be difficult to collect all the files and run 
> the tool, as different nodes can write files separately and files can have 
> different paths.
> Another scenario is when the user renames a file from a directory with an 
> effective storage policy (inherited from the parent directory) into a 
> directory with a different storage policy: the rename will not copy the 
> inherited storage policy from the source, so the policy of the destination 
> file/dir's parent takes effect. This rename operation is just a metadata 
> change in the Namenode. The physical blocks still remain with the source 
> storage policy.
> So, tracking all such business-logic-based file names could be difficult for 
> admins on distributed nodes (ex: region servers), as is running the Mover tool. 
> Here the proposal is to provide an API from the Namenode itself to trigger 
> storage policy satisfaction. A daemon thread inside the Namenode should track 
> such calls and send movement commands to the DNs. 
> Will post the detailed design though

[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-08 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283777#comment-16283777
 ] 

Rushabh S Shah commented on HDFS-12574:
---

The current patch (#002) is very different in approach from the #001 patch.

Current limitations we took advantage of:
* {{WebHdfsFileSystem#getHdfsFileStatus(Path p)}} does not return 
{{FileEncryptionInfo}} in the response.

Constraints:
* When the client issues an {{open}} call and the file is in an Encryption Zone 
(EZ), we need to wrap a {{CryptoInputStream}} inside the {{FSDataInputStream}}. 
Since the connection to the namenode is lazy, i.e. the client connects to the 
namenode only when a read is issued, there is no way to know beforehand that 
the file is encrypted.
* We need to think about compatibility between client and namenode. Below is a 
matrix of all 4 possibilities.


Below is the high level design.
* In the {{WebHdfsFileSystem#open}} call, the client issues a getHdfsFileStatus 
call to the namenode. An upgraded namenode will return {{FileEncryptionInfo}} 
(feInfo) in the {{HdfsFileStatus}} object.
* If feInfo is not null, the client will wrap a {{CryptoInputStream}} within 
the {{FSDataInputStream}}.

* When the client calls read() on the returned stream, it adds an extra header 
{{supportsEZ}} which tells the namenode that the client is capable of 
decrypting the file.
* The namenode will return the {{/.reserved/raw}} path (/.reserved/raw + path) 
in the {{Location}} header only if it detects that the {{supportsEZ}} flag is 
true.
* This way the datanode will not try to decrypt the file, since it is a 
/.reserved/raw path.

||Client||Namenode||Notes||
|Old|Old|Nothing changes. DN will decrypt.|
|Old|New|Namenode will not return the /.reserved/raw path and DN will decrypt.|
|New|Old|Namenode will still not return /.reserved/raw and will not return feInfo, so DN will decrypt.|
|New|New|Namenode will return feInfo and the /.reserved/raw path, so the client will decrypt and the datanode will not.|

Thanks a lot [~daryn] for helping to come up with this clean design.
I worked on approx. 4-5 approaches before finalizing this one.
Cc [~xiaochen], [~andrew.wang] for comments.
I have a patch ready with this approach. I need to add a couple of tests and 
will upload it soon.
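
For illustration, a rough sketch of the client-side flow described above (the 
abstract helpers stand in for {{WebHdfsFileSystem}} internals and are 
hypothetical, not the actual patch code):

{code:java}
import java.io.IOException;
import org.apache.hadoop.crypto.CryptoInputStream;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSInputStream;
import org.apache.hadoop.fs.FileEncryptionInfo;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;

abstract class EzAwareOpen {
  // Hypothetical helpers standing in for WebHdfsFileSystem internals.
  abstract HdfsFileStatus getHdfsFileStatus(Path p) throws IOException;
  abstract FSInputStream openRawStream(Path p) throws IOException; // adds supportsEZ
  abstract CryptoInputStream createCryptoStream(FSInputStream in,
      FileEncryptionInfo feInfo) throws IOException;

  FSDataInputStream open(Path p) throws IOException {
    HdfsFileStatus status = getHdfsFileStatus(p); // a new NN includes feInfo
    FileEncryptionInfo feInfo = status.getFileEncryptionInfo();
    FSInputStream raw = openRawStream(p);
    if (feInfo == null) {
      return new FSDataInputStream(raw);          // not in an EZ, or an old NN
    }
    // In an EZ the client decrypts; the DN serves raw bytes via /.reserved/raw.
    return new FSDataInputStream(createCryptoStream(raw, feInfo));
  }
}
{code}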

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12574.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12909) SSLConnectionConfigurator creation error should be printed only if security is enabled

2017-12-08 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-12909:
---
Summary: SSLConnectionConfigurator creation error should be printed only if 
security is enabled  (was: SSLConnectionConfigurator should be created only if 
security is enabled)

> SSLConnectionConfigurator creation error should be printed only if security 
> is enabled
> --
>
> Key: HDFS-12909
> URL: https://issues.apache.org/jira/browse/HDFS-12909
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
> Attachments: HDFS-12909.patch
>
>
> Currently URLConnectionFactory#getSSLConnectionConfiguration attempts to 
> create an SSL connection configurator even if security is not enabled. This 
> raises the below false warning in the logs.
> {code:java}
> 17/12/08 10:12:03 WARN web.URLConnectionFactory: Cannot load customized ssl 
> related configuration. Fallback to system-generic settings.
> java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
> or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:169)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:87)
>   at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:176)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:164)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:106)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:85)
>   at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:136)
>   at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:128)
>   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:396)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12908) Ozone: write chunk call fails because of Metrics registry exception

2017-12-08 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283723#comment-16283723
 ] 

Anu Engineer commented on HDFS-12908:
-

[~msingh] Thanks for filing this, could you please share how to reproduce this? 
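
For what it's worth, the "already exists" part can be reproduced in isolation, 
since the stack trace below shows {{DefaultMetricsSystem.newMBeanName}} 
rejecting a duplicate name; a minimal sketch (the name is taken from the log):

{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public class DuplicateMBeanName {
  public static void main(String[] args) {
    String name = "RocksDbStore,dbName=container.db";
    DefaultMetricsSystem.newMBeanName(name);
    // Unless mini-cluster mode is enabled, the second call throws
    // MetricsException: "... already exists!", as in the trace below.
    DefaultMetricsSystem.newMBeanName(name);
  }
}
{code}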

> Ozone: write chunk call fails because of Metrics registry exception
> ---
>
> Key: HDFS-12908
> URL: https://issues.apache.org/jira/browse/HDFS-12908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
>
> Write chunk calls fail because of a metrics registration exception.
> {code}
> 2017-12-08 04:02:19,894 WARN org.apache.hadoop.metrics2.util.MBeans: Error 
> creating MBean object name: 
> Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db
> org.apache.hadoop.metrics2.MetricsException: 
> org.apache.hadoop.metrics2.MetricsException: 
> Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:135)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newMBeanName(DefaultMetricsSystem.java:110)
> at 
> org.apache.hadoop.metrics2.util.MBeans.getMBeanName(MBeans.java:155)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:87)
> at org.apache.hadoop.utils.RocksDBStore.<init>(RocksDBStore.java:77)
> at 
> org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:115)
> at 
> org.apache.hadoop.ozone.container.common.utils.ContainerCache.getDB(ContainerCache.java:138)
> at 
> org.apache.hadoop.ozone.container.common.helpers.KeyUtils.getDB(KeyUtils.java:65)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.readContainerInfo(ContainerManagerImpl.java:261)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:330)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handleCreateContainer(Dispatcher.java:399)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.containerProcessHandler(Dispatcher.java:158)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:105)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:61)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:32)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1302)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at 
> io.netty.

[jira] [Updated] (HDFS-12909) SSLConnectionConfigurator should be created only if security is enabled

2017-12-08 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-12909:
---
Attachment: HDFS-12909.patch

With the v1 patch, if UserGroupInformation.isSecurityEnabled() is true, the 
warning is logged when creating the SSLConnectionConfigurator fails; otherwise 
it is not printed.
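
Roughly, the change amounts to something like the following inside 
{{URLConnectionFactory#getSSLConnectionConfiguration}} (a sketch, not the 
actual patch; it assumes the existing LOG and newSslConnConfigurator):

{code:java}
ConnectionConfigurator conn = null;
try {
  conn = newSslConnConfigurator(DEFAULT_SOCKET_TIMEOUT, conf);
} catch (Exception e) {
  if (UserGroupInformation.isSecurityEnabled()) {
    LOG.warn("Cannot load customized ssl related configuration. "
        + "Fallback to system-generic settings.", e);
  } else {
    LOG.debug("Cannot load customized ssl related configuration.", e);
  }
}
{code}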

> SSLConnectionConfigurator should be created only if security is enabled
> ---
>
> Key: HDFS-12909
> URL: https://issues.apache.org/jira/browse/HDFS-12909
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
> Attachments: HDFS-12909.patch
>
>
> Currently URLConnectionFactory#getSSLConnectionConfiguration attempts to 
> create an SSL connection configurator even if security is not enabled. This 
> raises the below false warning in the logs.
> {code:java}
> 17/12/08 10:12:03 WARN web.URLConnectionFactory: Cannot load customized ssl 
> related configuration. Fallback to system-generic settings.
> java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
> or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:169)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:87)
>   at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:176)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:164)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:106)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:85)
>   at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:136)
>   at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:128)
>   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:396)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-08 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283706#comment-16283706
 ] 

Rushabh S Shah edited comment on HDFS-12907 at 12/8/17 3:15 PM:


Test failures from the #001 patch.
It fixed the actual test failures from the previous patch.
I have no idea when the EC-related test failures will be fixed.
{noformat}

[INFO] Running org.apache.hadoop.fs.TestUnbuffer
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.3 s 
<<< FAILURE! - in org.apache.hadoop.fs.TestUnbuffer
[ERROR] testUnbufferException(org.apache.hadoop.fs.TestUnbuffer)  Time elapsed: 
0.125 s  <<< FAILURE!
java.lang.AssertionError: Expected test to throw (an instance of 
java.lang.UnsupportedOperationException and exception with message a string 
containing "this stream 
org.apache.hadoop.fs.FSInputStream$$EnhancerByMockitoWithCGLIB$$67df4153 does 
not support unbuffering")
at org.junit.Assert.fail(Assert.java:88)
at 
org.junit.rules.ExpectedException.failDueToMissingException(ExpectedException.java:184)
at 
org.junit.rules.ExpectedException.access$100(ExpectedException.java:85)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:170)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)

[INFO] Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.369 s 
- in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.555 
s - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes
[INFO] Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.583 s 
- in org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 
180.385 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
166.602 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
268.573 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
226.743 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
[INFO] Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
[INFO] Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.64 s 
- in org.apache.hadoop.hdfs.TestErasureCodingPolicies
[INFO] Running org.apache.hadoop.hdfs.TestFileChecksum
[INFO] Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 293.067 
s - in org.apache.hadoop.hdfs.TestFileChecksum
[INFO] Running 
org.apache.hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPoli

[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-08 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283706#comment-16283706
 ] 

Rushabh S Shah commented on HDFS-12907:
---

{noformat}

[INFO] Running org.apache.hadoop.fs.TestUnbuffer
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.3 s 
<<< FAILURE! - in org.apache.hadoop.fs.TestUnbuffer
[ERROR] testUnbufferException(org.apache.hadoop.fs.TestUnbuffer)  Time elapsed: 
0.125 s  <<< FAILURE!
java.lang.AssertionError: Expected test to throw (an instance of 
java.lang.UnsupportedOperationException and exception with message a string 
containing "this stream 
org.apache.hadoop.fs.FSInputStream$$EnhancerByMockitoWithCGLIB$$67df4153 does 
not support unbuffering")
at org.junit.Assert.fail(Assert.java:88)
at 
org.junit.rules.ExpectedException.failDueToMissingException(ExpectedException.java:184)
at 
org.junit.rules.ExpectedException.access$100(ExpectedException.java:85)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:170)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)

[INFO] Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.369 s 
- in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.555 
s - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes
[INFO] Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.583 s 
- in org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 
180.385 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
166.602 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
268.573 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
226.743 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
[INFO] Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
[INFO] Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.64 s 
- in org.apache.hadoop.hdfs.TestErasureCodingPolicies
[INFO] Running org.apache.hadoop.hdfs.TestFileChecksum
[INFO] Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 293.067 
s - in org.apache.hadoop.hdfs.TestFileChecksum
[INFO] Running 
org.apache.hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.744 s 
- in org.apache.hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy

[INFO] Results:
[INFO] 
[ERROR] Failu

[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283681#comment-16283681
 ] 

Daryn Sharp commented on HDFS-12907:


bq.  I hope the kms clients were designed / documented / implemented / reviewed 
perfectly too. Interestingly if you see the history of KMSCP class you'll see 
quite a few attempts to make it 'work for the case of xxx'.
Sadly, no. The KMS client, specifically KMSCP, is the source of the detailed 
problems.

bq. consider some of the past behaviors simply wrong so we don't worry about 
compatibility.
That would be fantastic, but I suspect this was done in a rush to support 
components like Hue, HttpFs, Jupyter, maybe Knox, etc., which means it's going 
to be a hard sell to break them...


> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.
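
To make the proposed relaxation concrete, here is a hypothetical sketch of the 
read-only case the issue wants to allow. The /.reserved/raw prefix is real; 
the zone and file paths are made up for illustration, and today this open() 
would fail for a non-superuser:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RawReadSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Reading through /.reserved/raw skips client-side decryption: the
    // LocatedBlocks carry no FE info, so the bytes come back as stored.
    Path raw = new Path("/.reserved/raw/zone/file");  // illustrative path
    try (FSDataInputStream in = fs.open(raw)) {
      byte[] buf = new byte[4096];
      int n = in.read(buf);  // raw (still encrypted) bytes
      System.out.println("read " + n + " raw bytes");
    }
  }
}
{code}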



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12909) SSLConnectionConfigurator should be created only if security is enabled

2017-12-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283672#comment-16283672
 ] 

Kihwal Lee commented on HDFS-12909:
---

It has more to do with the SSL settings than the security setting. Currently 
there is no clean way to propagate the DFS-level SSL settings to 
URLConnectionFactory.
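
For illustration, a standalone sketch of the kind of guard under discussion, 
not the actual patch: consult the HTTP policy before attempting any SSL setup, 
so the keystore is never read when HTTPS is not in use. The {{dfs.http.policy}} 
key is real; the method names and fallback objects below are assumptions:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class SslGuardSketch {
  static Object getConnectionConfigurator(Configuration conf) {
    // Only attempt SSL setup when the HTTP policy actually requires HTTPS.
    String policy = conf.get("dfs.http.policy", "HTTP_ONLY");
    if (!policy.contains("HTTPS")) {
      return defaultTimeoutConfigurator();  // plain HTTP: no keystore read
    }
    try {
      return buildSslConfigurator(conf);
    } catch (Exception e) {
      // The fallback warning is now only logged when SSL was really wanted.
      System.err.println("Cannot load customized ssl configuration: " + e);
      return defaultTimeoutConfigurator();
    }
  }

  // Stand-ins for the SSLFactory-based setup seen in the stack trace below.
  private static Object buildSslConfigurator(Configuration conf)
      throws Exception {
    return new Object();
  }

  private static Object defaultTimeoutConfigurator() {
    return new Object();
  }
}
{code}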


> SSLConnectionConfigurator should be created only if security is enabled
> ---
>
> Key: HDFS-12909
> URL: https://issues.apache.org/jira/browse/HDFS-12909
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>
> Currently URLConnectionFactory#getSSLConnectionConfiguration attempts to 
> create a SSL connection configurator even if security is not enabled. This 
> raises the below false warning in the logs.
> {code:java}
> 17/12/08 10:12:03 WARN web.URLConnectionFactory: Cannot load customized ssl 
> related configuration. Fallback to system-generic settings.
> java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
> or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.(FileInputStream.java:138)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:169)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.(ReloadingX509TrustManager.java:87)
>   at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:176)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:164)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:106)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:85)
>   at org.apache.hadoop.hdfs.tools.DFSck.(DFSck.java:136)
>   at org.apache.hadoop.hdfs.tools.DFSck.(DFSck.java:128)
>   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:396)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12890) Ozone: XceiverClient should have upper bound on async requests

2017-12-08 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12890:
---
Attachment: HDFS-12890-HDFS-7240.004.patch

Thanks [~msingh] for the review comments.

As per our discussion, if the exception is caught on the netty client side, 
the code now handles it by removing all the pending future responses from the 
response map and completing them exceptionally.

The same exception is caught at XceiverClient, where it is unwrapped if it is 
an IOException, as is done in RequestHedgingProxyProvider, and propagated 
through to the client. I verified this by hardcoding an exception in the 
channelRead0 function in XceiverClientHandler and running the putKey test in 
testOzoneRpcClient; it works.
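
For readers following along, a self-contained sketch of the handling described 
above, under the assumption that pending responses live in a concurrent map 
keyed by request id. The class and field names here are illustrative, not 
taken from the patch:

{code:java}
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class PendingResponseSketch extends ChannelInboundHandlerAdapter {
  // Outstanding request ids mapped to their not-yet-completed responses.
  private final ConcurrentMap<String, CompletableFuture<Object>> pending =
      new ConcurrentHashMap<>();

  @Override
  public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    // Drain the response map and fail every pending future, so callers
    // blocked on get() observe the error instead of hanging forever.
    Iterator<Map.Entry<String, CompletableFuture<Object>>> it =
        pending.entrySet().iterator();
    while (it.hasNext()) {
      it.next().getValue().completeExceptionally(new IOException(cause));
      it.remove();
    }
    ctx.close();
  }
}
{code}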

> Ozone: XceiverClient should have upper bound on async requests
> --
>
> Key: HDFS-12890
> URL: https://issues.apache.org/jira/browse/HDFS-12890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12890-HDFS-7240.001.patch, 
> HDFS-12890-HDFS-7240.002.patch, HDFS-12890-HDFS-7240.003.patch, 
> HDFS-12890-HDFS-7240.004.patch
>
>
> XceiverClient-ratis maintains an upper bound on the number of outstanding 
> async requests. XceiverClient should also impose an upper bound on the 
> number of outstanding async requests received from clients for writes.
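
One standard way to impose such a bound is a semaphore that gates submissions 
and is released when each response future completes. A minimal sketch under 
that assumption; the limit and all names are illustrative, not from the patch:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

public class BoundedAsyncSketch {
  private static final int MAX_OUTSTANDING = 100;  // assumed limit
  private final Semaphore permits = new Semaphore(MAX_OUTSTANDING);

  public CompletableFuture<String> sendAsync(String request)
      throws InterruptedException {
    // Blocks once MAX_OUTSTANDING requests are already in flight.
    permits.acquire();
    CompletableFuture<String> response = dispatch(request);
    // Release the permit whether the request succeeds or fails.
    response.whenComplete((r, t) -> permits.release());
    return response;
  }

  private CompletableFuture<String> dispatch(String request) {
    // Stand-in for the real async channel write.
    return CompletableFuture.completedFuture("ok:" + request);
  }
}
{code}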



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDFS-12909) SSLConnectionConfigurator should be created only if security is enabled

2017-12-08 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain moved HADOOP-15103 to HDFS-12909:
-

Key: HDFS-12909  (was: HADOOP-15103)
Project: Hadoop HDFS  (was: Hadoop Common)

> SSLConnectionConfigurator should be created only if security is enabled
> ---
>
> Key: HDFS-12909
> URL: https://issues.apache.org/jira/browse/HDFS-12909
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>
> Currently URLConnectionFactory#getSSLConnectionConfiguration attempts to 
> create a SSL connection configurator even if security is not enabled. This 
> raises the below false warning in the logs.
> {code:java}
> 17/12/08 10:12:03 WARN web.URLConnectionFactory: Cannot load customized ssl 
> related configuration. Fallback to system-generic settings.
> java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
> or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.(FileInputStream.java:138)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:169)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.(ReloadingX509TrustManager.java:87)
>   at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
>   at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:176)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:164)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:106)
>   at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:85)
>   at org.apache.hadoop.hdfs.tools.DFSck.(DFSck.java:136)
>   at org.apache.hadoop.hdfs.tools.DFSck.(DFSck.java:128)
>   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:396)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12308) Erasure Coding: Provide DistributedFileSystem & DFSClient API to return the effective EC policy on a directory or file, including the replication policy

2017-12-08 Thread chencan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283275#comment-16283275
 ] 

chencan commented on HDFS-12308:


Hi [~SammiChen], I have submitted the first patch. Please check whether it is 
as you expected. Thank you!

> Erasure Coding: Provide DistributedFileSystem &  DFSClient API to return the 
> effective EC policy on a directory or file, including the replication policy
> -
>
> Key: HDFS-12308
> URL: https://issues.apache.org/jira/browse/HDFS-12308
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
> Environment: Provide DistributedFileSystem &  DFSClient API to return 
> the effective EC policy on a directory or file, including the replication 
> policy. The API names will be like {{getNominalErasureCodingPolicy(PATH)}} and 
> {{getAllNominalErasureCodingPolicies}}. 
>Reporter: SammiChen
>Assignee: chencan
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HADOOP-12308.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12308) Erasure Coding: Provide DistributedFileSystem & DFSClient API to return the effective EC policy on a directory or file, including the replication policy

2017-12-08 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-12308:
---
Attachment: HADOOP-12308.patch

> Erasure Coding: Provide DistributedFileSystem &  DFSClient API to return the 
> effective EC policy on a directory or file, including the replication policy
> -
>
> Key: HDFS-12308
> URL: https://issues.apache.org/jira/browse/HDFS-12308
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
> Environment: Provide DistributedFileSystem &  DFSClient API to return 
> the effective EC policy on a directory or file, including the replication 
> policy. The API names will be like {{getNominalErasureCodingPolicy(PATH)}} and 
> {{getAllNominalErasureCodingPolicies}}. 
>Reporter: SammiChen
>Assignee: chencan
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HADOOP-12308.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-12-08 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283263#comment-16283263
 ] 

Chris Douglas commented on HDFS-10285:
--

As [~umamaheswararao] mentioned earlier, HDFS-12090 proposes to build on the 
SPS. [~virajith] wrote a 
[prototype|https://github.com/Microsoft-CISL/hadoop-prototype/tree/SPS-9806] 
demonstrating the core of the design. The Mover is not sufficient; we're 
banking on a more robust solution for HSM, and wherever it lives, we need 
something like the SPS.

NameNode load is important, but the decision to implement the balancer as an 
external process [predates|https://issues.apache.org/jira/browse/HADOOP-1652] 
many scalability and performance improvements. To pick one salient example, it 
precedes running with a [read-write 
lock|https://issues.apache.org/jira/browse/HDFS-1093] by almost three years. 
The Mover (IIRC) started with the balancer code. Scans are outside of the 
NameNode today due to a decade-old analysis, and because to move scans into the 
NameNode, features added subsequently would need to be reexamined and possibly 
redesigned. Also, subsequent extensions to, and comfort with, the balancer make 
replacing it unessential. This particular precedent for scans is not a reliable 
guide, on its own. We can be confident that adding load to the NN will drop 
throughput in some cases, but without benchmarks we don't know whether those 
cases are blockers. Have any benchmarks been run, particularly with the SPS 
disabled?

Also, the state-of-the-art for HSM supports neither sophisticated deployment 
nor failover. Many new services and features in YARN are available in preview 
before they even support secure deployments. The NameNode acquired these 
features over years; insisting that services implement that full complement of 
capabilities before anyone can be certain the service is _useful_ is not 
workable, particularly in an open-source project. On that subject, if this 
approach doesn't work out, deleting a separate server is much easier than 
extracting a feature from the NN. Offhand, I can't think of a single example of 
the latter. The aspect-oriented fault injection maybe, but that was both 
outside the NN and only for testing.

[~rakeshr] started to quantify the impact, which will help to either 
tranquilize anxiety about this feature or define thresholds for accepting it. 
Skimming the implementation, some of this could be extracted into an external 
service, but it would not be straightforward. Specifically, the SPS keeps 
references to the namesystem and block manager. To [~anu]'s earlier point, 
"smart" policies using internal data will be very difficult to extract into a 
separate service later, should that become necessary or desirable.

Would it be possible to extract an API for the SPS to other NN components 
(particularly the namesystem, block manager, and datanode manager)? That might 
make the couplings more explicit, ideally so the interface would be sufficient 
as an RPC protocol, if the SPS were moved outside the NN.

bq. I’m curious why it isn’t just part of the standard replication monitoring. 
If the DN is told to replicate to itself, it just does the storage movement.
In addition to the points that Uma raised, for in-memory and provided replicas, 
we'd like to support more than one replica per DN (HDFS-9810). Intra-DN 
rebalancing also may not benefit from deleting replicas until the volume is 
short on space. Copying a temporarily hot replica to SSD, then back to HDD when 
it's cold again is also avoidable overhead, if the SSD replica can just be 
deleted. 
Agreed, it does seem like this is one operation with parameters, not separate 
mechanisms.
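
To illustrate the suggestion about extracting an API, here is a hypothetical 
narrow interface of the kind described; every name in it is an assumption, not 
an existing API. If SPS consumed only such a contract, the same methods could 
later be served over RPC by an out-of-NN implementation:

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

/** Hypothetical view the SPS could depend on instead of NN internals. */
public interface SpsNamesystemView {
  /** Next file id queued for storage-policy satisfaction, or -1 if none. */
  long nextFileToSatisfy() throws IOException;

  /** Storage policy id currently effective on the given file. */
  byte getStoragePolicyId(long fileId) throws IOException;

  /** Current locations and storage types of the file's blocks. */
  List<LocatedBlock> getBlockLocations(long fileId) throws IOException;

  /** Ask a datanode to move one replica to the given storage type. */
  void scheduleMove(LocatedBlock block, String targetStorageType)
      throws IOException;
}
{code}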

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policy. These 
> policies can be set on directory/file to specify the user preference, where 
> to store the physical block. When user set the storage policy before writing 
> data, then the blocks could take advantage of storage policy preferences and 
> stores physical block accordingly. 
> If user set the s

[jira] [Created] (HDFS-12908) Ozone: write chunk call fails because of Metrics registry exception

2017-12-08 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12908:


 Summary: Ozone: write chunk call fails because of Metrics registry 
exception
 Key: HDFS-12908
 URL: https://issues.apache.org/jira/browse/HDFS-12908
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


Write chunk calls fail because of a metrics registration exception.

{code}
2017-12-08 04:02:19,894 WARN org.apache.hadoop.metrics2.util.MBeans: Error 
creating MBean object name: 
Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db
org.apache.hadoop.metrics2.MetricsException: 
org.apache.hadoop.metrics2.MetricsException: 
Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:135)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newMBeanName(DefaultMetricsSystem.java:110)
at org.apache.hadoop.metrics2.util.MBeans.getMBeanName(MBeans.java:155)
at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:87)
at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:77)
at 
org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:115)
at 
org.apache.hadoop.ozone.container.common.utils.ContainerCache.getDB(ContainerCache.java:138)
at 
org.apache.hadoop.ozone.container.common.helpers.KeyUtils.getDB(KeyUtils.java:65)
at 
org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.readContainerInfo(ContainerManagerImpl.java:261)
at 
org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:330)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.handleCreateContainer(Dispatcher.java:399)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.containerProcessHandler(Dispatcher.java:158)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:105)
at 
org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:61)
at 
org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:32)
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1302)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:646)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:581)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at 
io.netty.util.concurr
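
The collision above happens because every RocksDbStore instance registers the 
same MBean name. A hedged sketch of one possible direction, not the committed 
fix: make the registered name unique per instance. MBeans.register(service, 
name, mbean) is a real Hadoop utility; the rest is assumed:

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import javax.management.ObjectName;
import org.apache.hadoop.metrics2.util.MBeans;

public final class RocksDbJmxSketch {
  private static final AtomicLong INSTANCE_ID = new AtomicLong();

  private RocksDbJmxSketch() { }

  /** Register a store MBean under a name that cannot collide. */
  public static ObjectName register(String dbName, Object mbean) {
    // Suffix a monotonically increasing id so two stores opened on the
    // same db file name get distinct JMX object names.
    String unique = dbName + "-" + INSTANCE_ID.incrementAndGet();
    return MBeans.register("Ozone", "RocksDbStore,dbName=" + unique, mbean);
  }
}
{code}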

[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2017-12-08 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283219#comment-16283219
 ] 

Jingcheng Du commented on HDFS-9668:


Thanks a lot for the review [~jojochuang].
I will find time to renew the patch these days. Thanks. 

> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-24.patch, 
> HDFS-9668-25.patch, HDFS-9668-26.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, 
> HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, 
> HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
> {noformat}
> We measured the execution of some operations in FsDatasetImpl during the 
> test. The following is the result.
> !execution_time.png!
> The operations of finalizeBlock, addBlock and createRbw on HDD in a heavy 
> load take a really long time.
> It means one slow operation of finalizeBlock, addBlock and createRbw in a 
> slow storage can block all the other same operations in the same DataNode, 
> esp
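
A standalone sketch, not FsDatasetImpl code, of the optimization direction 
this issue pursues: do the slow file creation outside the coarse dataset lock 
and take the lock only for the in-memory bookkeeping, so one slow disk cannot 
serialize every writer:

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class LockScopeSketch {
  private final Object datasetLock = new Object();
  private final Map<String, File> replicaMap = new HashMap<>();

  File createRbw(String blockId, File volumeDir) throws IOException {
    // Slow part outside the lock: file creation on a loaded HDD can take
    // a long time and must not block writers on other volumes.
    File rbw = new File(volumeDir, blockId + ".rbw");
    if (!rbw.createNewFile()) {
      throw new IOException("Replica already exists: " + rbw);
    }
    // Fast part under the lock: just record the replica in the map.
    synchronized (datasetLock) {
      replicaMap.put(blockId, rbw);
    }
    return rbw;
  }
}
{code}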

[jira] [Comment Edited] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-12-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283102#comment-16283102
 ] 

Uma Maheswara Rao G edited comment on HDFS-10285 at 12/8/17 8:44 AM:
-

{quote}
Here's a rhetorical question: If managing multiple services is hard, why not 
bundle oozie, spark, storm, sqoop, kafka, ranger, knox, hive server, etc in the 
same process? Or ZK so HA is easier to deploy/manage?
{quote}
A few of my thoughts on this question: each of those projects is built for its 
own purpose, with its own spec, not just to help HDFS or any other single 
project. And none of those projects needs access to another project's internal 
data structures, whereas SPS functions only for HDFS and accesses its internal 
data structures. Even if it were forcibly separated out, we would need to 
expose ‘for SPS only’ RPC APIs. This prompts me to put the question the other 
way as well: does it make sense to separate ReplicationMonitor into its own 
process? Is it fine to run the EDEK work as a separate one? Is it OK to run 
other threads (like the decommissioning task) as separate processes 
coordinated via RPC, so that the NameSystem class becomes very lightweight? I 
think the value vs. cost will decide whether to separate things out or merge 
them into a single process.

Coming to the ZK part: as ZK is not built only for HDFS, I don't think the 
comparison applies; it is a general-purpose coordination system. Technically 
we cannot keep monitoring services inside the NN, because the worry is 
precisely that the NN may die and need failover, so an external process must 
do the monitoring. Anyway, I think the whole discussion is about services 
inside a project, not across projects, IMHO.
Here SPS provides only the missing functionality of HSM, that is, end-to-end 
policy satisfaction. So, IMV, for users it may not be worth managing an 
additional process just to get that missing piece of a particular feature.

{quote}
Today, I looked at the code more closely. It can hold the lock (read lock, but 
still) way too long. Notably, but not limited to, you can’t hold the lock while 
doing block placement.
{quote}

Appreciate your review, Daryn. I think it should be easy to address. We will 
make sure to address the comment before the merge; does that make sense?

{quote}
I should start sending bills to everyone who makes this fraudulent claim. . 
FSDirectory#addToInodeMap imposes a nontrivial performance penalty even when 
SPS is not enabled. We had to hack out the similar EZ check because it had a 
noticeable performance impact esp. on startup. However now that we support EZ, 
I need to revisit optimizing it.
{quote}
Thanks for the review! Nice find. Fundamentally, if SPS is disabled we don't 
even need to load things into the queues, since nothing will process them. 
Adding an enabled check avoids even those enqueue calls in the disabled case, 
so the disabled path costs just one extra boolean check. With this change the 
impact should be negligible, IIUC. We will take this comment. Thanks.
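
A minimal sketch of the one-boolean guard described above, with illustrative 
names rather than anything from the patch:

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class SpsQueueSketch {
  private volatile boolean spsEnabled;  // set from configuration / reconfig
  private final Queue<Long> pendingFileIds = new ConcurrentLinkedQueue<>();

  void onStoragePolicyChange(long fileId) {
    // Disabled case costs exactly one boolean check: nothing is queued
    // for a satisfier thread that will never run.
    if (!spsEnabled) {
      return;
    }
    pendingFileIds.add(fileId);
  }
}
{code}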

{quote}
I’m curious why it isn’t just part of the standard replication monitoring. If 
the DN is told to replicate to itself, it just does the storage movement.
{quote}
That's a good question. The overall approach is exactly the same as the RM's. 
The RM builds up its own queue for redundancy blocks, and the under-replication 
scan/check happens at the block level, which makes sense there. Whereas in 
SPS, the policy changes on a file, so all blocks in that file need movement, 
and the policy check has to happen in coordination with where the replicas are 
currently stored. So we track the queues at the file level here and scan/check 
all blocks of a file together at once. Also, we wanted to provide an on-the-fly 
reconfigure feature, and we carefully decided not to interfere with the 
replication logic, which should be given higher priority than SPS work. While 
scheduling blocks, we respect the xmits counts, which are shared between the 
RM and SPS to control DN load. When sending tasks to a DN, assignment priority 
is given to replication/EC blocks first, then SPS blocks. So, as part of the 
impact analysis, we concluded that keeping SPS in its own thread, rather than 
in the RM thread, would be cleaner and safer than running in that same RM loop.

