[jira] [Commented] (HDFS-5692) viewfs shows resolved path in FileNotFoundException

2016-11-11 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659241#comment-15659241
 ] 

Manoj Govindassamy commented on HDFS-5692:
--

Thanks for the comment. Yes, it is possible to create a new exception by 
extending FileNotFoundException and later calling initCause(). The only problem 
is that the top-level exception would again be the custom one rather than 
FileNotFoundException, though try/catch would still work. The reluctance to go 
this route is that the new custom exception is not a well-known one and would 
end up being specific to the {{listStatus}} API. It would be preferable to 
throw either the well-known exceptions or the ones publicly exposed by 
FileSystem/ViewFileSystem. There is {{NotInMountPointException}}, already 
available under {{ViewFileSystem}}, which is the closest match for errors 
relating to {{Path}}. Its current definition doesn't accept a throwable, but 
that should be doable. The real question is whether any other exception would 
convey the meaning as correctly as FileNotFoundException does. Any further 
thoughts on this, please?
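
To make the subclassing option concrete, here is a minimal sketch, assuming a 
hypothetical class name (this is not an actual Hadoop API):

{code}
import java.io.FileNotFoundException;

/**
 * Hypothetical wrapper. Because it extends FileNotFoundException, existing
 * catch (FileNotFoundException e) blocks still match; the drawback discussed
 * above is that the top-level type becomes this custom subclass.
 */
public class ViewFsFileNotFoundException extends FileNotFoundException {
  public ViewFsFileNotFoundException(String unresolvedPath,
      FileNotFoundException cause) {
    super("File " + unresolvedPath + " does not exist.");
    initCause(cause); // preserve the inner filesystem's stack trace
  }
}
{code}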

> viewfs shows resolved path in FileNotFoundException
> ---
>
> Key: HDFS-5692
> URL: https://issues.apache.org/jira/browse/HDFS-5692
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.2.0
>Reporter: Keith Turner
>Assignee: Manoj Govindassamy
>
> With the following config, if I call fs.listStatus("/nn1/a/b") when 
> {{/nn1/a/b}} does not exist then ...
> {noformat}
> 
>   
> fs.default.name
> viewfs:///
>   
>   
> fs.viewfs.mounttable.default.link./nn1
> hdfs://host1:9000
>   
>   
> fs.viewfs.mounttable.default.link./nn2
> hdfs://host2:9000
>   
> 
> {noformat}
> I will see an error message like the following.  
> {noformat}
> java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}
> I think it would be useful for ViewFS to wrap the FileNotFoundException from 
> the inner filesystem, giving an error message like the following.  The 
> following error message has the resolved and unresolved paths which is very 
> useful for debugging.
> {noformat}
> java.io.FileNotFoundException: File /nn1/a/b does not exist.
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> Caused by: java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11111) Delete something in .Trash using "rm" should be forbidden without safety option

2016-11-11 Thread Lantao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated HDFS-11111:
--
Affects Version/s: 2.7.2
   Status: Patch Available  (was: Open)

> Delete something in .Trash using "rm" should be forbidden without safety 
> option 
> -
>
> Key: HDFS-11111
> URL: https://issues.apache.org/jira/browse/HDFS-11111
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: Lantao Jin
>
> As we discussed in HDFS-11102, double confirmation does not seem to be a 
> graceful solution for users. Still, accidentally deleting files in .Trash is 
> an issue. The behaviour I'm worried about is users {{rm}}-ing something in 
> .Trash without explicitly understanding that those files will not be 
> recoverable. This is in contrast to {{rm}}-ing something with the 
> "-skipTrash" option (that is a very purposeful action).
> So it is not the same case as HADOOP-12358. The solution is to throw an 
> exception reminding the user to add the "-trash" option so that directories 
> in trash are deleted deliberately:
> {code}
> Can not delete something in trash directly! Please add "-trash" or "-T" in "rm" 
> command to do that.
> {code}
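
A rough sketch of where such a guard could live in the shell delete path. All 
names here are illustrative rather than the actual Hadoop shell code; 
{{getTrashRoot}} exists on {{FileSystem}} only in newer branches, and the 
{{deleteFromTrash}} flag is hypothetical:

{code}
// Hypothetical guard: refuse to delete inside .Trash unless -trash/-T is given.
Path trashRoot = fs.getTrashRoot(path);
String rawPath = Path.getPathWithoutSchemeAndAuthority(path).toString();
if (rawPath.startsWith(trashRoot.toUri().getPath()) && !deleteFromTrash) {
  throw new IOException("Can not delete something in trash directly! "
      + "Please add \"-trash\" or \"-T\" in \"rm\" command to do that.");
}
{code}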



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11129) TestAppendSnapshotTruncate fails with bind exception

2016-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659149#comment-15659149
 ] 

Hudson commented on HDFS-11129:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10828 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10828/])
HDFS-11129. TestAppendSnapshotTruncate fails with bind exception. (liuml07: rev 
2ee18fc15e60dff9a42a80eb0c30e8bd8cedc26a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java


> TestAppendSnapshotTruncate fails with bind exception
> 
>
> Key: HDFS-11129
> URL: https://issues.apache.org/jira/browse/HDFS-11129
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11129.000.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.startUp(TestAppendSnapshotTruncate.java:95)
>  Standard Output  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659125#comment-15659125
 ] 

Mingliang Liu commented on HDFS-10872:
--

Thanks for taking care of this. I usually don't bump the patch version number; 
the JIRA will show old patches of the same version (well, the same name) in 
gray, so it's not confusing anyway.

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch, 
> HDFS-10872.010.patch, HDFS-10872.011.patch, jmx-output
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11129) TestAppendSnapshotTruncate fails with bind exception

2016-11-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11129:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to all branches from {{trunk}} through {{branch-2.8}}. Thanks for 
your contribution, [~brahmareddy].

> TestAppendSnapshotTruncate fails with bind exception
> 
>
> Key: HDFS-11129
> URL: https://issues.apache.org/jira/browse/HDFS-11129
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11129.000.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.startUp(TestAppendSnapshotTruncate.java:95)
>  Standard Output  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10872:
-
Attachment: HDFS-10872.011.patch

Looks like Yetus was confused because the newest file was the JMX output. I'm 
attaching the v11 patch, which is a duplicate of v10.

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch, 
> HDFS-10872.010.patch, HDFS-10872.011.patch, jmx-output
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11129) TestAppendSnapshotTruncate fails with bind exception

2016-11-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659104#comment-15659104
 ] 

Mingliang Liu commented on HDFS-11129:
--

+1

It's a good idea to quickly revisit other places that have this problem. 
Luckily I did not find any others; if any turn up, we can address them 
separately.

> TestAppendSnapshotTruncate fails with bind exception
> 
>
> Key: HDFS-11129
> URL: https://issues.apache.org/jira/browse/HDFS-11129
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11129.000.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.startUp(TestAppendSnapshotTruncate.java:95)
>  Standard Output  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659012#comment-15659012
 ] 

Hadoop QA commented on HDFS-11087:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 105 unchanged - 2 fixed = 105 total (was 107) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.tools.TestDelegationTokenFetcher |
| JDK v1.7.0_111 Failed junit tests | hadoop.tools.TestDelegationTokenFetcher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-11087 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838638/HDFS-11087-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  java

[jira] [Commented] (HDFS-11117) Refactor striped file tests to allow flexibly test erasure coding policy

2016-11-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658998#comment-15658998
 ] 

Kai Zheng commented on HDFS-11117:
--

Thanks [~Sammi] for the good work!

1. For the following function, a better parameter name, by the way: 
{{numBytes}} -> {{numBytesInStrip}}. Similarly for {{getRealTotalBlockNum}}.
{code}
-  public static short getRealDataBlockNum(int numBytes) {
-return (short) Math.min(NUM_DATA_BLOCKS,
-(numBytes - 1) / BLOCK_STRIPED_CELL_SIZE + 1);
+  public static short getRealDataBlockNum(int numBytes,
+  ErasureCodingPolicy ecPolicy) {
+return (short) Math.min(ecPolicy.getNumDataUnits(),
+(numBytes - 1) / ecPolicy.getCellSize() + 1);
   }
{code}

2. In {{TestSortLocatedStripedBlock}} and {{TestBlockTokenWithDFSStriped}}: 
why are the variables static? The whole effort here is to avoid statics, 
right? Please check the other tests as well. Even though such tests use the 
system default policy right now, we may also try other policies in the future.
While we're at it, could we also make the variable naming styles consistent, 
as sketched after the snippet below?
{code}
+  private static final short NUM_DATA_BLOCKS =
+  (short) ecPolicy.getNumDataUnits();
+  private static final short NUM_PARITY_BLOCKS =
+  (short) ecPolicy.getNumParityUnits();
+  static final int BLK_GROUP_WIDTH = NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS;

...

+  private final static ErasureCodingPolicy ecPolicy =
+  ErasureCodingPolicyManager.getSystemDefaultPolicy();
+  private final static int dataBlocks = ecPolicy.getNumDataUnits();
+  private final static int parityBlocks = ecPolicy.getNumParityUnits();
+  private final static int cellSize = ecPolicy.getCellSize();
{code}
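
For illustration, a sketch of a consistent, non-static style (the field names 
are merely illustrative, not a proposal for the actual patch):

{code}
// Instance fields, consistently camelCase, derived from the policy under test
// instead of hard-coded system defaults.
private final ErasureCodingPolicy ecPolicy =
    ErasureCodingPolicyManager.getSystemDefaultPolicy();
private final short numDataBlocks = (short) ecPolicy.getNumDataUnits();
private final short numParityBlocks = (short) ecPolicy.getNumParityUnits();
private final int blkGroupWidth = numDataBlocks + numParityBlocks;
private final int cellSize = ecPolicy.getCellSize();
{code}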


> Refactor striped file tests to allow flexibly test erasure coding policy
> 
>
> Key: HDFS-11117
> URL: https://issues.apache.org/jira/browse/HDFS-11117
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-11117-v1.patch, HDFS-11117-v2.patch
>
>
> This task is going to refactor the current striped file test case structure, 
> especially the {{StripedFileTestUtil}} file, which is used in many striped 
> file test cases. All current striped file test cases support only one erasure 
> coding policy: the default RS-DEFAULT-6-3-64k policy. The goal of the 
> refactor is to make the structure more convenient for supporting other 
> erasure coding policies, such as the XOR policy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10966) Enhance Dispatcher logic on deciding when to give up a source DataNode

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658990#comment-15658990
 ] 

Hadoop QA commented on HDFS-10966:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 702 unchanged - 3 fixed = 708 total (was 705) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10966 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838640/HDFS-10966.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 24a5c6f025f4 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fad9609 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17539/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17539/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17539/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17539/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yet

[jira] [Updated] (HDFS-11117) Refactor striped file tests to allow flexibly test erasure coding policy

2016-11-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-11117:
-
Summary: Refactor striped file tests to allow flexibly test erasure coding 
policy  (was: Refactor striped file unit test case structure)

> Refactor striped file tests to allow flexibly test erasure coding policy
> 
>
> Key: HDFS-11117
> URL: https://issues.apache.org/jira/browse/HDFS-11117
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-11117-v1.patch, HDFS-11117-v2.patch
>
>
> This task is going to refactor the current striped file test case structure, 
> especially the {{StripedFileTestUtil}} file, which is used in many striped 
> file test cases. All current striped file test cases support only one erasure 
> coding policy: the default RS-DEFAULT-6-3-64k policy. The goal of the 
> refactor is to make the structure more convenient for supporting other 
> erasure coding policies, such as the XOR policy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10666) Über-jira: Unit tests should not use fixed sleep interval to wait for conditions

2016-11-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-10666.
--
  Resolution: Fixed
Target Version/s: 2.8.0  (was: 3.0.0-alpha2)

> Über-jira: Unit tests should not use fixed sleep interval to wait for 
> conditions
> 
>
> Key: HDFS-10666
> URL: https://issues.apache.org/jira/browse/HDFS-10666
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> There have been dozens of intermittently failing unit tests because they 
> depend on fixed-interval sleeps to wait for conditions to be reached before 
> assertions. This umbrella jira is to replace these sleep statements with:
> * {{GenericTestUtils.waitFor()}} to retry the conditions/assertions (see the 
> sketch below)
> * Triggering internal state changes of the code under test, e.g. for 
> {{MiniDFSCluster}} we can trigger {BlockReports,HeartBeats,DeletionReports}
> * Failing fast if specific exceptions are caught
> * Coordinating the JUnit thread with the activities of the internal threads
> * _ad-hoc fixes_...
> P.S. I don't know how Java 8 closures come into play, but I'd like to see 
> any such effort.
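
As a minimal sketch of the {{GenericTestUtils.waitFor()}} pattern (using the 
Guava {{Supplier}} that the older branches take; the condition here is 
illustrative):

{code}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Instead of: Thread.sleep(5000); assertTrue(cluster.isClusterUp());
// retry the condition until it holds or the timeout elapses.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return cluster.isClusterUp();
  }
}, 100 /* check every 100 ms */, 10000 /* give up after 10 s */);
{code}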



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10966) Enhance Dispatcher logic on deciding when to give up a source DataNode

2016-11-11 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10966:
-
Attachment: HDFS-10966.00.patch

> Enhance Dispatcher logic on deciding when to give up a source DataNode
> --
>
> Key: HDFS-10966
> URL: https://issues.apache.org/jira/browse/HDFS-10966
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Zhe Zhang
>Assignee: Mark Wagner
> Attachments: HDFS-10966.00.patch
>
>
> When a {{Dispatcher}} thread works on a source DataNode, in each iteration it 
> tries to execute a {{PendingMove}}. If no block is moved after 5 iterations, 
> this source (over-utilized) DataNode is given up for this Balancer iteration 
> (20 mins). This is problematic if the source DataNode was heavily loaded at 
> the beginning of the iteration: it will quickly encounter 5 unsuccessful 
> moves and be abandoned.
> We should enhance this logic, e.g. by using elapsed time instead of the 
> number of iterations.
> {code}
> // Check if the previous move was successful
> } else {
>   // source node cannot find a pending block to move, iteration +1
>   noPendingMoveIteration++;
>   // in case no blocks can be moved for source node's task,
>   // jump out of while-loop after 5 iterations.
>   if (noPendingMoveIteration >= MAX_NO_PENDING_MOVE_ITERATIONS) {
> LOG.info("Failed to find a pending move "  + 
> noPendingMoveIteration
> + " times.  Skipping " + this);
> resetScheduledSize();
>   }
> }
> {code}
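
A sketch of the elapsed-time alternative suggested in the description (the 
field and constant names are illustrative, not actual Balancer code):

{code}
// Track when this source first failed to find a pending move, and give up
// only after an interval of no progress rather than a fixed iteration count.
if (firstNoPendingMoveTime == 0) {
  firstNoPendingMoveTime = Time.monotonicNow();
}
if (Time.monotonicNow() - firstNoPendingMoveTime >= maxNoPendingMoveMillis) {
  LOG.info("Found no pending move for "
      + (Time.monotonicNow() - firstNoPendingMoveTime)
      + " ms.  Skipping " + this);
  resetScheduledSize();
}
// firstNoPendingMoveTime would be reset to 0 whenever a move is scheduled.
{code}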



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10966) Enhance Dispatcher logic on deciding when to give up a source DataNode

2016-11-11 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10966:
-
Status: Patch Available  (was: Open)

[~mwagner] wrote an internal patch (based on 2.6). I'm uploading it on his 
behalf.

[~kihwal] Could you take a look? Thanks!

> Enhance Dispatcher logic on deciding when to give up a source DataNode
> --
>
> Key: HDFS-10966
> URL: https://issues.apache.org/jira/browse/HDFS-10966
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Zhe Zhang
>Assignee: Mark Wagner
>
> When a {{Dispatcher}} thread works on a source DataNode, in each iteration it 
> tries to execute a {{PendingMove}}. If no block is moved after 5 iterations, 
> this source (over-utilized) DataNode is given up for this Balancer iteration 
> (20 mins). This is problematic if the source DataNode was heavily loaded at 
> the beginning of the iteration: it will quickly encounter 5 unsuccessful 
> moves and be abandoned.
> We should enhance this logic, e.g. by using elapsed time instead of the 
> number of iterations.
> {code}
> // Check if the previous move was successful
> } else {
>   // source node cannot find a pending block to move, iteration +1
>   noPendingMoveIteration++;
>   // in case no blocks can be moved for source node's task,
>   // jump out of while-loop after 5 iterations.
>   if (noPendingMoveIteration >= MAX_NO_PENDING_MOVE_ITERATIONS) {
> LOG.info("Failed to find a pending move "  + 
> noPendingMoveIteration
> + " times.  Skipping " + this);
> resetScheduledSize();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11051) Test Balancer behavior when some block moves are slow

2016-11-11 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658867#comment-15658867
 ] 

Zhe Zhang commented on HDFS-11051:
--

Thanks [~linyiqun] for working on this. When filing the JIRA I was thinking 
about a new test that emulates the scenario where some block moves are slow. 
This will exercise our Balancer patches such as HDFS-11015.

> Test Balancer behavior when some block moves are slow
> -
>
> Key: HDFS-11051
> URL: https://issues.apache.org/jira/browse/HDFS-11051
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: balancer & mover
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
> Attachments: HDFS-11051.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.

2016-11-11 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-11087:
---
Attachment: HDFS-11087-branch-2.001.patch

Thanks for the review, [~shv]. Attaching v001 patch with an additional comment.

> NamenodeFsck should check if the output writer is still writable.
> -
>
> Key: HDFS-11087
> URL: https://issues.apache.org/jira/browse/HDFS-11087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
> Attachments: HDFS-11087-branch-2.000.patch, 
> HDFS-11087-branch-2.001.patch, HDFS-11087.branch-2.000.patch
>
>
> {{NamenodeFsck}} keeps running even after the client was interrupted. So if 
> you start {{fsck /}} on a large namespace and kill the client, the NameNode 
> will keep traversing the tree for hours although there is nobody to receive 
> the result. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.

2016-11-11 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658651#comment-15658651
 ] 

Konstantin Shvachko edited comment on HDFS-11087 at 11/12/16 1:35 AM:
--

This makes sense, Erik.
Could you please add a comment before {{checkError()}} based on its JavaDoc?
It is not clear that the intent here is to flush the stream while checking for 
an error.
+1 on the patch modulo the comment line. Will commit once you add it.


was (Author: shv):
This makes sense, Erik.
Could you please add a comment before {{checkError()}} based on its JavaDoc.
It is not clear that the intent here to flush the stream while checking the 
error.
+1 on the patch modular the comment line. Will commit once add it.

> NamenodeFsck should check if the output writer is still writable.
> -
>
> Key: HDFS-11087
> URL: https://issues.apache.org/jira/browse/HDFS-11087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
> Attachments: HDFS-11087-branch-2.000.patch, 
> HDFS-11087.branch-2.000.patch
>
>
> {{NamenodeFsck}} keeps running even after the client was interrupted. So if 
> you start {{fsck /}} on a large namespace and kill the client, the NameNode 
> will keep traversing the tree for hours although there is nobody to receive 
> the result. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11130) Block Storage : add storage client to server protocol

2016-11-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658724#comment-15658724
 ] 

Anu Engineer commented on HDFS-11130:
-

[~vagarychen] Thanks for posting this patch. It looks quite good. I had one 
comment, which can be handled in a later JIRA if you think that is easier.

The container IDs are returned or serialized via {{repeated string 
allContainerIDs = 6;}}. Both our read and write operations depend on the 
order always being correct; that is, when we want to read block0, we have to 
locate the first container. 

I know that protoc and levelDB serialization are unlikely to mess this 
information up. However, what bothers me is that we would have no way of 
knowing if we did mess it up. 

Would it make sense to create a

{noformat}
message ContainerIDProto {
   required string containerID = 1;
   required uint64 index = 2;
}
{noformat}
Then we could replace {{repeated string allContainerIDs = 6;}} with 
{{repeated ContainerIDProto allContainerIDs = 6;}}.
This has the advantage of giving us a check variable: the sorted order of the 
array plus the explicit index value lets us verify that we are accessing the 
right container.

If we do this, I am hoping we will persist {{ContainerIDProto}} in the 
createVolume API in the cBlock Server. 
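
A sketch of the client-side check this enables, assuming the standard 
protobuf-generated accessors for the repeated field (the surrounding 
{{volume}} object is hypothetical):

{code}
// Verify that the serialized order matches the stored indices before use.
for (int i = 0; i < volume.getAllContainerIDsCount(); i++) {
  ContainerIDProto id = volume.getAllContainerIDs(i);
  if (id.getIndex() != i) {
    throw new IOException("Container list corrupted: containerID "
        + id.getContainerID() + " has index " + id.getIndex()
        + " but appears at position " + i);
  }
}
{code}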


> Block Storage : add storage client to server protocol
> -
>
> Key: HDFS-11130
> URL: https://issues.apache.org/jira/browse/HDFS-11130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11130-HDFS-7240.001.patch
>
>
> This JIRA adds the protocol between the block storage client side and the 
> server side. For now, the only operation is mounting a volume. More 
> specifically, when a user mounts a volume on the client side, the client 
> checks with the server to verify that it is a valid volume to mount. On a 
> valid mount request, the server will also piggyback meta information about 
> the volume back to the client. 
> Note that the actual reads/writes on the volume never go through the server; 
> as long as the volume is mounted on the client, it is entirely the client's 
> job to communicate with the underlying storage layer (SCM in this case)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10802) [SPS]: Add satisfyStoragePolicy API in HdfsAdmin

2016-11-11 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658719#comment-15658719
 ] 

Uma Maheswara Rao G commented on HDFS-10802:


Hey [~yuanbo], are you working on this? Just wanted to check the status.

> [SPS]: Add satisfyStoragePolicy API in HdfsAdmin
> 
>
> Key: HDFS-10802
> URL: https://issues.apache.org/jira/browse/HDFS-10802
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Uma Maheswara Rao G
>Assignee: Yuanbo Liu
>
> This JIRA is to track the work of adding a user/admin API for calling 
> satisfyStoragePolicy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11094) TestLargeBlockReport fails intermittently

2016-11-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658693#comment-15658693
 ] 

Mingliang Liu commented on HDFS-11094:
--

I think this should work. Having the HAState namespace info in the first part 
of the handshake with the NameNode makes sense. I'm +1 on this idea but need a 
second opinion. Ping [~daryn] and [~arpitagarwal].

One concern is backward compatibility if we change 
{{HeartbeatRequestProto}}. By the way, can you update the subject and 
description of this JIRA to make them more descriptive? The failing test is 
just one of the motivations now.

> TestLargeBlockReport fails intermittently
> -
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, 
> HDFS-11094.003.patch, HDFS-11094.004.patch
>
>
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:96)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658667#comment-15658667
 ] 

Hadoop QA commented on HDFS-10930:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  5m  
8s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 481 unchanged - 12 fixed = 484 total (was 493) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 
unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  ris is null guaranteed to be dereferenced in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long) on exception path  Dereferenced at BlockPoolSlice.java:be dereferenced 
in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long) on exception path  Dereferenced at BlockPoolSlice.java:[line 753] |
|  |  Possible null pointer dereference of ris in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long)  Dereferenced at BlockPoolSlice.java:ris in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long)  Dereferenced at BlockPoolSlice.java:[line 753] |
|  |  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long) may fail to clean up java.io.InputStream on checked exception  
Obligation to clean up resource created at BlockPoolSlice.java:clean up 
java.io.InputStream on checked exception  Obligation to clean up resource 
created at BlockPoolSlice.java:[line 718] is not discharged |
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
|   | hadoop.hdfs.server.datanode

[jira] [Commented] (HDFS-11087) NamenodeFsck should check if the output writer is still writable.

2016-11-11 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658651#comment-15658651
 ] 

Konstantin Shvachko commented on HDFS-11087:


This makes sense, Erik.
Could you please add a comment before {{checkError()}} based on its JavaDoc?
It is not clear that the intent here is to flush the stream while checking for 
an error.
+1 on the patch modulo the comment line. Will commit once you add it.
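
For context, a minimal sketch of the pattern being requested. 
{{PrintWriter.checkError()}} flushes the stream and returns true if an error 
has occurred; the exception message here is illustrative:

{code}
// checkError() flushes 'out' and reports whether the underlying stream has
// failed (e.g. because the fsck client disconnected).
if (out.checkError()) {
  throw new IOException("fsck output is no longer writable; "
      + "the client has likely gone away");
}
{code}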

> NamenodeFsck should check if the output writer is still writable.
> -
>
> Key: HDFS-11087
> URL: https://issues.apache.org/jira/browse/HDFS-11087
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
> Attachments: HDFS-11087-branch-2.000.patch, 
> HDFS-11087.branch-2.000.patch
>
>
> {{NamenodeFsck}} keeps running even after the client was interrupted. So if 
> you start {{fsck /}} on a large namespace and kill the client, the NameNode 
> will keep traversing the tree for hours although there is nobody to receive 
> the result. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658608#comment-15658608
 ] 

Mingliang Liu commented on HDFS-10872:
--

Still +1. Thanks for updating the code.

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch, 
> HDFS-10872.010.patch, jmx-output
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-10872:
---
Attachment: jmx-output
HDFS-10872.010.patch

Thanks for looking, [~liuml07].

Attaching the v010 patch with updated names for the metrics: 
{{FSN(Read|Write)LockOperationName}}. I've also attached an example of what 
the emitted metrics look like on the JMX page ({{jmx-output}}). 
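
For reference, a minimal sketch of how a {{MutableRate}} metric can track lock 
hold times (the registry and metric names are illustrative, not the patch's 
actual code):

{code}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;
import org.apache.hadoop.util.Time;

// Record how long an operation held the FSNamesystem read lock.
MetricsRegistry registry = new MetricsRegistry("FSNamesystemLock");
MutableRate getBlockLocationsRate =
    registry.newRate("FSNReadLockGetBlockLocations");

long start = Time.monotonicNow();
// ... do work under the read lock ...
getBlockLocationsRate.add(Time.monotonicNow() - start);
// MutableRate then exposes the op count and average hold time via JMX.
{code}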

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch, 
> HDFS-10872.010.patch, jmx-output
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11126) Ozone: Add small file support RPC

2016-11-11 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658549#comment-15658549
 ] 

Chen Liang commented on HDFS-11126:
---

Looks good to me. +1

> Ozone: Add small file support RPC
> -
>
> Key: HDFS-11126
> URL: https://issues.apache.org/jira/browse/HDFS-11126
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11126-HDFS-7240.001.patch
>
>
> Add an RPC to send both data and metadata together in one RPC. This is useful 
> when we want to read and write small files, say less than 1 MB. This API is 
> very useful for Ozone and cBlock (HDFS-11118).
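
To make the shape concrete, such a combined call could look roughly like the 
sketch below (a hypothetical sketch; all names are assumptions, not the actual 
patch):

{code}
import java.io.IOException;

/** Hypothetical sketch of a combined small-file RPC (assumed names). */
interface SmallFileProtocol {
  /** One round trip writes both the key metadata and the file bytes. */
  void putSmallFile(String volume, String bucket, String key, byte[] data)
      throws IOException;

  /** One round trip returns the file bytes together with the key info. */
  SmallFileResult getSmallFile(String volume, String bucket, String key)
      throws IOException;
}

/** Carrier for data plus metadata returned by getSmallFile. */
class SmallFileResult {
  byte[] data;
  long createdTime;
  long size;
}
{code}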



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11118) Block Storage for HDFS

2016-11-11 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658515#comment-15658515
 ] 

Anu Engineer commented on HDFS-11118:
-

[~jpallas] Thanks for taking the time to comment. I really appreciate you 
voicing your concerns. I will try to address each of these points in the 
following sections.
 
bq. it's cool stuff that is useful, but I just am not convinced that it belongs 
in the existing HDFS service

I am not sure if you have seen all the discussions in HDFS-5477 and HDFS-7240. 
Ozone (HDFS-7240) borrows from the ideas of HDFS-5477. Storage containers create 
a block manager that is separate from the Namenode. The primary rationale for 
Block Management as a Service is the scaling of HDFS. This is the same problem 
we were facing when building Ozone. Separating out block management and 
namespace management allows us to build different kinds of namespaces on top of 
Block Management as a Service (Storage Containers). So HDFS, Ozone and cBlock 
are different kinds of namespaces on the same block service. In fact, due to 
the block management separation, you will see that cBlock, Ozone and HDFS 
itself become much simpler: they now only have to deal with namespace 
management.

So what we have is a unified block storage manager and a set of Namespace 
managers. For any storage service to work, it must have both name services and 
block services. Hence the choice to place this service alongside HDFS/Ozone.
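
As a rough illustration of that layering (a sketch only, not code from any 
branch):

{code}
import java.io.IOException;

/** The shared block layer: Storage Containers, one implementation. */
interface BlockService {
  String allocateContainer(String containerName) throws IOException;
  void deleteContainer(String containerName) throws IOException;
}

/**
 * HDFS, Ozone and cBlock would each be a namespace manager that maps its
 * own naming scheme (paths, object keys, volume offsets) onto containers.
 */
interface NamespaceManager {
  String resolveToContainer(String nameInThisNamespace) throws IOException;
}
{code}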

Please watch HDFS-10419 if you would like to see how HDFS will evolve to use 
Storage Containers.



bq. Like Ozone, it increases the complexity of the datanode, and the datanode 
already has a history of being, well, rather buggy.


Respectfully, I disagree. Storage Containers allow us to create a datanode that 
can reduce the memory requirements of the Namenode and will itself be simpler. 
Storage containers will allow us to have a good, clear separation of namespace 
and block space. In fact, the core thesis is that separating these components 
will allow us to scale better. If you are interested in a deep discussion of 
this, please let me know and we can discuss it in depth in some of the Ozone 
JIRAs.



bq. I'm also frankly astonished that there's a feature branch with work being 
committed already without any discussion on this proposal, but maybe I just 
don't have a good understanding of Hadoop norms.

My apologies if you think I did not wait long enough for the community to 
comment. It was not my intention to short-circuit that process at all. In fact, 
I am eager to hear community feedback. 
I posted this JIRA on Monday and we are posting these patches on the *Ozone* 
branch. It was my intention to share with the community not only the proposal 
but also some code that gives you a better understanding of what is being 
proposed. If you think we should open a new branch (after due discussion), I am 
more than willing to do so. Please let me know if that would address your 
concern.



> Block Storage for HDFS
> --
>
> Key: HDFS-11118
> URL: https://issues.apache.org/jira/browse/HDFS-11118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: cblock-proposal.pdf
>
>
> This JIRA proposes extending HDFS to provide replicated block storage 
> capabilities using Storage Containers. This would allow users to run 
> unmodified programs that assume they are running on a POSIX file system.
> With this extension, HDFS can be used like a block store. For example, YARN 
> jobs could mount and use a volume at will. This is made possible by 
> leveraging Storage Containers and will share the storage layer with Ozone and 
> HDFS in the future.
> Please see the attached design document for more details on this proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11094) TestLargeBlockReport fails intermittently

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658493#comment-15658493
 ] 

Hadoop QA commented on HDFS-11094:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project: The patch generated 15 new 
+ 256 unchanged - 7 fixed = 271 total (was 263) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11094 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838619/HDFS-11094.004.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux 254f09d01d65 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ede1a47 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17534/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |

[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup

2016-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658488#comment-15658488
 ] 

Hudson commented on HDFS-11119:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10826 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10826/])
HDFS-11119. Support for parallel checking of StorageLocations on (arp: rev 
3d26716547ceb32c5b9ed04cd9ec05b3421a)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/VolumeCheckResult.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/checker/TestStorageLocationChecker.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/StorageLocationChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java


> Support for parallel checking of StorageLocations on DataNode startup
> -
>
> Key: HDFS-11119
> URL: https://issues.apache.org/jira/browse/HDFS-11119
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
> parallelize checking {{StorageLocation}}s on DataNode startup.
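
In miniature, the idea looks like the sketch below (assumed names; the real 
implementation is {{StorageLocationChecker}} on top of the {{AsyncChecker}} 
abstraction): check every location concurrently and collect the failures, 
instead of probing volumes one at a time.

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class ParallelLocationCheckSketch {
  /** Check every location concurrently; return the ones that failed. */
  static List<String> checkAll(List<String> locations, long timeoutSec)
      throws InterruptedException {
    ExecutorService pool =
        Executors.newFixedThreadPool(Math.max(1, locations.size()));
    Map<String, Future<Boolean>> results = new HashMap<>();
    for (String loc : locations) {
      results.put(loc, pool.submit(() -> checkLocation(loc)));
    }
    List<String> failed = new ArrayList<>();
    for (Map.Entry<String, Future<Boolean>> e : results.entrySet()) {
      try {
        if (!e.getValue().get(timeoutSec, TimeUnit.SECONDS)) {
          failed.add(e.getKey());
        }
      } catch (ExecutionException | TimeoutException ex) {
        failed.add(e.getKey()); // errors and hangs count as failed volumes
      }
    }
    pool.shutdownNow();
    return failed;
  }

  private static boolean checkLocation(String loc) {
    // stand-in for a real disk check (exists, readable, writable, ...)
    File dir = new File(loc);
    return dir.isDirectory() && dir.canWrite();
  }
}
{code}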



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup

2016-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658448#comment-15658448
 ] 

ASF GitHub Bot commented on HDFS-11119:
---

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/155


> Support for parallel checking of StorageLocations on DataNode startup
> -
>
> Key: HDFS-11119
> URL: https://issues.apache.org/jira/browse/HDFS-11119
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
> parallelize checking {{StorageLocation}}s on DataNode startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658441#comment-15658441
 ] 

Hudson commented on HDFS-10941:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10825 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10825/])
HDFS-10941. Improve BlockManager#processMisReplicatesAsync log. (xyao: rev 
4484b48498b2ab2a40a404c487c7a4e875df10dc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Improve BlockManager#processMisReplicatesAsync log
> --
>
> Key: HDFS-10941
> URL: https://issues.apache.org/jira/browse/HDFS-10941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10941.001.patch, HDFS-10941.002.patch, 
> HDFS-10941.002.patch, HDFS-10941.003.patch
>
>
> BlockManager#processMisReplicatesAsync is the daemon thread running inside 
> the namenode to handle misreplicated blocks. As shown below, it has a trace 
> log for each block in the cluster being processed (1 blocks per 
> iteration after a 10s sleep). 
> {code}
>   MisReplicationResult res = processMisReplicatedBlock(block);
>   if (LOG.isTraceEnabled()) {
> LOG.trace("block " + block + ": " + res);
>   }
> {code}
> However, it is not very useful, as dumping every block in the cluster will 
> overwhelm the namenode log without much useful information, assuming the 
> majority of the blocks are not over/under-replicated. This ticket is opened 
> to improve the log for easy troubleshooting of block-replication related 
> issues by:
>  
> 1) adding a debug log for blocks that get an under/over-replicated result 
> during {{processMisReplicatedBlock()}}, 
> 2) or changing to a trace log only for blocks that get a non-OK result during 
> {{processMisReplicatedBlock()}} 
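
One possible shape of the change, sketched against the snippet quoted above 
(not the committed diff; an {{OK}} value on the result enum is implied by the 
"non-OK" wording):

{code}
  MisReplicationResult res = processMisReplicatedBlock(block);
  // only blocks with an interesting (non-OK) result are worth logging
  if (res != MisReplicationResult.OK && LOG.isDebugEnabled()) {
    LOG.debug("block " + block + ": " + res);
  }
{code}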



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-11-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10930:
--
Attachment: HDFS-10930.05.patch

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentation are 
> currently spread across many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and troubleshooting. 
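
The consolidation could take roughly this shape (an illustrative sketch with 
assumed names, not the patch): funnel datanode file IO through one wrapper so 
the instrumentation hooks live in a single place.

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class DatanodeIoWrapperSketch {
  private final File file;

  DatanodeIoWrapperSketch(File file) {
    this.file = file;
  }

  /** All write-path opens funnel through here. */
  OutputStream openForWrite() throws IOException {
    long start = System.nanoTime();
    try {
      // real code would also wrap the returned stream to time the writes
      return new FileOutputStream(file);
    } finally {
      record("open-write", System.nanoTime() - start);
    }
  }

  /** Single choke point for metrics, logging and troubleshooting hooks. */
  private void record(String op, long nanos) {
    System.out.printf("io op=%s file=%s nanos=%d%n", op, file, nanos);
  }
}
{code}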



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-11-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10930:
--
Attachment: (was: HADOOP-10930.05.patch)

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentation are 
> currently spread across many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and troubleshooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-11-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10930:
--
Attachment: HADOOP-10930.05.patch

Updated the patch to fix the checkstyle, findbugs and unit test issues.

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-10930.05.patch, HDFS-10930.01.patch, 
> HDFS-10930.02.patch, HDFS-10930.03.patch, HDFS-10930.04.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentation are 
> currently spread across many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and troubleshooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10941:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~vagarychen] for the contribution and [~xiaobingo] for the reviews. 
I've committed the fix to trunk, branch-2.8 and branch-2.

> Improve BlockManager#processMisReplicatesAsync log
> --
>
> Key: HDFS-10941
> URL: https://issues.apache.org/jira/browse/HDFS-10941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10941.001.patch, HDFS-10941.002.patch, 
> HDFS-10941.002.patch, HDFS-10941.003.patch
>
>
> BlockManager#processMisReplicatesAsync is the daemon thread running inside 
> the namenode to handle misreplicated blocks. As shown below, it has a trace 
> log for each block in the cluster being processed (1 blocks per 
> iteration after a 10s sleep). 
> {code}
>   MisReplicationResult res = processMisReplicatedBlock(block);
>   if (LOG.isTraceEnabled()) {
> LOG.trace("block " + block + ": " + res);
>   }
> {code}
> However, it is not very useful, as dumping every block in the cluster will 
> overwhelm the namenode log without much useful information, assuming the 
> majority of the blocks are not over/under-replicated. This ticket is opened 
> to improve the log for easy troubleshooting of block-replication related 
> issues by:
>  
> 1) adding a debug log for blocks that get an under/over-replicated result 
> during {{processMisReplicatedBlock()}}, 
> 2) or changing to a trace log only for blocks that get a non-OK result during 
> {{processMisReplicatedBlock()}} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11127) Block Storage : add block storage service protocol

2016-11-11 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-11127:

Fix Version/s: HDFS-7240

> Block Storage : add block storage service protocol
> --
>
> Key: HDFS-11127
> URL: https://issues.apache.org/jira/browse/HDFS-11127
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11127-HDFS-7240.001.patch, 
> HDFS-11127-HDFS-7240.002.patch
>
>
> This JIRA adds the block service protocol. This protocol is exposed to 
> clients for volume operations including create, delete, info and list. Note 
> that this protocol has nothing to do with actual data reads/writes on a 
> particular volume.
> (Also note that the term "cblock" is the current term used to refer to the 
> block storage system.)
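
A hypothetical sketch of such a control-plane interface (names are assumptions, 
not the patch):

{code}
import java.io.IOException;
import java.util.List;

/** Volume operations only; no data reads/writes go over this protocol. */
interface CBlockServiceProtocolSketch {
  void createVolume(String user, String volume, long sizeGB)
      throws IOException;
  void deleteVolume(String user, String volume, boolean force)
      throws IOException;
  VolumeInfo infoVolume(String user, String volume) throws IOException;
  List<VolumeInfo> listVolumes(String user) throws IOException;
}

/** Minimal volume descriptor returned by info/list. */
class VolumeInfo {
  String volumeName;
  long sizeGB;
  int blockSizeBytes;
}
{code}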



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658378#comment-15658378
 ] 

Hadoop QA commented on HDFS-10872:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
22s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  9m 
10s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui hadoop-cloud-storage-project 
hadoop-cloud-storage-project/hadoop-cloud-storage {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} 
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
57s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  5m 57s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 57s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 38s{color} | {color:orange} root: The patch generated 39 new + 2837 
unchanged - 11 fixed = 2876 total (was 2848) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 10 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  4m 
23s{color} | {color:red} root generated 11198 new + 0 unchanged - 0 fixed = 
11198 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}140m  5s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 9 ASF License warnings. {color} |

[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658355#comment-15658355
 ] 

Mingliang Liu commented on HDFS-10872:
--

That's a good point. We can add a prefix (e.g. FSNLock) in 
{{FSNamesystemLock#addMetric()}}.

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658342#comment-15658342
 ] 

Zhe Zhang commented on HDFS-10872:
--

Sorry, I just realized an issue with the names of the metrics. Can we more 
explicitly indicate that those metrics are for locks?

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658299#comment-15658299
 ] 

Mingliang Liu commented on HDFS-10872:
--

+1

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658296#comment-15658296
 ] 

Zhe Zhang commented on HDFS-10872:
--

Thanks [~wangda] for helping fix the npm error. Thanks Erik for verifying 
pre-commit.

+1 on the patch; will work on committing to trunk through branch-2.7 now.

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11094) TestLargeBlockReport fails intermittently

2016-11-11 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-11094:
---
Attachment: HDFS-11094.004.patch

The new patch checks for null before calling {{setState()}} to prevent the NPE 
present in all of the failing unit tests. 

> TestLargeBlockReport fails intermittently
> -
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, 
> HDFS-11094.003.patch, HDFS-11094.004.patch
>
>
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:96)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11118) Block Storage for HDFS

2016-11-11 Thread Joe Pallas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658275#comment-15658275
 ] 

Joe Pallas commented on HDFS-11118:
---

I have the same concern about this that I have about Ozone: it's cool stuff 
that is useful, but I just am not convinced that it belongs in the existing 
HDFS service.  Like Ozone, it increases the complexity of the datanode, and the 
datanode already has a history of being, well, rather buggy.

I'm also frankly astonished that there's a feature branch with work being 
committed already without any discussion on this proposal, but maybe I just 
don't have a good understanding of Hadoop norms.

> Block Storage for HDFS
> --
>
> Key: HDFS-11118
> URL: https://issues.apache.org/jira/browse/HDFS-11118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: cblock-proposal.pdf
>
>
> This JIRA proposes extending HDFS to provide replicated block storage 
> capabilities using Storage Containers. This would allow users to run 
> unmodified programs that assume they are running on a POSIX file system.
> With this extension, HDFS can be used like a block store. For example, YARN 
> jobs could mount and use a volume at will. This is made possible by 
> leveraging Storage Containers and will share the storage layer with Ozone and 
> HDFS in the future.
> Please see the attached design document for more details on this proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11133) Ozone: Add allocateContainer RPC

2016-11-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11133:

Attachment: HDFS-11133-HDFS-7240.001.patch

Adding a patch for early code review. This depends on HDFS-11108.

> Ozone: Add allocateContainer RPC
> 
>
> Key: HDFS-11133
> URL: https://issues.apache.org/jira/browse/HDFS-11133
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: oz
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-11133-HDFS-7240.001.patch
>
>
> Add allocateContainer RPC in SCM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11133) Ozone: Add allocateContainer RPC

2016-11-11 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11133:
---

 Summary: Ozone: Add allocateContainer RPC
 Key: HDFS-11133
 URL: https://issues.apache.org/jira/browse/HDFS-11133
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: oz
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-7240


Add allocateContainer RPC in SCM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658252#comment-15658252
 ] 

Erik Krogen commented on HDFS-10872:


TestDFSAdmin is a known issue tracked in HDFS-11122. 
TestRenameWhileOpen/TestDelegationTokenFetcher pass locally and seem unrelated.

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658222#comment-15658222
 ] 

Hadoop QA commented on HDFS-10872:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 44s{color} | {color:orange} root: The patch generated 2 new + 862 unchanged 
- 0 fixed = 864 total (was 862) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.TestRenameWhileOpen |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10872 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838602/HDFS-10872.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 621f3278cf6d 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5c61ad2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17533/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/1

[jira] [Commented] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658193#comment-15658193
 ] 

Xiaoyu Yao commented on HDFS-10941:
---

Thanks [~vagarychen] for the update. The latest patch LGTM. +1 and I will 
commit it shortly.


> Improve BlockManager#processMisReplicatesAsync log
> --
>
> Key: HDFS-10941
> URL: https://issues.apache.org/jira/browse/HDFS-10941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-10941.001.patch, HDFS-10941.002.patch, 
> HDFS-10941.002.patch, HDFS-10941.003.patch
>
>
> BlockManager#processMisReplicatesAsync is the daemon thread running inside 
> namenode to handle miserplicated blocks. As shown below, it has a trace log 
> for each of the block in the cluster being processed (1 blocks per 
> iteration after sleep 10s). 
> {code}
>   MisReplicationResult res = processMisReplicatedBlock(block);
>   if (LOG.isTraceEnabled()) {
> LOG.trace("block " + block + ": " + res);
>   }
> {code}
> However, it is not very useful as dumping every block in the cluster will 
> overwhelm the namenode log without much useful information assuming the 
> majority of the blocks are not over/under replicated. This ticket is opened 
> to improve the log for easy troubleshooting of block replication related 
> issues by:
>  
> 1) add debug log for blocks that get under/over replicated result during 
> {{processMisReplicatedBlock()}} 
> 2) or change to trace log for only blocks that get non-OK result during 
> {{processMisReplicatedBlock()}} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11122) TestDFSAdmin#testReportCommand fails due to timed out

2016-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658162#comment-15658162
 ] 

Hudson commented on HDFS-11122:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10823 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10823/])
HDFS-11122. TestDFSAdmin#testReportCommand fails due to timed out. (liuml07: 
rev aa6010ccca3045ce9f0bb819fb2cb7ff65e1822b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java


> TestDFSAdmin#testReportCommand fails due to timed out
> -
>
> Key: HDFS-11122
> URL: https://issues.apache.org/jira/browse/HDFS-11122
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, 
> HDFS-11122.003.patch, HDFS-11122.004.patch
>
>
> After HDFS-11083, the test {{TestDFSAdmin}} fails sometimes due to timing out. 
> The stack 
> info (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/):
> {code}
> java.lang.Exception: test timed out after 3 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268)
>   at 
> org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540)
> {code}
> The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the 
> logic of waiting for the corrupt blocks.
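
For instance, the wait could poll the actual condition with a generous upper 
bound instead of relying on a tight test timeout (a sketch assuming 
{{GenericTestUtils.waitFor}}'s supplier/interval/timeout signature; 
{{getNumberOfCorruptReplicas()}} is a hypothetical helper):

{code}
// Supplier here is com.google.common.base.Supplier
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    // hypothetical helper: true once the corrupt replica has been reported
    return getNumberOfCorruptReplicas() > 0;
  }
}, 100 /* check every 100 ms */, 60000 /* give up after 60 s */);
{code}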



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11132) Contract test : Allow AccessControlException while GetFileStatus on subdirectory of existing file.

2016-11-11 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658130#comment-15658130
 ] 

Vishwajeet Dusane commented on HDFS-11132:
--

The contract test {{testMkdirsFailsForSubdirectoryOfExistingFile}} lives under 
{{FileContextMainOperationsBaseTest}} and {{FileSystemContractBaseTest}}. The 
existing behavior of the contract test is to create a file with permission 644 
and, once {{mkdirs}} fails to create a directory under the file, ensure via 
{{getFileStatus}} that the directory does not exist under the file.

On the Azure Data Lake file system, ACLs are enabled by default at the 
file/folder level per the guidelines from HDFS-9552, and retrieving file 
information requires execute permission on the parent. Since the file does not 
have execute permission, {{getFileStatus}} fails with 
{{AccessControlException}}.

The proposal is to catch {{AccessControlException}} and ignore it as expected:


{code:title=FileContextMainOperationsBaseTest.java|borderStyle=solid}
public void testMkdirsFailsForSubdirectoryOfExistingFile() throws Exception {
...
Path testSubDir = path("/test/hadoop/file/subdir");
...

// PROPOSED - IGNORE AS EXPECTED FOR FS WHEN AccessControlException is thrown 
try {
  assertFalse(fs.exists(testSubDir));
} catch (AccessControlException e) {
  //expected
}

Path testDeepSubDir = path("/test/hadoop/file/deep/sub/dir");

...

// PROPOSED - IGNORE AS EXPECTED WHEN AccessControlException is thrown 
try {
  assertFalse(fs.exists(testDeepSubDir));
} catch (AccessControlException e) {
  // expected
}
  }
{code}


[~ste...@apache.org], [~cnauroth] and [~chris.douglas] - what are your thoughts 
on this?

> Contract test : Allow AccessControlException while GetFileStatus on 
> subdirectory of existing file.
> --
>
> Key: HDFS-11132
> URL: https://issues.apache.org/jira/browse/HDFS-11132
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vishwajeet Dusane
>
> The Azure Data Lake file system supports traversal access on files/folders 
> and demands execute permission on the parent for {{getFileStatus}} access. 
> Ref HDFS-9552.
> The {{testMkdirsFailsForSubdirectoryOfExistingFile}} contract test expectation 
> fails with {{AccessControlException}} when {{exists(...)}} checks for a 
> sub-directory present under a file.
> Expected: {{exists(...)}} should handle {{AccessControlException}} and ignore 
> it during the check for a sub-directory present under a file. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11122) TestDFSAdmin#testReportCommand fails due to timed out

2016-11-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11122:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} through {{branch-2.8}} branches. Thanks for your 
contribution [~linyiqun]. Thanks for your review and helpful comments, 
[~tasanuma0829].

> TestDFSAdmin#testReportCommand fails due to timed out
> -
>
> Key: HDFS-11122
> URL: https://issues.apache.org/jira/browse/HDFS-11122
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, 
> HDFS-11122.003.patch, HDFS-11122.004.patch
>
>
> After HDFS-11083, the test {{TestDFSAdmin}} fails sometimes due to timing out. 
> The stack 
> info (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/):
> {code}
> java.lang.Exception: test timed out after 3 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268)
>   at 
> org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540)
> {code}
> The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the 
> logic of waiting for the corrupt blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5692) viewfs shows resolved path in FileNotFoundException

2016-11-11 Thread Joe Pallas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658103#comment-15658103
 ] 

Joe Pallas commented on HDFS-5692:
--

Make a subclass of FileNotFoundException that calls initCause with the wrapped 
FileNotFoundException?  Seems like that would work and still satisfy the 
contract.
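
Something like the following sketch (my names, not committed code) would keep 
the exception an instance of {{FileNotFoundException}}, so existing try/catch 
blocks keep working, while the cause preserves the inner filesystem's 
resolved-path message:

{code}
import java.io.FileNotFoundException;

class ViewFsFileNotFoundException extends FileNotFoundException {
  ViewFsFileNotFoundException(String unresolvedPath,
      FileNotFoundException cause) {
    // report the mount-table (unresolved) path at the top level
    super("File " + unresolvedPath + " does not exist.");
    // FileNotFoundException has no (String, Throwable) constructor,
    // so chain the resolved-path exception via initCause
    initCause(cause);
  }
}

// usage inside ViewFileSystem.listStatus, roughly:
//   catch (FileNotFoundException e) {
//     throw new ViewFsFileNotFoundException("/nn1/a/b", e);
//   }
{code}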


> viewfs shows resolved path in FileNotFoundException
> ---
>
> Key: HDFS-5692
> URL: https://issues.apache.org/jira/browse/HDFS-5692
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.2.0
>Reporter: Keith Turner
>Assignee: Manoj Govindassamy
>
> With the following config, if I call fs.listStatus("/nn1/a/b") when 
> {{/nn1/a/b}} does not exist then ...
> {noformat}
> 
>   
> fs.default.name
> viewfs:///
>   
>   
> fs.viewfs.mounttable.default.link./nn1
> hdfs://host1:9000
>   
>   
> fs.viewfs.mounttable.default.link./nn2
> hdfs://host2:9000
>   
> 
> {noformat}
> I will see an error message like the following.  
> {noformat}
> java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}
> I think it would be useful for ViewFS to wrap the FileNotFoundException from 
> the inner filesystem, giving an error message like the following.  The 
> following error message has the resolved and unresolved paths which is very 
> useful for debugging.
> {noformat}
> java.io.FileNotFoundException: File /nn1/a/b does not exist.
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> Caused by: java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11122) TestDFSAdmin#testReportCommand fails due to timed out

2016-11-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11122:
-
Summary: TestDFSAdmin#testReportCommand fails due to timed out  (was: 
TestDFSAdmin.testReportCommand fails due to timed out)

> TestDFSAdmin#testReportCommand fails due to timed out
> -
>
> Key: HDFS-11122
> URL: https://issues.apache.org/jira/browse/HDFS-11122
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, 
> HDFS-11122.003.patch, HDFS-11122.004.patch
>
>
> After HDFS-11083, the test {{TestDFSAdmin}} fails sometimes due to timing out. 
> The stack 
> info (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/):
> {code}
> java.lang.Exception: test timed out after 3 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268)
>   at 
> org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540)
> {code}
> The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the 
> logic that waits for the corrupt blocks.
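For illustration, a minimal sketch of the kind of wait-loop change the 
description suggests, using {{GenericTestUtils.waitFor}} with a generous poll 
interval and deadline ({{getReportedCorruptBlocks}} is a hypothetical helper, 
not code from the patch):

{code}
import java.util.concurrent.TimeoutException;
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Poll the corrupt-block count every second for up to 60 seconds rather
// than relying on a single tight timeout.
private void waitForCorruptBlocks(final int expected)
    throws TimeoutException, InterruptedException {
  GenericTestUtils.waitFor(new Supplier<Boolean>() {
    @Override
    public Boolean get() {
      return getReportedCorruptBlocks() >= expected;  // hypothetical helper
    }
  }, 1000, 60000);
}
{code}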






[jira] [Commented] (HDFS-11122) TestDFSAdmin.testReportCommand fails due to timed out

2016-11-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658089#comment-15658089
 ] 

Mingliang Liu commented on HDFS-11122:
--

Failing tests are not related. Will commit in one second.

> TestDFSAdmin.testReportCommand fails due to timed out
> -
>
> Key: HDFS-11122
> URL: https://issues.apache.org/jira/browse/HDFS-11122
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11122.001.patch, HDFS-11122.002.patch, 
> HDFS-11122.003.patch, HDFS-11122.004.patch
>
>
> After HDFS-11083, the test {{TestDFSAdmin}} sometimes fails due to a timeout. 
> The stack 
> trace (https://builds.apache.org/job/PreCommit-HDFS-Build/17484/testReport/):
> {code}
> java.lang.Exception: test timed out after 30000 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:268)
>   at 
> org.apache.hadoop.hdfs.tools.TestDFSAdmin.testReportCommand(TestDFSAdmin.java:540)
> {code}
> The timeout happens in {{GenericTestUtils.waitFor}}. We can improve the 
> logic that waits for the corrupt blocks.






[jira] [Commented] (HDFS-11130) Block Storage : add storage client to server protocol

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658086#comment-15658086
 ] 

Hadoop QA commented on HDFS-11130:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11130 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838599/HDFS-11130-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 6867ea9be880 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / e55bdef |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17532/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17532/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17532/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Block Storage : add storage client to server protocol
> -
>
> Key: HDFS-11130
> URL: https://issues.apache.org/jira/browse/HDFS-11130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11130-HDFS-7240.001.patch
>
>
> This JIRA adds the protocol between block storage client side and server side.

[jira] [Created] (HDFS-11132) Contract test : Allow AccessControlException while GetFileStatus on subdirectory of existing file.

2016-11-11 Thread Vishwajeet Dusane (JIRA)
Vishwajeet Dusane created HDFS-11132:


 Summary: Contract test : Allow AccessControlException while 
GetFileStatus on subdirectory of existing file.
 Key: HDFS-11132
 URL: https://issues.apache.org/jira/browse/HDFS-11132
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vishwajeet Dusane


The Azure Data Lake file system supports traversal access on files/folders and 
demands execute permission on the parent for {{getFileStatus}} access. Ref 
HDFS-9552.

The {{testMkdirsFailsForSubdirectoryOfExistingFile}} contract test expectation 
fails with {{AccessControlException}} when {{exists(...)}} checks for a 
sub-directory present under a file.

Expected: {{exists(...)}} should handle {{AccessControlException}} and ignore 
it during the check for a sub-directory present under a file. 
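For illustration, a minimal sketch of the expected behaviour, assuming an 
{{exists(...)}}-style check that tolerates the store denying traversal (the 
method name is hypothetical, not the actual FileSystem code):

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

// Treat both "not found" and "traversal denied under an existing file"
// as the path not existing, so the contract test can proceed.
static boolean existsLenient(FileSystem fs, Path p) throws IOException {
  try {
    fs.getFileStatus(p);
    return true;
  } catch (FileNotFoundException e) {
    return false;
  } catch (AccessControlException e) {
    // ADLS denies getFileStatus on a sub-path of an existing file.
    return false;
  }
}
{code}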







[jira] [Updated] (HDFS-11060) make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable

2016-11-11 Thread Lantao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated HDFS-11060:
--
Status: Patch Available  (was: Open)

> make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable
> -
>
> Key: HDFS-11060
> URL: https://issues.apache.org/jira/browse/HDFS-11060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>Priority: Minor
>
> Currently, the easiest way to determine which blocks are missing is the NN 
> web UI or JMX. Unfortunately, because 
> DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED=100 is hard-coded in FSNamesystem, 
> only 100 missing blocks can be returned by the UI and JMX. Even the result of 
> the URL "https://nn:50070/fsck?listcorruptfileblocks=1&path=%2F" is limited 
> by this hard-coded value.
> I know fsck can return more than 100 results, but for security reasons (with 
> Kerberos) it is very hard to integrate into customer programs and scripts.
> So I think we should add a configurable variable 
> "maxCorruptFileBlocksReturned" to fix the above case.
> If the community also thinks it's worth doing, I will patch this. If not, 
> please feel free to tell me the reason. 
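For illustration, a minimal sketch of the proposed change; the configuration 
key below is a hypothetical name, not an existing Hadoop property:

{code}
// In FSNamesystem: read the limit from configuration instead of the
// hard-coded constant.
private final int maxCorruptFileBlocksReturned = conf.getInt(
    "dfs.namenode.max-corrupt-file-blocks-returned",  // hypothetical key
    DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED);  // keeps today's default of 100
{code}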






[jira] [Commented] (HDFS-11060) make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable

2016-11-11 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658057#comment-15658057
 ] 

Lantao Jin commented on HDFS-11060:
---

[~hadoopqa] Can this ticket be resolved?

> make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable
> -
>
> Key: HDFS-11060
> URL: https://issues.apache.org/jira/browse/HDFS-11060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>Priority: Minor
>
> Currently, the easiest way to determine which blocks are missing is the NN 
> web UI or JMX. Unfortunately, because 
> DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED=100 is hard-coded in FSNamesystem, 
> only 100 missing blocks can be returned by the UI and JMX. Even the result of 
> the URL "https://nn:50070/fsck?listcorruptfileblocks=1&path=%2F" is limited 
> by this hard-coded value.
> I know fsck can return more than 100 results, but for security reasons (with 
> Kerberos) it is very hard to integrate into customer programs and scripts.
> So I think we should add a configurable variable 
> "maxCorruptFileBlocksReturned" to fix the above case.
> If the community also thinks it's worth doing, I will patch this. If not, 
> please feel free to tell me the reason. 






[jira] [Commented] (HDFS-11111) Delete something in .Trash using "rm" should be forbidden without safety option

2016-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658045#comment-15658045
 ] 

ASF GitHub Bot commented on HDFS-11111:
---

GitHub user LantaoJin opened a pull request:

https://github.com/apache/hadoop/pull/159

HDFS-11111. Delete items in .Trash using rm should be forbidden witho…

…ut safety option

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/LantaoJin/hadoop HDFS-11111

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/159.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #159


commit b65cc0708c7433e79ec20e06f0b3f783e0871585
Author: LantaoJin 
Date:   2016-11-11T19:40:06Z

HDFS-11111. Delete items in .Trash using rm should be forbidden without 
safety option




> Delete something in  .Trash using "rm" should be forbidden without safety 
> option 
> -
>
> Key: HDFS-11111
> URL: https://issues.apache.org/jira/browse/HDFS-11111
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As we discussed in HDFS-11102, double confirmation does not seem to be a 
> graceful solution for users. Accidentally deleting files in .Trash is still 
> an issue though. The user behaviour I'm worried about is {{rm}}-ing 
> something in .Trash (without explicitly understanding that those files will 
> not be recoverable). This is in contrast to {{rm}}-ing something with the 
> "-skipTrash" option (that's a very purposeful action).
> So it is not the same case as HADOOP-12358. The solution is to throw an 
> exception and remind the user to add a "-trash" option to delete directories 
> in the trash safely:
> {code}
> Can not delete something in trash directly! Please add "-trash" or "-T" to 
> the "rm" command to do that.
> {code}
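For illustration, a minimal sketch of such a guard in the shell delete path; 
the {{deleteFromTrashAllowed}} flag (set by the proposed "-trash"/"-T" option) 
and the helper shape are hypothetical, not the patch:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Refuse to delete paths under the trash root unless the safety flag is set.
static void checkTrashDelete(FileSystem fs, Path path,
    boolean deleteFromTrashAllowed) throws IOException {
  Path trashRoot = fs.getTrashRoot(path);
  if (!deleteFromTrashAllowed
      && path.toUri().getPath().startsWith(trashRoot.toUri().getPath())) {
    throw new IOException("Can not delete " + path + " in trash directly!"
        + " Please add \"-trash\" or \"-T\" to the \"rm\" command to do that.");
  }
}
{code}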






[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup

2016-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658015#comment-15658015
 ] 

ASF GitHub Bot commented on HDFS-11119:
---

Github user anuengineer commented on the issue:

https://github.com/apache/hadoop/pull/155
  
+1, LGTM


> Support for parallel checking of StorageLocations on DataNode startup
> -
>
> Key: HDFS-11119
> URL: https://issues.apache.org/jira/browse/HDFS-11119
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
> parallelize checking {{StorageLocation}}s on DataNode startup.






[jira] [Commented] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657997#comment-15657997
 ] 

Hadoop QA commented on HDFS-10941:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-10941 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838588/HDFS-10941.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0bb9a235a3bc 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 503e73e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17529/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17529/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17529/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve BlockManager#processMisReplicatesAsync log
> --
>
>

[jira] [Commented] (HDFS-11131) TestThrottledAsyncChecker#testContextIsPassed is flaky

2016-11-11 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657985#comment-15657985
 ] 

Arpit Agarwal commented on HDFS-11131:
--

Thanks to [~kihwal] for pointing this out.

> TestThrottledAsyncChecker#testContextIsPassed is flaky
> --
>
> Key: HDFS-11131
> URL: https://issues.apache.org/jira/browse/HDFS-11131
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> This test failed in a few precommit runs. e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/17481/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testContextIsPassed/






[jira] [Created] (HDFS-11131) TestThrottledAsyncChecker#testContextIsPassed is flaky

2016-11-11 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-11131:


 Summary: TestThrottledAsyncChecker#testContextIsPassed is flaky
 Key: HDFS-11131
 URL: https://issues.apache.org/jira/browse/HDFS-11131
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


This test failed in a few precommit runs. e.g.
https://builds.apache.org/job/PreCommit-HDFS-Build/17481/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testContextIsPassed/






[jira] [Updated] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-10872:
---
Attachment: HDFS-10872.009.patch

Looks like YARN-5868 went through faster than expected, reuploading v007 patch 
as v009 to trigger Jenkins. 

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch, HDFS-10872.009.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 
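To make this concrete, a rough sketch of recording a lock hold time with the 
metrics2 API; the registry and metric names are illustrative, not the ones 
chosen in the patches:

{code}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;
import org.apache.hadoop.util.Time;

MetricsRegistry registry = new MetricsRegistry("FSNamesystemLock");
MutableRate lockHeldGetBlockLocations = registry.newRate(
    "LockHeldGetBlockLocations", "lock hold time (ms)", false);

long start = Time.monotonicNow();
try {
  // ... work performed while holding the FSNamesystem lock ...
} finally {
  // MutableRate tracks both the number of samples and the average time.
  lockHeldGetBlockLocations.add(Time.monotonicNow() - start);
}
{code}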






[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-11 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657896#comment-15657896
 ] 

Arpit Agarwal commented on HDFS-9668:
-

Hi [~jingcheng...@intel.com], 

bq. Right, this doesn't change the DN state, but what if other places change 
blocks? Should we protect the read code from that?
The answer is easy without the block-lock. Another thread cannot modify state 
without taking the write lock, so the read lock will be sufficient. We've 
seen the directory scanner cause multi-second pauses in real clusters, so that 
seems like an important one to replace.

bq. The volume map has a mutex to synchronize the operations on the map, I 
guess an outside read lock and a block-related lock are enough?
volumeMap is modified while holding the dataset global lock today. It will take 
rigorous work to reason that we won't introduce inconsistencies in the DN state 
by changing this behavior.

I've suggested in earlier comments that wholesale changes to locking are risky 
and we should first aim to get the exclusive lock replaced with a read-write 
lock. It's fine if we have mostly write locks to start with. I know you have 
spent a lot of effort on this already, hence I'll again offer to help split out 
the block-locking from the read-write lock changes.
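For illustration, a minimal sketch of that first step, with a fair 
{{ReentrantReadWriteLock}} standing in for the dataset-wide lock (class and 
method names are illustrative, not FsDatasetImpl code):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class DatasetLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

  long readOnlyQuery() {
    lock.readLock().lock();      // concurrent readers are allowed
    try {
      return 0L;                 // e.g. look up replica metadata
    } finally {
      lock.readLock().unlock();
    }
  }

  void mutateState() {
    lock.writeLock().lock();     // exclusive, like today's monitor lock
    try {
      // e.g. update volumeMap
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}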

> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, 
> HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, 
> HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.

[jira] [Updated] (HDFS-11130) Block Storage : add storage client to server protocol

2016-11-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11130:
--
Attachment: HDFS-11130-HDFS-7240.001.patch

> Block Storage : add storage client to server protocol
> -
>
> Key: HDFS-11130
> URL: https://issues.apache.org/jira/browse/HDFS-11130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11130-HDFS-7240.001.patch
>
>
> This JIRA adds the protocol between the block storage client side and server 
> side. For now, the only operation is mount volume. More specifically, when a 
> user mounts a volume on the client side, the client checks with the server to 
> verify that it is a valid volume to mount. On a valid mount request, the 
> server also piggybacks meta information about the volume back to the client. 
> Note that the actual reads/writes on the volume never go through the server; 
> as long as the volume is mounted on the client, it is entirely the client's 
> job to communicate with the underlying storage layer (SCM in this case).






[jira] [Created] (HDFS-11130) Block Storage : add storage client to server protocol

2016-11-11 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11130:
-

 Summary: Block Storage : add storage client to server protocol
 Key: HDFS-11130
 URL: https://issues.apache.org/jira/browse/HDFS-11130
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


This JIRA adds the protocol between the block storage client side and server 
side. For now, the only operation is mount volume. More specifically, when a 
user mounts a volume on the client side, the client checks with the server to 
verify that it is a valid volume to mount. On a valid mount request, the server 
also piggybacks meta information about the volume back to the client. 

Note that the actual reads/writes on the volume never go through the server; as 
long as the volume is mounted on the client, it is entirely the client's job to 
communicate with the underlying storage layer (SCM in this case).
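For illustration, a hedged sketch of the protocol surface this describes; the 
interface and response fields are illustrative, not the names in the patch:

{code}
import java.io.IOException;

// Client-to-server handshake: validate the mount and hand back volume
// metadata so later reads/writes can go straight to the storage layer (SCM).
interface CBlockClientServerProtocolSketch {
  MountVolumeResponse mountVolume(String userName, String volumeName)
      throws IOException;
}

class MountVolumeResponse {
  boolean isValid;        // server-side validation result
  long volumeSizeBytes;   // piggybacked volume metadata (illustrative fields)
  int blockSizeBytes;
}
{code}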






[jira] [Updated] (HDFS-11130) Block Storage : add storage client to server protocol

2016-11-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11130:
--
Status: Patch Available  (was: Open)

> Block Storage : add storage client to server protocol
> -
>
> Key: HDFS-11130
> URL: https://issues.apache.org/jira/browse/HDFS-11130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11130-HDFS-7240.001.patch
>
>
> This JIRA adds the protocol between the block storage client side and server 
> side. For now, the only operation is mount volume. More specifically, when a 
> user mounts a volume on the client side, the client checks with the server to 
> verify that it is a valid volume to mount. On a valid mount request, the 
> server also piggybacks meta information about the volume back to the client. 
> Note that the actual reads/writes on the volume never go through the server; 
> as long as the volume is mounted on the client, it is entirely the client's 
> job to communicate with the underlying storage layer (SCM in this case).






[jira] [Updated] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-10872:
---
Attachment: HDFS-10872.008.patch

Attaching v008 patch which includes the changes within YARN-5868 just to be 
able to get a clean Jenkins build. Patch is otherwise identical to v007.

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 






[jira] [Comment Edited] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-11-11 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657707#comment-15657707
 ] 

Erik Krogen edited comment on HDFS-10872 at 11/11/16 6:18 PM:
--

Attaching v008 patch which includes the changes within YARN-5868 (fixing an 
issue with {{npm}} during the build process, entirely unrelated to this ticket) 
just to be able to get a clean Jenkins build. Patch is otherwise identical to 
v007.


was (Author: xkrogen):
Attaching v008 patch which includes the changes within YARN-5868 just to be 
able to get a clean Jenkins build. Patch is otherwise identical to v007.

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch, HDFS-10872.005.patch, HDFS-10872.006.patch, 
> HDFS-10872.007.patch, HDFS-10872.008.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 






[jira] [Comment Edited] (HDFS-11128) CreateEditsLog throws NullPointerException

2016-11-11 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657684#comment-15657684
 ] 

Hanisha Koneru edited comment on HDFS-11128 at 11/11/16 6:07 PM:
-

Thank you [~arpitagarwal] and [~brahmareddy] for reviewing and committing the 
patch.


was (Author: hanishakoneru):
Thank you [~arpitagarwal] and [~brahma] for reviewing and committing the patch.

> CreateEditsLog throws NullPointerException
> --
>
> Key: HDFS-11128
> URL: https://issues.apache.org/jira/browse/HDFS-11128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11128.000.patch
>
>
> When trying to create edit logs through CreateEditsLog, the following 
> exception is encountered.
> {quote}
> Exception in thread "main" java.lang.NullPointerException
>   at java.io.File.<init>(File.java:415)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200)
>   at 
> org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {quote}
> This happens because Mockito is unable to access the package-protected method 
> _NNStorage#getStorageDirectory_.
> We need to change the access of this method to _public_.
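As a rough illustration with hypothetical stand-in names: Mockito's spy 
subclasses the target, and a subclass generated for a test in another package 
cannot override a package-private method, so the real (unstubbed) code runs:

{code}
public class NNStorageLike {
  // before (cannot be intercepted from outside the package):
  //   StorageDirectory getStorageDirectory(java.net.URI uri) { ... }

  // after (public, so a spy created from test code can stub it):
  public StorageDirectory getStorageDirectory(java.net.URI uri) {
    return new StorageDirectory();
  }

  public static class StorageDirectory { }  // stand-in for the real type
}
{code}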






[jira] [Commented] (HDFS-11128) CreateEditsLog throws NullPointerException

2016-11-11 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657684#comment-15657684
 ] 

Hanisha Koneru commented on HDFS-11128:
---

Thank you [~arpitagarwal] and [~brahma] for reviewing and committing the patch.

> CreateEditsLog throws NullPointerException
> --
>
> Key: HDFS-11128
> URL: https://issues.apache.org/jira/browse/HDFS-11128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11128.000.patch
>
>
> When trying to create edit logs through CreateEditsLog, the following 
> exception is encountered.
> {quote}
> Exception in thread "main" java.lang.NullPointerException
>   at java.io.File.<init>(File.java:415)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200)
>   at 
> org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {quote}
> This happens because Mockito is unable to access the package-protected method 
> _NNStorage#getStorageDirectory_.
> We need to change the access of this method to _public_.






[jira] [Updated] (HDFS-11127) Block Storage : add block storage service protocol

2016-11-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11127:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~vagarychen] Thank you for the contribution. I have committed this patch to 
the feature branch.

> Block Storage : add block storage service protocol
> --
>
> Key: HDFS-11127
> URL: https://issues.apache.org/jira/browse/HDFS-11127
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11127-HDFS-7240.001.patch, 
> HDFS-11127-HDFS-7240.002.patch
>
>
> This JIRA adds block service protocol. This protocol is expose to client for 
> volume operations including create, delete, info and list. Note that this 
> protocol has nothing to do with actual data read/write on a particular volume.
> (Also note that the term "cblock" is the current term used to refer to the 
> block storage system.)






[jira] [Updated] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-11 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-10941:
--
Attachment: HDFS-10941.003.patch

Thanks [~xyao] for the comments! Uploaded v003 patch.

> Improve BlockManager#processMisReplicatesAsync log
> --
>
> Key: HDFS-10941
> URL: https://issues.apache.org/jira/browse/HDFS-10941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-10941.001.patch, HDFS-10941.002.patch, 
> HDFS-10941.002.patch, HDFS-10941.003.patch
>
>
> BlockManager#processMisReplicatesAsync is the daemon thread running inside 
> the namenode to handle misreplicated blocks. As shown below, it has a trace 
> log for each of the blocks in the cluster being processed (10000 blocks per 
> iteration after a 10s sleep). 
> {code}
>   MisReplicationResult res = processMisReplicatedBlock(block);
>   if (LOG.isTraceEnabled()) {
> LOG.trace("block " + block + ": " + res);
>   }
> {code}
> However, this is not very useful, as dumping every block in the cluster will 
> overwhelm the namenode log without much useful information, assuming the 
> majority of the blocks are not over/under-replicated. This ticket is opened 
> to improve the log for easy troubleshooting of block replication related 
> issues by:
>  
> 1) adding a debug log for blocks that get an under/over-replicated result 
> during {{processMisReplicatedBlock()}}, or 
> 2) changing to a trace log only for blocks that get a non-OK result during 
> {{processMisReplicatedBlock()}} 






[jira] [Commented] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-11-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657612#comment-15657612
 ] 

Xiaoyu Yao commented on HDFS-10941:
---

Thanks [~vagarychen] for the update. The v3 patch has a potential perf issue 
with the wrapper approach. The toString() and string-concat cost will always be 
paid, even with the {{if (LOG.isTraceEnabled())}} guard inside the wrapper. 

I would suggest we leverage slf4j parameterized logging, as below, to avoid 
that cost without the wrapper. 
More detail about slf4j logging performance can be found here:  
http://www.slf4j.org/faq.html#logging_performance.

{code}
case UNDER_REPLICATED:
  LOG.trace("under replicated block: {} result: {}", block, res);
  nrUnderReplicated++;
{code}

> Improve BlockManager#processMisReplicatesAsync log
> --
>
> Key: HDFS-10941
> URL: https://issues.apache.org/jira/browse/HDFS-10941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-10941.001.patch, HDFS-10941.002.patch, 
> HDFS-10941.002.patch
>
>
> BlockManager#processMisReplicatesAsync is the daemon thread running inside 
> the namenode to handle misreplicated blocks. As shown below, it has a trace 
> log for each of the blocks in the cluster being processed (10000 blocks per 
> iteration after a 10s sleep). 
> {code}
>   MisReplicationResult res = processMisReplicatedBlock(block);
>   if (LOG.isTraceEnabled()) {
> LOG.trace("block " + block + ": " + res);
>   }
> {code}
> However, this is not very useful, as dumping every block in the cluster will 
> overwhelm the namenode log without much useful information, assuming the 
> majority of the blocks are not over/under-replicated. This ticket is opened 
> to improve the log for easy troubleshooting of block replication related 
> issues by:
>  
> 1) adding a debug log for blocks that get an under/over-replicated result 
> during {{processMisReplicatedBlock()}}, or 
> 2) changing to a trace log only for blocks that get a non-OK result during 
> {{processMisReplicatedBlock()}} 






[jira] [Commented] (HDFS-11129) TestAppendSnapshotTruncate fails with bind exception

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657488#comment-15657488
 ] 

Hadoop QA commented on HDFS-11129:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.tools.TestDFSAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11129 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838563/HDFS-11129.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 49e58044f17e 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 503e73e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17528/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17528/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17528/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestAppendSnapshotTruncate fails with bind exception
> 
>
> Key: HDFS-11129
> URL: https://issues.apache.org/jira/browse/HDFS-11129
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test

[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-11 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657402#comment-15657402
 ] 

Kihwal Lee commented on HDFS-11056:
---

+1 for the 2.7 patch. It looks to be a correct port. Thanks [~jojochuang].

> Concurrent append and read operations lead to checksum error
> 
>
> Key: HDFS-11056
> URL: https://issues.apache.org/jira/browse/HDFS-11056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, httpfs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, 
> HDFS-11056.branch-2.7.patch, HDFS-11056.branch-2.patch, 
> HDFS-11056.reproduce.patch
>
>
> If two clients run concurrently, one continuously opening, appending to, and 
> closing a file while the other continuously opens, reads, and closes the same 
> file, the reader eventually gets a checksum error in the data read.
> On my local Mac, it takes a few minutes to produce the error. This happens to 
> httpfs clients, but there's no reason not to believe it happens to any append 
> clients.
> I have a unit test that demonstrates the checksum error. Will attach it later.
> Relevant log:
> {quote}
> 2016-10-25 15:34:45,153 INFO  audit - allowed=true ugi=weichiu 
> (auth:SIMPLE)   ip=/127.0.0.1   cmd=open   src=/tmp/bar.txt
> dst=null   perm=null   proto=rpc
> 2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
> /127.0.0.1:51130 dest: /127.0.0.1:50131
> 2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
> blk_1073741825_1182, FINALIZED
>   getNumBytes() = 182
>   getBytesOnDisk()  = 182
>   getVisibleLength()= 182
>   getVolume()   = 
> /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
>   getBlockURI() = 
> file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
> 2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
> 2016-10-25 15:34:45,167 WARN  DataNode - 
> DatanodeRegistration(127.0.0.1:50131, 
> datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
> infoSecurePort=0, ipcPort=50134, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
> exception while serving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
> newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
> 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
> processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success
> 2016-10-25 15:34:45,170 WARN  DFSClient - Found Checksum error

[jira] [Updated] (HDFS-11129) TestAppendSnapshotTruncate fails with bind exception

2016-11-11 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11129:

Status: Patch Available  (was: Open)

> TestAppendSnapshotTruncate fails with bind exception
> 
>
> Key: HDFS-11129
> URL: https://issues.apache.org/jira/browse/HDFS-11129
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11129.000.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.startUp(TestAppendSnapshotTruncate.java:95)
>  Standard Output  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11129) TestAppendSnapshotTruncate fails with bind exception

2016-11-11 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11129:

Attachment: HDFS-11129.000.patch

I don't think it's necessary to specify the NameNode RPC port.

Keeping a specific port across a restart is needed only when we expect something 
(clients, DataNodes) to be looking for the NameNode at that exact port.

There is a restart in {{TestAppendSnapshotTruncate}}, but {{MiniDFSCluster}} 
reuses whatever port it originally bound, so the default (ephemeral) port should 
work correctly here.

Uploading the patch for the same.
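
For illustration, here is a minimal sketch of the intended change (the builder 
values are illustrative, not the patch itself): with no fixed port, 
{{MiniDFSCluster}} binds an ephemeral port, so concurrent test runs on the same 
host cannot collide.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// A minimal sketch, not the actual patch: let MiniDFSCluster bind an ephemeral
// NameNode RPC port instead of a hard-coded one.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .nameNodePort(0)   // 0 = any free port; omitting the call has the same effect
    .numDataNodes(3)
    .build();
cluster.waitActive();
// MiniDFSCluster records the port it was actually assigned, so a later
// restartNameNode() comes back on the same port without hard-coding it.
{code}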

> TestAppendSnapshotTruncate fails with bind exception
> 
>
> Key: HDFS-11129
> URL: https://issues.apache.org/jira/browse/HDFS-11129
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11129.000.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:9820] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
>   at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.startUp(TestAppendSnapshotTruncate.java:95)
>  Standard Output
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11128) CreateEditsLog throws NullPointerException

2016-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657267#comment-15657267
 ] 

Hudson commented on HDFS-11128:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10819 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10819/])
HDFS-11128. CreateEditsLog throws NullPointerException. Contributed by Hanisha 
Koneru. (brahma: rev 1ae57f0f75695178dc135b0c259d066f7da31c9d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java


> CreateEditsLog throws NullPointerException
> --
>
> Key: HDFS-11128
> URL: https://issues.apache.org/jira/browse/HDFS-11128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11128.000.patch
>
>
> When trying to create edit logs through CreateEditsLog, the following 
> exception is encountered.
> {quote}
> Exception in thread "main" java.lang.NullPointerException
>   at java.io.File.<init>(File.java:415)
>   at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343)
>   at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200)
>   at org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {quote}
> This happens because Mockito is unable to access the package-private method 
> _NNStorage#getStorageDirectory_.
> We need to change the access modifier of this method to _public_.
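
To make the failure mode concrete, here is a simplified sketch of the kind of 
stubbing {{FSImageTestUtil#createStandaloneEditLog}} relies on (the exact calls 
are assumptions, not the test-utility code verbatim). If 
{{getStorageDirectory}} is package-private, the stub may not be applied for 
callers outside the package, so the real method runs against the mock's null 
state and throws the NPE from {{new File(...)}}; making the method {{public}} 
lets the stub take effect.

{code:java}
import java.net.URI;
import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
import org.apache.hadoop.hdfs.server.namenode.NNStorage;
import org.mockito.Mockito;

// Simplified sketch: hand back a prepared StorageDirectory from the mocked
// NNStorage. With a package-private getStorageDirectory, Mockito cannot always
// intercept the call, the real method executes on the mock, and new File(null)
// throws the NullPointerException seen in the stack trace above.
NNStorage storage = Mockito.mock(NNStorage.class);
StorageDirectory sd = Mockito.mock(StorageDirectory.class);
Mockito.doReturn(sd).when(storage).getStorageDirectory(Mockito.any(URI.class));
{code}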



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11128) CreateEditsLog throws NullPointerException

2016-11-11 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11128:

Fix Version/s: 3.0.0-alpha2

> CreateEditsLog throws NullPointerException
> --
>
> Key: HDFS-11128
> URL: https://issues.apache.org/jira/browse/HDFS-11128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11128.000.patch
>
>
> When trying to create edit logs through CreateEditsLog, the following 
> exception is encountered.
> {quote}
> Exception in thread "main" java.lang.NullPointerException
>   at java.io.File.<init>(File.java:415)
>   at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343)
>   at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200)
>   at org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {quote}
> This happens because Mockito is unable to access the package-private method 
> _NNStorage#getStorageDirectory_.
> We need to change the access modifier of this method to _public_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11128) CreateEditsLog throws NullPointerException

2016-11-11 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11128:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8.

Thanks [~hanishakoneru] for the contribution, and thanks to [~arpitagarwal] for 
the review.

> CreateEditsLog throws NullPointerException
> --
>
> Key: HDFS-11128
> URL: https://issues.apache.org/jira/browse/HDFS-11128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.8.0
>
> Attachments: HDFS-11128.000.patch
>
>
> When trying to create edit logs through CreateEditsLog, the following 
> exception is encountered.
> {quote}
> Exception in thread "main" java.lang.NullPointerException
>   at java.io.File.<init>(File.java:415)
>   at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343)
>   at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200)
>   at org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {quote}
> This happens because Mockito is unable to access the package-private method 
> _NNStorage#getStorageDirectory_.
> We need to change the access modifier of this method to _public_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11128) CreateEditsLog throws NullPointerException

2016-11-11 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657201#comment-15657201
 ] 

Brahma Reddy Battula commented on HDFS-11128:
-

+1. Nice catch, [~hanishakoneru].

The test failures are unrelated:

HDFS-11129 tracks {{TestAppendSnapshotTruncate}} and
HDFS-11122 tracks {{TestDFSAdmin}}.

Will commit shortly.

> CreateEditsLog throws NullPointerException
> --
>
> Key: HDFS-11128
> URL: https://issues.apache.org/jira/browse/HDFS-11128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11128.000.patch
>
>
> When trying to create edit logs through CreateEditsLog, the following 
> exception is encountered.
> {quote}
> Exception in thread "main" java.lang.NullPointerException
>   at java.io.File.<init>(File.java:415)
>   at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:343)
>   at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createStandaloneEditLog(FSImageTestUtil.java:200)
>   at org.apache.hadoop.hdfs.server.namenode.CreateEditsLog.main(CreateEditsLog.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {quote}
> This happens because Mockito is unable to access the package-private method 
> _NNStorage#getStorageDirectory_.
> We need to change the access modifier of this method to _public_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11129) TestAppendSnapshotTruncate fails with bind exception

2016-11-11 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-11129:
---

 Summary: TestAppendSnapshotTruncate fails with bind exception
 Key: HDFS-11129
 URL: https://issues.apache.org/jira/browse/HDFS-11129
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


{noformat}
java.net.BindException: Problem binding to [localhost:9820] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:535)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.startUp(TestAppendSnapshotTruncate.java:95)
 Standard Output
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6973) DFSClient does not closing a closed socket resulting in thousand of CLOSE_WAIT sockets

2016-11-11 Thread gfeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657103#comment-15657103
 ] 

gfeng commented on HDFS-6973:
-

Disabling IPv6 does not fix the problem; the number of open IPv4 sockets keeps 
increasing.

> DFSClient does not closing a closed socket resulting in thousand of 
> CLOSE_WAIT sockets
> --
>
> Key: HDFS-6973
> URL: https://issues.apache.org/jira/browse/HDFS-6973
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.4.0
> Environment: RHEL 6.3 -HDP 2.1 -6 RegionServers/Datanode -18T per 
> node -3108Regions
>Reporter: steven xu
>
> HBase as an HDFS client does not close a dead connection with the DataNode.
> This results in over 30K+ CLOSE_WAIT sockets, and at some point HBase can no 
> longer connect to the DataNode because there are too many mapped sockets from 
> one host to another on the same port 50010. 
> Even after I restart all RegionServers, the count of CLOSE_WAIT keeps growing.
> $ netstat -an|grep CLOSE_WAIT|wc -l
> 2545
> netstat -nap|grep CLOSE_WAIT|grep 6569|wc -l
> 2545
> ps -ef|grep 6569
> hbase 6569 6556 21 Aug25 ? 09:52:33 /opt/jdk1.6.0_25/bin/java 
> -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m 
> -XX:+UseConcMarkSweepGC
> I have also reviewed these issues:
> [HDFS-5697]
> [HDFS-5671]
> [HDFS-1836]
> [HBASE-9393]
> I found that the patches for these issues have been included in the HBase 
> 0.98 / Hadoop 2.4.0 source code.
> But I do not understand why HBase 0.98/Hadoop 2.4.0 still has this issue. 
> Please check. Thanks a lot.
> The following code has been added into 
> BlockReaderFactory.getRemoteBlockReaderFromTcp(). Perhaps another bug is 
> causing my problem:
> {code:title=BlockReaderFactory.java|borderStyle=solid}
> // Some comments here
>   private BlockReader getRemoteBlockReaderFromTcp() throws IOException {
> if (LOG.isTraceEnabled()) {
>   LOG.trace(this + ": trying to create a remote block reader from a " +
>   "TCP socket");
> }
> BlockReader blockReader = null;
> while (true) {
>   BlockReaderPeer curPeer = null;
>   Peer peer = null;
>   try {
> curPeer = nextTcpPeer();
> if (curPeer == null) break;
> if (curPeer.fromCache) remainingCacheTries--;
> peer = curPeer.peer;
> blockReader = getRemoteBlockReader(peer);
> return blockReader;
>   } catch (IOException ioe) {
> if (isSecurityException(ioe)) {
>   if (LOG.isTraceEnabled()) {
> LOG.trace(this + ": got security exception while constructing " +
> "a remote block reader from " + peer, ioe);
>   }
>   throw ioe;
> }
> if ((curPeer != null) && curPeer.fromCache) {
>   // Handle an I/O error we got when using a cached peer.  These are
>   // considered less serious, because the underlying socket may be
>   // stale.
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Closed potentially stale remote peer " + peer, ioe);
>   }
> } else {
>   // Handle an I/O error we got when using a newly created peer.
>   LOG.warn("I/O error constructing remote block reader.", ioe);
>   throw ioe;
> }
>   } finally {
> if (blockReader == null) {
>   IOUtils.cleanup(LOG, peer);
> }
>   }
> }
> return null;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11117) Refactor striped file unit test case structure

2016-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656939#comment-15656939
 ] 

Hadoop QA commented on HDFS-11117:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 34 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 56 new + 680 unchanged - 65 fixed = 736 total (was 745) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11117 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838534/HDFS-11117-v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 36768e2ded01 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fd2f22a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17527/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17527/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17527/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17527/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor striped file unit test case structure
> ---

[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time

2016-11-11 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656734#comment-15656734
 ] 

SammiChen commented on HDFS-10996:
--

Double-checked the two failed unit tests. 
1. hadoop.fs.viewfs.TestViewFsHdfs: it is not touched by this patch, and it 
passed in my local environment, so I think it's unrelated. 
2. hadoop.hdfs.TestDFSClientRetries: it failed because of a timeout, and it 
passed in my local environment, so I think it's unrelated too. 

> Ability to specify per-file EC policy at create time
> 
>
> Key: HDFS-10996
> URL: https://issues.apache.org/jira/browse/HDFS-10996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
> Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch, 
> HDFS-10996-v3.patch
>
>
> Based on discussion in HDFS-10971, it would be useful to specify the EC 
> policy when a file is created. This helps in situations where app 
> requirements do not map nicely to the current directory-level policies.
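
Purely as an illustration of what such an API could look like (the builder call 
and policy name below are assumptions, not the committed interface of this 
patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Hypothetical sketch: pick an EC policy for a single file at create time,
// overriding whatever policy (if any) is set on the parent directory.
DistributedFileSystem dfs =
    (DistributedFileSystem) FileSystem.get(new Configuration());
FSDataOutputStream out = dfs.createFile(new Path("/logs/archive.dat")) // assumed path
    .ecPolicyName("XOR-2-1-64k")  // assumed per-file policy name
    .build();
out.writeBytes("hello");
out.close();
{code}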



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11117) Refactor striped file unit test case structure

2016-11-11 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11117:
-
Attachment: HDFS-11117-v2.patch

1. Fixed the related test case failures.
2. Fixed style issues.

> Refactor striped file unit test case structure
> --
>
> Key: HDFS-11117
> URL: https://issues.apache.org/jira/browse/HDFS-11117
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-11117-v1.patch, HDFS-11117-v2.patch
>
>
> This task is going to refactor the current striped file test case structure, 
> especially the {{StripedFileTestUtil}} file, which is used in many striped 
> file test cases. All current striped file test cases support only one erasure 
> coding policy: the default RS-DEFAULT-6-3-64k policy. The goal of the 
> refactoring is to make the structure more convenient for supporting other 
> erasure coding policies, such as the XOR policy. 
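
As a loose sketch of the direction (class and field names are assumptions, not 
the patch itself), the test utilities could derive their stripe geometry from 
an injected policy instead of hard-coding the RS 6+3 defaults:

{code:java}
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

// A loose sketch, not the actual refactoring: parameterize striped-file tests
// by an ErasureCodingPolicy so the same code can exercise RS-6-3, XOR-2-1, etc.
public abstract class StripedFileTestBase {
  protected final ErasureCodingPolicy ecPolicy;
  protected final int dataBlocks;    // 6 for RS-6-3, 2 for XOR-2-1
  protected final int parityBlocks;  // 3 for RS-6-3, 1 for XOR-2-1
  protected final int cellSize;      // e.g. 64k

  protected StripedFileTestBase(ErasureCodingPolicy policy) {
    this.ecPolicy = policy;
    this.dataBlocks = policy.getNumDataUnits();
    this.parityBlocks = policy.getNumParityUnits();
    this.cellSize = policy.getCellSize();
  }

  /** Width in bytes of one full stripe (one cell per data block). */
  protected int fullStripeSize() {
    return cellSize * dataBlocks;
  }
}
{code}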



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11116) Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue

2016-11-11 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656653#comment-15656653
 ] 

Yiqun Lin commented on HDFS-11116:
--

Thanks [~ajisakaa] for the commit!

> Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue
> --
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch, 
> HDFS-11116.003.patch
>
>
> There were some javac warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} in place of the 
> deprecated API {{getDefaultBlockSize()}}. The same goes for 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.
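
For illustration, the non-deprecated overloads look roughly like this ("/data" 
is an arbitrary example path, not taken from the test):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

// Sketch of the replacement calls. Resolving through a real mount point
// returns the target filesystem's defaults; a not-in-mountpoint path makes
// ViewFileSystem throw NotInMountpointException, which is what the test
// wants to exercise.
FileSystem fs = FileSystem.get(new Configuration());
Path testPath = new Path("/data");
long blockSize = fs.getDefaultBlockSize(testPath);
short replication = fs.getDefaultReplication(testPath);
FsServerDefaults serverDefaults = fs.getServerDefaults(testPath);
{code}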



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-11 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-11068:
-
Release Note:   (was: I have just committed this to branch)

> [SPS]: Provide unique trackID to track the block movement sends to coordinator
> --
>
> Key: HDFS-11068
> URL: https://issues.apache.org/jira/browse/HDFS-11068
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: HDFS-10285
>
> Attachments: HDFS-11068-HDFS-10285-01.patch, 
> HDFS-11068-HDFS-10285-02.patch, HDFS-11068-HDFS-10285-03.patch, 
> HDFS-11068-HDFS-10285.patch
>
>
> Presently DatanodeManager uses the constant value -1 as the 
> [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607],
>  which is a temporary value. As per discussion with [~umamaheswararao], one 
> proposal is to use {{BlockCollectionId/InodeFileId}}.
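
Loosely sketched, the proposal amounts to something like the following 
(accessor and type names assumed, not the committed patch):

{code:java}
import org.apache.hadoop.hdfs.server.blockmanagement.BlockCollection;

// Hypothetical sketch: derive the trackId from the file's inode id (the
// BlockCollection id) rather than the placeholder -1, so the block-movement
// results a coordinator datanode reports back can be matched to the
// originating file.
static long trackIdFor(BlockCollection bc) {
  return bc.getId();  // the inode file id is unique and stable per file
}
{code}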



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-11 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11068:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-10285
 Release Note: I have just committed this to branch
   Status: Resolved  (was: Patch Available)

> [SPS]: Provide unique trackID to track the block movement sends to coordinator
> --
>
> Key: HDFS-11068
> URL: https://issues.apache.org/jira/browse/HDFS-11068
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: HDFS-10285
>
> Attachments: HDFS-11068-HDFS-10285-01.patch, 
> HDFS-11068-HDFS-10285-02.patch, HDFS-11068-HDFS-10285-03.patch, 
> HDFS-11068-HDFS-10285.patch
>
>
> Presently DatanodeManager uses the constant value -1 as the 
> [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607],
>  which is a temporary value. As per discussion with [~umamaheswararao], one 
> proposal is to use {{BlockCollectionId/InodeFileId}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-11 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656591#comment-15656591
 ] 

Uma Maheswara Rao G commented on HDFS-11068:


+1 on the latest patch

> [SPS]: Provide unique trackID to track the block movement sends to coordinator
> --
>
> Key: HDFS-11068
> URL: https://issues.apache.org/jira/browse/HDFS-11068
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11068-HDFS-10285-01.patch, 
> HDFS-11068-HDFS-10285-02.patch, HDFS-11068-HDFS-10285-03.patch, 
> HDFS-11068-HDFS-10285.patch
>
>
> Presently DatanodeManager uses the constant value -1 as the 
> [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607],
>  which is a temporary value. As per discussion with [~umamaheswararao], one 
> proposal is to use {{BlockCollectionId/InodeFileId}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org