[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-09-27 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528522#comment-15528522
 ] 

Jingcheng Du commented on HDFS-9668:


Hi [~arpitagarwal], I read the code in HDFS-10828. I think the callers of 
ReplicaMap can guarantee the synchronization, and I need the 
synchronized(mutex) to avoid concurrent modification in ReplicaMap; besides, 
nested locks would probably lead to deadlocks. So I have to change the 
AutoCloseableLock back to synchronized(mutex) in ReplicaMap in my next patch. 
Is that okay? Please advise, thanks.
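The tradeoff being discussed (a private synchronized(mutex) inside ReplicaMap instead of sharing the dataset-level AutoCloseableLock) can be sketched with plain JDK types. The class and member names below are illustrative, not the actual HDFS code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (not the real ReplicaMap): every access to the internal map
// is guarded by a private mutex, so concurrent modification cannot corrupt
// the map even when callers also hold their own coarser locks. Because the
// mutex is private and never held while calling out to other code, it cannot
// participate in a lock-ordering cycle with the callers' locks, avoiding the
// nested-lock deadlock risk mentioned above.
class SynchronizedBlockMap {
  private final Object mutex = new Object();
  private final Map<Long, String> map = new HashMap<>();

  String put(long blockId, String replica) {
    synchronized (mutex) {   // short critical section, no external calls
      return map.put(blockId, replica);
    }
  }

  String get(long blockId) {
    synchronized (mutex) {
      return map.get(blockId);
    }
  }

  int size() {
    synchronized (mutex) {
      return map.size();
    }
  }
}
```

The key property is that the mutex is acquired last and released before returning, so it sits at the bottom of any lock hierarchy the callers impose.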

> Optimize the locking in FsDatasetImpl
> -------------------------------------
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-2.patch, HDFS-9668-3.patch, 
> HDFS-9668-4.patch, HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, 
> HDFS-9668-8.patch, HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
> {noformat}
> We measured the execution time of some operations in FsDatasetImpl during 
> the test. The results are shown below.
> !execution_time.png!
> The finalizeBlock, addBlock and createRbw operations on HDD under heavy 
> load take a very long time.
> This means that one slow finalizeBlock, addBlock or createRbw operation on 
> slow storage can block all other such operations in the same DataNode, 
> especially for HBase when many WAL/flusher/compactor threads are configured.
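The optimization direction this issue pursues can be illustrated with JDK-only types: replace one dataset-wide monitor with one lock per storage volume, so a slow createRbw on an HDD volume no longer blocks writers on SSD/RAMDISK volumes. All names here are hypothetical; the actual patch restructures FsDatasetImpl itself.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of per-volume locking: each volume path maps to its own lock, so
// block operations on different volumes proceed in parallel instead of
// serializing on a single object monitor as in the jstack output above.
class PerVolumeLocks {
  private final ConcurrentHashMap<String, ReentrantLock> locks =
      new ConcurrentHashMap<>();

  // One lock per volume (e.g. "/data/hdd1", "/data/ssd1"), created lazily.
  ReentrantLock lockFor(String volume) {
    return locks.computeIfAbsent(volume, v -> new ReentrantLock());
  }

  void withVolumeLock(String volume, Runnable op) {
    ReentrantLock lock = lockFor(volume);
    lock.lock();
    try {
      op.run();   // e.g. create an RBW file on this volume only
    } finally {
      lock.unlock();
    }
  }
}
```

With this structure, a thread blocked on an HDD volume lock does not prevent another DataXceiver from acquiring the SSD volume lock.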

[jira] [Updated] (HDFS-9444) Add utility to find set of available ephemeral ports to ServerSocketUtil

2016-09-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9444:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   2.8.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-2 and branch-2.8. Thanks to [~iwasakims] for the contribution 
and [~xiaochen] for the review.

> Add utility to find set of available ephemeral ports to ServerSocketUtil
> -------------------------------------------------------------------------
>
> Key: HDFS-9444
> URL: https://issues.apache.org/jira/browse/HDFS-9444
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: test
>Reporter: Brahma Reddy Battula
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9444-branch-2.006.patch, 
> HDFS-9444-branch-2.007.patch, HDFS-9444.001.patch, HDFS-9444.002.patch, 
> HDFS-9444.003.patch, HDFS-9444.004.patch, HDFS-9444.005.patch, 
> HDFS-9444.006.patch
>
>
> Unit tests using MiniDFSCluster with namenode-ha enabled need a set of port 
> numbers in advance. Because the namenodes talk to each other, we cannot set 
> the ipc port to 0 in the configuration to let each namenode pick a port 
> number on its own. ServerSocketUtil should provide a utility to find a set 
> of available ephemeral port numbers for this.
> For example, TestEditLogTailer could fail due to {{java.net.BindException: 
> Address already in use}}.
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2556/testReport/
> {noformat}
> java.net.BindException: Problem binding to [localhost:42477] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:695)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2464)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:390)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:742)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:680)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:883)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:862)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1564)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRolls(TestEditLogTailer.java:139)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls(TestEditLogTailer.java:114)
> {noformat}
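The proposed utility can be sketched with the JDK alone: bind a ServerSocket to port 0 so the OS assigns a free ephemeral port, record it, and keep the socket open until all ports are collected so the same port is not handed out twice. This mirrors the idea only; it is not the exact ServerSocketUtil API added by the patch.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.ArrayList;
import java.util.List;

// Find `num` distinct free ephemeral ports. Holding every socket open until
// the end of the loop guarantees the OS cannot assign the same port twice
// within one call.
class EphemeralPorts {
  static List<Integer> getPorts(int num) throws IOException {
    List<ServerSocket> sockets = new ArrayList<>();
    List<Integer> ports = new ArrayList<>();
    try {
      for (int i = 0; i < num; i++) {
        ServerSocket s = new ServerSocket(0);   // port 0 = OS picks a free one
        sockets.add(s);
        ports.add(s.getLocalPort());
      }
    } finally {
      for (ServerSocket s : sockets) {
        s.close();   // release so the caller can bind these ports
      }
    }
    return ports;
  }
}
```

Note the inherent race: another process could grab a port between close() and the caller's bind, which is why tests typically retry on BindException as well.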



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10919) Provide admin/debug tool to dump out info of a given block

2016-09-27 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10919:


 Summary: Provide admin/debug tool to dump out info of a given block
 Key: HDFS-10919
 URL: https://issues.apache.org/jira/browse/HDFS-10919
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs
Reporter: Yongjun Zhang


We have fsck to find out the blocks associated with a file, which is nice. 
Sometimes we see trouble with a specific block and would like to collect info 
about it, such as:
* what file this block belongs to,
* where the replicas of this block are located,
* whether the block is EC coded,
* if the block is EC coded, whether it's a data block or a parity (code) block,
* if the block is EC coded, what the codec is,
* if the block is EC coded, what the block group is,
* for the block group, what the other blocks are.

Creating this JIRA to provide such a utility, either as a dfsadmin command or 
a debug tool.

Thanks.








[jira] [Commented] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528473#comment-15528473
 ] 

Hudson commented on HDFS-9850:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10505 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10505/])
HDFS-9850. DiskBalancer: Explore removing references to FsVolumeSpi. 
(aengineer: rev 03f519a757ce83d76e7fc9f6aadf271e38bb9f6d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancer.java


> DiskBalancer : Explore removing references to FsVolumeSpi 
> ----------------------------------------------------------
>
> Key: HDFS-9850
> URL: https://issues.apache.org/jira/browse/HDFS-9850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9850.001.patch, HDFS-9850.002.patch, 
> HDFS-9850.003.patch, HDFS-9850.004.patch
>
>
> In HDFS-9671, [~arpitagarwal] commented that we should explore the 
> possibility of removing references to FsVolumeSpi at any point and only 
> dealing with the storage ID. We are not sure if this is possible; this JIRA 
> is to explore whether that can be done without issues.






[jira] [Commented] (HDFS-10883) `getTrashRoot`'s behavior is not consistent in DFS after enabling EZ.

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528455#comment-15528455
 ] 

Hadoop QA commented on HDFS-10883:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10883 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830618/HDFS-10883.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c4e56ecbaf7a 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6437ba1 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16897/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16897/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16897/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528437#comment-15528437
 ] 

Mingliang Liu commented on HDFS-10892:
--

Thank you [~aw] for your explanation. It is very helpful for understanding why 
the locale was not enforced to UTF-8 in the test.

However, I did not find an easy way to set the LANG=UTF-8 encoding in Java 
(kindly let me know if there is one). {{Locale.setDefault()}} does not seem to 
work here either. Per an offline discussion with [~jnp], we decided not to 
unit test UTF-8 for now. Mixing locale and file content encoding is better 
tested in end-to-end system tests.
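The observation above can be demonstrated with the JDK alone: Locale.setDefault() changes the JVM's locale but not its default charset, which is fixed from the "file.encoding" system property at JVM startup. That is why a unit test cannot switch to UTF-8 handling of file contents at runtime by changing the locale.

```java
import java.nio.charset.Charset;
import java.util.Locale;

// Shows that the default charset is decoupled from the default locale:
// changing the locale at runtime leaves Charset.defaultCharset() untouched.
class LocaleVsCharset {
  static boolean charsetSurvivesLocaleChange() {
    Locale original = Locale.getDefault();
    Charset before = Charset.defaultCharset();
    try {
      Locale.setDefault(Locale.JAPAN);      // any locale change would do
      return before.equals(Charset.defaultCharset());
    } finally {
      Locale.setDefault(original);          // restore for other tests
    }
  }
}
```

The only reliable knobs are the LANG/LC_ALL environment of the launching process or `-Dfile.encoding=UTF-8` on the JVM command line, neither of which a running test can change for itself.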

> Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'
> ---
>
> Key: HDFS-10892
> URL: https://issues.apache.org/jira/browse/HDFS-10892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, shell, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10892.000.patch, HDFS-10892.001.patch, 
> HDFS-10892.002.patch, HDFS-10892.003.patch, HDFS-10892.004.patch, 
> HDFS-10892.005.patch
>
>
> I did not find unit tests in {{trunk}} for the following cases:
> - HDFS command {{dfs -tail}}
> - HDFS command {{dfs -stat}}
> I think it is still worthwhile to add them, even though these commands have 
> served us for years.






[jira] [Updated] (HDFS-10893) Refactor TestDFSShell by setting up MiniDFSCluster once for all commands test

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10893:
-
Attachment: HDFS-10893-branch-2.000.patch

Attaching the patch for {{branch-2}} now.

> Refactor TestDFSShell by setting up MiniDFSCluster once for all commands test
> -----------------------------------------------------------------------------
>
> Key: HDFS-10893
> URL: https://issues.apache.org/jira/browse/HDFS-10893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10893-branch-2.000.patch, HDFS-10893.000.patch
>
>
> It seems that setting up MiniDFSCluster once for all command tests will 
> reduce the total time. To share a global cluster, the tests should use 
> individual test directories to avoid conflicts between test cases. 
> Meanwhile, the MiniDFSCluster should not use the default root data 
> directory; otherwise, tests are not able to launch additional clusters by 
> default.
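The share-one-expensive-fixture pattern described above can be sketched with JDK types only; the names below stand in for the MiniDFSCluster setup and are illustrative (TestDFSShell itself uses HDFS paths, not local ones):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// One expensive shared resource (here a base temp directory, standing in for
// a MiniDFSCluster) is created once; each test case then gets its own unique
// working directory under it, so tests sharing the cluster cannot collide.
class SharedFixture {
  private static Path baseDir;        // lazily created once, like the cluster
  private static int testCounter = 0;

  static synchronized Path dirForTest(String testName) throws IOException {
    if (baseDir == null) {
      // Deliberately NOT the default root data directory, so another
      // cluster can still start with default settings alongside this one.
      baseDir = Files.createTempDirectory("shared-cluster-");
    }
    Path d = baseDir.resolve(testName + "-" + (testCounter++));
    Files.createDirectories(d);
    return d;
  }
}
```

Each test then reads and writes only under its own directory, which is what makes the single shared cluster safe.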






[jira] [Updated] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi

2016-09-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-9850:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

[~manojg] Thanks for fixing this issue. I have committed this to trunk.

> DiskBalancer : Explore removing references to FsVolumeSpi 
> ----------------------------------------------------------
>
> Key: HDFS-9850
> URL: https://issues.apache.org/jira/browse/HDFS-9850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9850.001.patch, HDFS-9850.002.patch, 
> HDFS-9850.003.patch, HDFS-9850.004.patch
>
>
> In HDFS-9671, [~arpitagarwal] commented that we should explore the 
> possibility of removing references to FsVolumeSpi at any point and only 
> dealing with the storage ID. We are not sure if this is possible; this JIRA 
> is to explore whether that can be done without issues.






[jira] [Commented] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528359#comment-15528359
 ] 

Hudson commented on HDFS-10915:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10504 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10504/])
HDFS-10915. Fix time measurement bug in TestDatanodeRestart. Contributed 
(liuml07: rev 6437ba18c5c26bc271a63aff5ea03756f43dd9a3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java


> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> --------------------------------------------------------------------------------
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Fix For: 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch
>
>
> The message of the IOException should say milliseconds, not seconds.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 10000) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}
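The fix discussed above can be illustrated outside Hadoop with a JDK-only sketch: report the elapsed interval in the unit it was measured in, and prefer the monotonic System.nanoTime() over System.currentTimeMillis() for measuring durations. The helper name is hypothetical, not the patched test code.

```java
import java.util.concurrent.TimeUnit;

// Checks that a measured interval meets a minimum, reporting the value in
// milliseconds (the unit it was measured in) rather than mislabeling it as
// seconds, which was the bug in the quoted snippet.
class ElapsedCheck {
  static String checkAtLeast(long startNanos, long endNanos, long minMillis) {
    long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(endNanos - startNanos);
    if (elapsedMillis < minMillis) {
      return "only " + elapsedMillis + " ms passed, expected >= "
          + minMillis + " ms";
    }
    return "ok: " + elapsedMillis + " ms";
  }
}
```

Callers would capture startNanos = System.nanoTime() before the timed operation and endNanos after it; nanoTime is unaffected by wall-clock adjustments, which currentTimeMillis is not.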






[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528355#comment-15528355
 ] 

Hadoop QA commented on HDFS-10690:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-client: The patch 
generated 0 new + 12 unchanged - 4 fixed = 12 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10690 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830623/HDFS-10690.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 167e21fdd8c5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6437ba1 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16898/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16898/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>

[jira] [Commented] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi

2016-09-27 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528338#comment-15528338
 ] 

Manoj Govindassamy commented on HDFS-9850:
--

Unit test failures are not related to the attached patch.

> DiskBalancer : Explore removing references to FsVolumeSpi 
> ----------------------------------------------------------
>
> Key: HDFS-9850
> URL: https://issues.apache.org/jira/browse/HDFS-9850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9850.001.patch, HDFS-9850.002.patch, 
> HDFS-9850.003.patch, HDFS-9850.004.patch
>
>
> In HDFS-9671, [~arpitagarwal] commented that we should explore the 
> possibility of removing references to FsVolumeSpi at any point and only 
> dealing with the storage ID. We are not sure if this is possible; this JIRA 
> is to explore whether that can be done without issues.






[jira] [Commented] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528317#comment-15528317
 ] 

Hadoop QA commented on HDFS-10918:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10918 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830616/HDFS-10918.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux b5ae09d8ed2a 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d144398 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16895/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| 

[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10915:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.7.4
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} through {{branch-2.7}}. Thanks for the contribution, 
[~xiaobingo].

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Fix For: 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch
>
>
> The IOException message should say milliseconds, not seconds.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}
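As a hedged illustration of the fix, the sketch below reports the elapsed time in the unit it was actually measured in. The class name, method name, and the 10-second upper bound are assumptions for illustration, not the actual TestDatanodeRestart code:

```java
// Illustrative sketch only: "ElapsedCheck", "describe", and the 10-second
// upper bound are assumed names/values, not the real test code.
public class ElapsedCheck {
    // Returns "ok" when the measured wait fell inside the expected window,
    // otherwise a message naming the unit actually measured: milliseconds.
    static String describe(long startMillis, long endMillis) {
        long elapsed = endMillis - startMillis;
        // Timers have at-least semantics, so the wait should be >= 5 seconds;
        // an assumed 10-second upper bound catches runaway waits.
        if (elapsed < 5000 || elapsed > 10000) {
            return elapsed + " milliseconds passed.";  // was wrongly "seconds"
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(describe(0, 6000));  // inside the window
        System.out.println(describe(0, 1000));  // too short
    }
}
```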



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10910) HDFS Erasure Coding doc should state its currently supported erasure coding policies

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528292#comment-15528292
 ] 

Hadoop QA commented on HDFS-10910:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10910 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830621/HDFS-10910.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 17aaf6507875 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6437ba1 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16896/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HDFS Erasure Coding doc should state its currently supported erasure coding 
> policies
> 
>
> Key: HDFS-10910
> URL: https://issues.apache.org/jira/browse/HDFS-10910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10910.001.patch
>
>
> While HDFS Erasure Coding doc states a variety of possible combinations of 
> algorithms, block group size and cell size, the code (as of 3.0.0-alpha1) 
> allows only three policies: RS_6_3_SCHEMA, RS_3_2_SCHEMA and 
> RS_6_3_LEGACY_SCHEMA. All with default cell size. I think this should be 
> documented.






[jira] [Commented] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528260#comment-15528260
 ] 

Hadoop QA commented on HDFS-10915:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
10s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830609/HDFS-10915.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d4da7700de37 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d144398 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16893/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16893/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |





> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch
>
>
> The IOException message should say milliseconds, not seconds.
> {code}
> } catch 

[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-09-27 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Attachment: HDFS-10690.006.patch

Re-submitting patch v6 to trigger a Jenkins build.

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch, 
> HDFS-10690.003.patch, HDFS-10690.004.patch, HDFS-10690.005.patch, 
> HDFS-10690.006.patch, ShortCircuitCache_LinkedMap.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new 
> TreeMap<>();
> TreeMap uses a red-black tree for sorting. This isn't an issue with a 
> traditional HDD, but with high-performance SSD/PCIe flash the cost of 
> inserting/removing an entry becomes considerable.
> To mitigate this, we designed a new list-based structure for replica tracking.
> The list is a doubly-linked FIFO. FIFO order is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this, we introduce two references into the ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, no lookup is needed when removing a replica from the list; we 
> only need to modify its predecessor's and successor's references.
> Our tests showed a 15-50% performance improvement when using PCIe flash 
> as the storage medium.
> The original patch is against 2.6.4; I am now porting it to Hadoop trunk, and 
> a patch will be posted soon.
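The description above can be sketched as a minimal intrusive doubly-linked FIFO. Here "Replica" stands in for ShortCircuitReplica, and all names are illustrative rather than taken from the actual patch:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the intrusive doubly-linked FIFO described above.
// "Replica" stands in for ShortCircuitReplica; names are illustrative.
public class IntrusiveFifo {
    static class Replica {
        final long id;
        Replica prev, next;     // intrusive links: no lookup needed to unlink
        Replica(long id) { this.id = id; }
    }

    private Replica head, tail; // head = oldest entry (eviction candidate)

    // O(1): append at the tail; FIFO order doubles as insertion-time order.
    void add(Replica r) {
        r.prev = tail;
        r.next = null;
        if (tail != null) tail.next = r; else head = r;
        tail = r;
    }

    // O(1): unlink via the replica's own prev/next references,
    // avoiding the O(log n) removal cost of a TreeMap.
    void remove(Replica r) {
        if (r.prev != null) r.prev.next = r.next; else head = r.next;
        if (r.next != null) r.next.prev = r.prev; else tail = r.prev;
        r.prev = r.next = null;
    }

    List<Long> ids() {
        List<Long> out = new ArrayList<>();
        for (Replica r = head; r != null; r = r.next) out.add(r.id);
        return out;
    }
}
```

Because each replica carries its own prev/next links, remove() only touches the neighbors' references — constant time, versus the rebalancing a TreeMap removal incurs.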






[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-09-27 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Attachment: (was: HDFS-10690.006.patch)

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch, 
> HDFS-10690.003.patch, HDFS-10690.004.patch, HDFS-10690.005.patch, 
> ShortCircuitCache_LinkedMap.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new 
> TreeMap<>();
> TreeMap uses a red-black tree for sorting. This isn't an issue with a 
> traditional HDD, but with high-performance SSD/PCIe flash the cost of 
> inserting/removing an entry becomes considerable.
> To mitigate this, we designed a new list-based structure for replica tracking.
> The list is a doubly-linked FIFO. FIFO order is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this, we introduce two references into the ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, no lookup is needed when removing a replica from the list; we 
> only need to modify its predecessor's and successor's references.
> Our tests showed a 15-50% performance improvement when using PCIe flash 
> as the storage medium.
> The original patch is against 2.6.4; I am now porting it to Hadoop trunk, and 
> a patch will be posted soon.






[jira] [Commented] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528230#comment-15528230
 ] 

Hadoop QA commented on HDFS-9850:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tracing.TestTracing |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9850 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830605/HDFS-9850.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a779b26c5db0 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d144398 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16892/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16892/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16892/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |





> DiskBalancer : Explore removing references to FsVolumeSpi 
> --
>
> Key: HDFS-9850
> URL: https://issues.apache.org/jira/browse/HDFS-9850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>

[jira] [Commented] (HDFS-10900) DiskBalancer: Complete the documents for the report command

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528219#comment-15528219
 ] 

Hudson commented on HDFS-10900:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10503 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10503/])
HDFS-10900. DiskBalancer: Complete the documents for the report command. 
(aengineer: rev 9c9736463b2b30350c78fce4fa0d56c73280d0ff)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md


> DiskBalancer: Complete the documents for the report command
> ---
>
> Key: HDFS-10900
> URL: https://issues.apache.org/jira/browse/HDFS-10900
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10900.001.patch
>
>
> The documents for the {{hdfs diskbalancer -report}} command are currently 
> incomplete. Two minor issues:
> * The usage of {{hdfs diskbalancer -report}} is missing in {{HDFSCommands.md}}
> * One subcommand of the report command, {{hdfs diskbalancer -report -top}}, is 
> missing in {{HDFSDiskBalancer.md}}






[jira] [Comment Edited] (HDFS-10910) HDFS Erasure Coding doc should state its currently supported erasure coding policies

2016-09-27 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528191#comment-15528191
 ] 

Yiqun Lin edited comment on HDFS-10910 at 9/28/16 2:52 AM:
---

Attached an initial patch for this. I found that some sequence numbers are not 
shown correctly in the Hadoop documentation page, so I also fixed that in my patch.



was (Author: linyiqun):
Attach a initial patch for this. I found that some sequence number not shows 
right in hadoop documentaion page, I also make a fix in my latch.


> HDFS Erasure Coding doc should state its currently supported erasure coding 
> policies
> 
>
> Key: HDFS-10910
> URL: https://issues.apache.org/jira/browse/HDFS-10910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10910.001.patch
>
>
> While HDFS Erasure Coding doc states a variety of possible combinations of 
> algorithms, block group size and cell size, the code (as of 3.0.0-alpha1) 
> allows only three policies: RS_6_3_SCHEMA, RS_3_2_SCHEMA and 
> RS_6_3_LEGACY_SCHEMA. All with default cell size. I think this should be 
> documented.






[jira] [Updated] (HDFS-10910) HDFS Erasure Coding doc should state its currently supported erasure coding policies

2016-09-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10910:
-
Attachment: HDFS-10910.001.patch

> HDFS Erasure Coding doc should state its currently supported erasure coding 
> policies
> 
>
> Key: HDFS-10910
> URL: https://issues.apache.org/jira/browse/HDFS-10910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10910.001.patch
>
>
> While HDFS Erasure Coding doc states a variety of possible combinations of 
> algorithms, block group size and cell size, the code (as of 3.0.0-alpha1) 
> allows only three policies: RS_6_3_SCHEMA, RS_3_2_SCHEMA and 
> RS_6_3_LEGACY_SCHEMA. All with default cell size. I think this should be 
> documented.






[jira] [Updated] (HDFS-10910) HDFS Erasure Coding doc should state its currently supported erasure coding policies

2016-09-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10910:
-
Status: Patch Available  (was: Open)

Attached an initial patch for this. I found that some sequence numbers are not 
shown correctly in the Hadoop documentation page, so I also fixed that in my patch.


> HDFS Erasure Coding doc should state its currently supported erasure coding 
> policies
> 
>
> Key: HDFS-10910
> URL: https://issues.apache.org/jira/browse/HDFS-10910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10910.001.patch
>
>
> While HDFS Erasure Coding doc states a variety of possible combinations of 
> algorithms, block group size and cell size, the code (as of 3.0.0-alpha1) 
> allows only three policies: RS_6_3_SCHEMA, RS_3_2_SCHEMA and 
> RS_6_3_LEGACY_SCHEMA. All with default cell size. I think this should be 
> documented.






[jira] [Commented] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS

2016-09-27 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528179#comment-15528179
 ] 

Yuanbo Liu commented on HDFS-10756:
---

I'm working on a related JIRA (HDFS-10883), and I will focus on this one 
after I fix that.

> Expose getTrashRoot to HTTPFS and WebHDFS
> -
>
> Key: HDFS-10756
> URL: https://issues.apache.org/jira/browse/HDFS-10756
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, httpfs, webhdfs
>Reporter: Xiao Chen
>Assignee: Yuanbo Liu
> Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch
>
>
> Currently, hadoop FileSystem API has 
> [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708]
>  to determine trash directory at run time. Default trash dir is under 
> {{/user/$USER}}
> For an encrypted file, since moving files between/in/out of EZs is not 
> allowed, when an EZ file is deleted via the CLI, it calls into the [DFS 
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485]
>  to move the file to a trash directory under the same EZ.
> This works fine for CLI users or Java users who call the FileSystem API, but 
> for users going through httpfs/webhdfs there is currently no way to figure 
> out what the trash root would be. This JIRA proposes adding such an 
> interface to httpfs and webhdfs.






[jira] [Commented] (HDFS-10314) A new tool to sync current HDFS view to specified snapshot

2016-09-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528169#comment-15528169
 ] 

Yongjun Zhang commented on HDFS-10314:
--

Hi [~jingzhao],

I tried to provide details in my earlier replies, so they tend to be lengthy. 
Now I'd like to ask a couple of quick questions about your latest proposal (I 
asked them in an earlier reply too, but they were buried). I appreciate your 
taking the time to reply.

{quote}
In that sense, I think a simpler way is to wrap (but not extend) the current 
distcp in the snapshot-restore tool:
1. The tool takes a single cluster and a target snapshot as arguments
2. The tool computes the delta for restoring using snapshot diff report
3. The tool does rename/delete etc. metadata ops to revert part of the diff
4. The tool uses the distcp (by invokes distcp as a library) to copy the 
original states of modified files
{quote}

Q1: Step 2 computes the snapshot diff as you described; does it also collect 
the modified files and pass them to step 4?

Q2: Or does step 4 do the snapshot diff calculation itself?
Thanks much.



> A new tool to sync current HDFS view to specified snapshot
> --
>
> Key: HDFS-10314
> URL: https://issues.apache.org/jira/browse/HDFS-10314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-10314.001.patch
>
>
> HDFS-9820 proposed adding -rdiff switch to distcp, as a reversed operation of 
> -diff switch. 
> Upon discussion with [~jingzhao], we will introduce a new tool that wraps 
> around distcp to achieve the same purpose.
> I'm thinking about calling the new tool "rsync", similar to unix/linux 
> command "rsync". The "r" here means remote.
> The syntax that simulate -rdiff behavior proposed in HDFS-9820 is
> {code}
> rsync  
> {code}
> This command ensure   is newer than .
> I think, in the future, we can add another command that provides the 
> functionality of the -diff switch of distcp.
> {code}
> sync  
> {code}
> that ensures   is older than .
> Thanks [~jingzhao].






[jira] [Updated] (HDFS-10883) `getTrashRoot`'s behavior is not consistent in DFS after enabling EZ.

2016-09-27 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10883:
--
Attachment: HDFS-10883.003.patch

Uploaded the v3 patch to address the test failure.

> `getTrashRoot`'s behavior is not consistent in DFS after enabling EZ.
> -
>
> Key: HDFS-10883
> URL: https://issues.apache.org/jira/browse/HDFS-10883
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10883-test-case.txt, HDFS-10883.001.patch, 
> HDFS-10883.002.patch, HDFS-10883.003.patch
>
>
> Let's say the root path ("/") is the encryption zone, and there is a file 
> called "/test" in the root path.
> {code}
> dfs.getTrashRoot(new Path("/"))
> {code}
> returns "/user/$USER/.Trash",
> while
> {code}
> dfs.getTrashRoot(new Path("/test"))
> {code} 
> returns "/.Trash/$USER".
> Please see the attachment to know how to reproduce this issue.
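The inconsistency above is easiest to see in code. A minimal, hypothetical sketch of consistent trash-root resolution — any path inside an encryption zone, including the zone root itself, should map to the zone-local trash. The class and method names here are illustrative; this is not the actual DistributedFileSystem implementation.

```java
class TrashRootResolver {
    private final String ezRoot;   // e.g. "/" if the root is the encryption zone

    TrashRootResolver(String ezRoot) { this.ezRoot = ezRoot; }

    // Returns the trash root for a path; paths inside the encryption zone
    // (including the zone root) get the zone-local trash.
    String getTrashRoot(String path, String user) {
        if (isInEncryptionZone(path)) {
            // Zone-local trash: same answer for "/" and "/test" when ezRoot is "/".
            return (ezRoot.equals("/") ? "" : ezRoot) + "/.Trash/" + user;
        }
        return "/user/" + user + "/.Trash";
    }

    private boolean isInEncryptionZone(String path) {
        return path.equals(ezRoot)
            || path.startsWith(ezRoot.equals("/") ? "/" : ezRoot + "/");
    }
}
```

With this, both {{dfs.getTrashRoot(new Path("/"))}} and {{dfs.getTrashRoot(new Path("/test"))}} would resolve to the same zone-local trash root.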



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10900) DiskBalancer: Complete the documents for the report command

2016-09-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10900:

  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: 3.0.0-alpha2
  Status: Resolved  (was: Patch Available)

[~linyiqun] Thank you for the contribution. I have committed this to trunk.

> DiskBalancer: Complete the documents for the report command
> ---
>
> Key: HDFS-10900
> URL: https://issues.apache.org/jira/browse/HDFS-10900
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10900.001.patch
>
>
> Now the documents of the command {{hdfs diskbalancer -report}} look 
> incomplete. Two minor issues:
> * The usage of {{hdfs diskbalancer -report}} is missing in {{HDFSCommands.md}}
> * One subcommand of the report command, {{hdfs diskbalancer -report -top}}, is 
> missing in {{HDFSDiskBalancer.md}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10900) DiskBalancer: Complete the documents for the report command

2016-09-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527010#comment-15527010
 ] 

Anu Engineer edited comment on HDFS-10900 at 9/28/16 2:16 AM:
--

My sincere apologies that my comment was so cryptic.

There was a check-in in the tree that changed the Apache logo. This has broken 
the whole site rendering. I applied your patch, was testing it, and noticed 
that the rendering was completely broken. I wanted to file a separate JIRA for 
that issue before I commit this code.

Please note: this has nothing to do with your change; that is why I mentioned 
that I will check it in later. Without being able to render the page 
correctly, I have no way of verifying your change, so I am holding off on 
committing this. We will either back out the change in trunk (the logo change) 
or someone with the right CSS skills will fix it. I would like to commit your 
changes after that.

To see this issue yourself, you might want to run:

# mvn site:site
# mvn site:stage -DstagingDirectory=/tmp/mysite
# open /tmp/mysite/hadoop-project/index.html

Then browse around the pages -- especially the Hadoop commands and 
HDFSDiskBalancer pages. Again, I am +1 on this change, but I want to commit 
when the code base is in better shape.





> DiskBalancer: Complete the documents for the report command
> ---
>
> Key: HDFS-10900
> URL: https://issues.apache.org/jira/browse/HDFS-10900
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10900.001.patch
>
>
> Now the documents of the command {{hdfs diskbalancer -report}} look 
> incomplete. Two minor issues:
> * The usage of {{hdfs diskbalancer -report}} is missing in {{HDFSCommands.md}}
> * One subcommand of the report command, {{hdfs diskbalancer -report -top}}, is 
> missing in {{HDFSDiskBalancer.md}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10900) DiskBalancer: Complete the documents for the report command

2016-09-27 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528102#comment-15528102
 ] 

Yiqun Lin commented on HDFS-10900:
--

Thanks [~anu] for the patient explanation. It seems the Hadoop documentation 
rendering has been fixed. I just took a look at the Hadoop page 
(http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html), 
and it seems OK now.

> DiskBalancer: Complete the documents for the report command
> ---
>
> Key: HDFS-10900
> URL: https://issues.apache.org/jira/browse/HDFS-10900
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10900.001.patch
>
>
> Now the documents of the command {{hdfs diskbalancer -report}} look 
> incomplete. Two minor issues:
> * The usage of {{hdfs diskbalancer -report}} is missing in {{HDFSCommands.md}}
> * One subcommand of the report command, {{hdfs diskbalancer -report -top}}, is 
> missing in {{HDFSDiskBalancer.md}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10426) TestPendingInvalidateBlock failed in trunk

2016-09-27 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528082#comment-15528082
 ] 

Yiqun Lin commented on HDFS-10426:
--

Thanks [~liuml07] for pointing this out. It seems that 
{{TestPendingInvalidateBlock#testPendingDeletion}} still fails sometimes (it 
also appeared in HDFS-10915). It seems that the blockManager still schedules 
the invalidated blocks even though we have already made the method 
{{getInvalidationDelay}} return 1, indicating that we don't want to delete 
blocks right now. I'm not sure if there is some race here. Can we delay the 
deletion operation and skip the current loop in ReplicationMonitor? In the 
next loop, I think the mockito stub will take effect. Ping [~iwasakims] for 
comments.

> TestPendingInvalidateBlock failed in trunk
> --
>
> Key: HDFS-10426
> URL: https://issues.apache.org/jira/browse/HDFS-10426
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10426.001.patch, HDFS-10426.002.patch, 
> HDFS-10426.003.patch, HDFS-10426.004.patch, HDFS-10426.005.patch, 
> HDFS-10426.006.patch
>
>
> The test {{TestPendingInvalidateBlock}} failed sometimes. The stack info:
> {code}
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
> testPendingDeletion(org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock)
>   Time elapsed: 7.703 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)
> {code}
> It looks like the {{invalidateBlock}} has been removed before we do the check
> {code}
> // restart NN
> cluster.restartNameNode(true);
> dfs.delete(foo, true);
> Assert.assertEquals(0, cluster.getNamesystem().getBlocksTotal());
> Assert.assertEquals(REPLICATION, cluster.getNamesystem()
> .getPendingDeletionBlocks());
> Assert.assertEquals(REPLICATION,
> dfs.getPendingDeletionBlocksCount());
> {code}
> I looked into the related configurations and found that the property 
> {{dfs.namenode.replication.interval}} was set to just 1 second in this test. 
> If the delete operation is slow after the delay time of 
> {{dfs.namenode.startup.delay.block.deletion.sec}}, this failure can occur. As 
> the stack info above shows, the failed test took 7.7s, more than the 5+1 
> seconds. One way to improve this:
> * Increase the time of {{dfs.namenode.replication.interval}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528092#comment-15528092
 ] 

Hadoop QA commented on HDFS-10913:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 74 unchanged - 1 fixed = 79 total (was 75) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10913 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830596/HDFS-10913.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 74292d0fb7db 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16891/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16891/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16891/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16891/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor BlockReceiver by introducing faults injector to 

[jira] [Updated] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI

2016-09-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10918:
-
Attachment: HDFS-10918.01.patch

Attaching a patch to express the idea.

Some explanations:
- Clients can already get {{FileEncryptionInfo}} from the {{DFSClient}} code; 
it is just not convenient from the CLI. So adding such a tool provides some 
convenience, with no security drawback.
- Anyone who can read the file should be able to get the feinfo, to be 
consistent with client behavior.
- Added a {{toStringStable}} to feinfo, similar to what was done in HDFS-9732.
- Can't seem to add permission-related tests to {{TestCryptoAdminCLI}}, since 
it always runs as superuser. Manually tested, though.

> Add a tool to get FileEncryptionInfo from CLI
> -
>
> Key: HDFS-10918
> URL: https://issues.apache.org/jira/browse/HDFS-10918
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10918.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI

2016-09-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10918:
-
Status: Patch Available  (was: Open)

> Add a tool to get FileEncryptionInfo from CLI
> -
>
> Key: HDFS-10918
> URL: https://issues.apache.org/jira/browse/HDFS-10918
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10918.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528091#comment-15528091
 ] 

Hadoop QA commented on HDFS-10914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 4 
unchanged - 1 fixed = 4 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.mover.TestStorageMover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830592/hdfs-10914.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9823a5e74d5e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16890/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 

[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-09-27 Thread Fenghua Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528033#comment-15528033
 ] 

Fenghua Hu commented on HDFS-10690:
---

Looks like patch v6 hasn't been built and verified by Jenkins. Anything else I 
should do? [~xyao]

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch, 
> HDFS-10690.003.patch, HDFS-10690.004.patch, HDFS-10690.005.patch, 
> HDFS-10690.006.patch, ShortCircuitCache_LinkedMap.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new 
> TreeMap<>();
> TreeMap employs a red-black tree for sorting. This isn't an issue when using 
> a traditional HDD, but when using high-performance SSD/PCIe flash, the cost 
> of inserting/removing an entry becomes considerable.
> To mitigate this, we designed a new list-based structure for replica tracking.
> The list is a doubly-linked FIFO. The FIFO is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this, we introduce two references into the ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, no lookup is needed when removing a replica from the list; we 
> only need to modify its predecessor's and successor's references in the list.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as the storage medium.
> The original patch is against 2.6.4; I am now porting it to Hadoop trunk, and 
> the patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi

2016-09-27 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527991#comment-15527991
 ] 

Manoj Govindassamy commented on HDFS-9850:
--

Thanks a lot for the quick review.

> DiskBalancer : Explore removing references to FsVolumeSpi 
> --
>
> Key: HDFS-9850
> URL: https://issues.apache.org/jira/browse/HDFS-9850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9850.001.patch, HDFS-9850.002.patch, 
> HDFS-9850.003.patch, HDFS-9850.004.patch
>
>
> In HDFS-9671, [~arpitagarwal] commented that we should explore the 
> possibility of removing references to FsVolumeSpi at any point and only deal 
> with the storage ID. We are not sure if this is possible; this JIRA is to 
> explore whether that can be done without issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10918) Add a tool to get FileEncryptionInfo from CLI

2016-09-27 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-10918:


 Summary: Add a tool to get FileEncryptionInfo from CLI
 Key: HDFS-10918
 URL: https://issues.apache.org/jira/browse/HDFS-10918
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: encryption
Reporter: Xiao Chen
Assignee: Xiao Chen






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi

2016-09-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527986#comment-15527986
 ] 

Anu Engineer commented on HDFS-9850:


+1, pending Jenkins. The changes in the v4 patch look very good. Thanks for 
getting this done. I will commit this as soon as we have a Jenkins run.

> DiskBalancer : Explore removing references to FsVolumeSpi 
> --
>
> Key: HDFS-9850
> URL: https://issues.apache.org/jira/browse/HDFS-9850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9850.001.patch, HDFS-9850.002.patch, 
> HDFS-9850.003.patch, HDFS-9850.004.patch
>
>
> In HDFS-9671, [~arpitagarwal] commented that we should explore the 
> possibility of removing references to FsVolumeSpi at any point and only deal 
> with the storage ID. We are not sure if this is possible; this JIRA is to 
> explore whether that can be done without issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10892:
-
Attachment: HDFS-10892.005.patch

> Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'
> ---
>
> Key: HDFS-10892
> URL: https://issues.apache.org/jira/browse/HDFS-10892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, shell, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10892.000.patch, HDFS-10892.001.patch, 
> HDFS-10892.002.patch, HDFS-10892.003.patch, HDFS-10892.004.patch, 
> HDFS-10892.005.patch
>
>
> I did not find unit tests in the {{trunk}} code for the following cases:
> - HDFS command {{dfs -tail}}
> - HDFS command {{dfs -stat}}
> I think it still merits having one, though the commands have served us for 
> years.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527963#comment-15527963
 ] 

Mingliang Liu commented on HDFS-10915:
--

+1 pending on Jenkins.

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}
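Since {{elapsed}} is computed from {{System.currentTimeMillis()}}, the exception message should say "milliseconds", not "seconds". A hedged sketch of the corrected check — the 5000/10000 ms bounds are assumptions for this example, not necessarily the exact values in the test:

```java
import java.io.IOException;

class ElapsedCheck {
    // elapsedMillis comes from System.currentTimeMillis() differences, so the
    // message reports milliseconds. Bounds (5000-10000 ms) are illustrative.
    static void check(long elapsedMillis) throws IOException {
        // timers have at-least semantics, so expect at least ~5 seconds
        if (elapsedMillis < 5000 || elapsedMillis > 10000) {
            throw new IOException(elapsedMillis + " milliseconds passed.");
        }
    }
}
```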



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: (was: HDFS-10915.001.patch)

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: HDFS-10915.001.patch

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Assigned] (HDFS-10910) HDFS Erasure Coding doc should state its currently supported erasure coding policies

2016-09-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDFS-10910:


Assignee: Yiqun Lin

> HDFS Erasure Coding doc should state its currently supported erasure coding 
> policies
> 
>
> Key: HDFS-10910
> URL: https://issues.apache.org/jira/browse/HDFS-10910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
>
> While HDFS Erasure Coding doc states a variety of possible combinations of 
> algorithms, block group size and cell size, the code (as of 3.0.0-alpha1) 
> allows only three policies: RS_6_3_SCHEMA, RS_3_2_SCHEMA and 
> RS_6_3_LEGACY_SCHEMA. All with default cell size. I think this should be 
> documented.






[jira] [Commented] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527938#comment-15527938
 ] 

Xiaobing Zhou commented on HDFS-10915:
--

Thanks [~liuml07], that is good input. I posted the v001 patch to address it.

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: HDFS-10915.001.patch

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch, HDFS-10915.001.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: HDFS-10913.001.patch

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: (was: HDFS-10915.000.patch)

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Updated] (HDFS-9850) DiskBalancer : Explore removing references to FsVolumeSpi

2016-09-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-9850:
-
Attachment: HDFS-9850.004.patch

Attaching the v04 patch to address the following. [~anu], kindly take a look.
* Restored the old behavior of getQueryStatus, which returns volume paths even if 
the volume has been removed. Changed VolumePair to hold the volume paths.
* Updated the expectation in TestDiskBalancer accordingly. The test no longer 
expects a DiskBalancerException and instead waits for the status to reach 
PLAN_DONE.



> DiskBalancer : Explore removing references to FsVolumeSpi 
> --
>
> Key: HDFS-9850
> URL: https://issues.apache.org/jira/browse/HDFS-9850
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha2
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9850.001.patch, HDFS-9850.002.patch, 
> HDFS-9850.003.patch, HDFS-9850.004.patch
>
>
> In HDFS-9671, [~arpitagarwal] commented that we should explore the 
> possibility of removing references to FsVolumeSpi at any point and only deal 
> with storage ID. We are not sure if this is possible, this JIRA is to explore 
> if that can be done without issues.






[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: HDFS-10915.000.patch

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Commented] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527928#comment-15527928
 ] 

Hadoop QA commented on HDFS-10892:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 179 unchanged - 0 fixed = 180 total (was 179) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10892 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830586/HDFS-10892.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7e29a1340365 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16889/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16889/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16889/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16889/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add unit tests for HDFS command 'dfs -tail' 

[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: (was: HDFS-10913.001.patch)

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Commented] (HDFS-10916) Switch from "raw" to "system" namespace for erasure coding policy

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527909#comment-15527909
 ] 

Hadoop QA commented on HDFS-10916:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830588/HDFS-10916.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e179797de6bc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16888/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16888/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16888/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |





> Switch from "raw" to "system" namespace for erasure coding policy
> -
>
> Key: HDFS-10916
> URL: 

[jira] [Updated] (HDFS-10915) Fix time measurement bug in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10915:
-
Summary: Fix time measurement bug in 
TestDatanodeRestart#testWaitForRegistrationOnRestart  (was: fix typo in 
TestDatanodeRestart#testWaitForRegistrationOnRestart)

> Fix time measurement bug in 
> TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527890#comment-15527890
 ] 

Konstantin Shvachko commented on HDFS-10301:


??removing single-rpc FBRs.??
Daryn, do you suggest removing it in this jira? I thought we could do it in a 
different one.

??The race to consider is apparently a BR for a new volume is processed prior 
to receiving/processing a heartbeat which includes a storage report for the new 
volume.??
I don't see a race here. A new storage is created via 
{{DatanodeDescriptor.updateStorage()}}, which is invoked whenever a new storage 
is reported. A new storage can be reported via a heartbeat, an IBR, or an FBR. 
What am I missing?

??The default 10 minute lag is concerning.??
The patch does not change the storage removal logic. If that is a concern, it 
applies with or without the patch, since this is how it works right now. I agree 
that storage failures are more likely during rolling upgrades and durability can 
be an issue, but shouldn't that be addressed in a different jira?

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When NameNode is busy a DataNode can timeout sending a block report. Then it 
> sends the block report again. Then NameNode while process these two reports 
> at the same time can interleave processing storages from different reports. 
> This screws up the blockReportId field, which makes NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.






[jira] [Commented] (HDFS-10915) fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527878#comment-15527878
 ] 

Mingliang Liu commented on HDFS-10915:
--

Thanks for the patch.

Can you also replace the code that uses {{System.currentTimeMillis()}} with 
{{System.nanoTime()}}?

The reason is that the granularity of {{System.currentTimeMillis()}} depends on 
the implementation and on the OS, and is usually around 10 ms (up to 100 ms on 
some platforms). {{System.nanoTime()}}, by contrast, returns the current value 
of the most precise available system timer (in nanoseconds), which makes it 
well suited to measuring elapsed time. Specifically, in Hadoop we can use 
{{org.apache.hadoop.util.Time#monotonicNow()}} to get the start/end times.
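
The distinction is easy to demonstrate with a minimal, self-contained JDK 
sketch (in Hadoop itself, {{Time#monotonicNow()}} would replace the 
{{System.nanoTime()}} arithmetic below): the monotonic timer is immune to 
wall-clock adjustments, so the computed interval is always a true elapsed time 
reported in milliseconds.

```java
import java.util.concurrent.TimeUnit;

public class ElapsedTimeDemo {
    public static void main(String[] args) throws InterruptedException {
        // System.nanoTime() is monotonic: unaffected by NTP or manual clock
        // changes, unlike System.currentTimeMillis().
        long start = System.nanoTime();
        Thread.sleep(50); // stand-in for the operation being timed
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        // Thread.sleep has at-least semantics, so roughly 50 ms should elapse;
        // a looser bound keeps the check robust to timer granularity.
        if (elapsedMs < 40) {
            throw new IllegalStateException(elapsedMs + " ms elapsed, expected >= 40");
        }
        System.out.println("elapsed at least 40 ms");
    }
}
```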

> fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Updated] (HDFS-10917) Maintain various peer performance statistics on DataNode

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10917:
-
Description: DataNodes already detect slow replication pipeline operations 
and log warnings. For analysis purposes, performance metrics are desirable, so 
this JIRA proposes adding them to DataNodes. These performance statistics can be 
aggregated and reported to the NameNode as part of the heartbeat message.
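
Per-peer aggregation of this kind can be sketched as below. The class and 
metric names here are illustrative assumptions, not the actual HDFS-10917 
design: each DataNode keeps an incremental running statistic per downstream 
peer and periodically takes a snapshot that a heartbeat could carry.

```java
import java.util.HashMap;
import java.util.Map;

public class PeerStatsSketch {
    // Running count/mean of a per-peer latency metric (illustrative).
    static class RunningStat {
        long count;
        double mean;
        void add(double sampleMs) {
            count++;
            mean += (sampleMs - mean) / count; // incremental (Welford-style) mean
        }
    }

    private final Map<String, RunningStat> perPeer = new HashMap<>();

    void record(String peer, double latencyMs) {
        perPeer.computeIfAbsent(peer, p -> new RunningStat()).add(latencyMs);
    }

    // Summary a heartbeat could report: peer -> mean latency in ms.
    Map<String, Double> snapshot() {
        Map<String, Double> out = new HashMap<>();
        perPeer.forEach((peer, s) -> out.put(peer, s.mean));
        return out;
    }

    public static void main(String[] args) {
        PeerStatsSketch stats = new PeerStatsSketch();
        stats.record("dn1:9866", 10);
        stats.record("dn1:9866", 30);
        stats.record("dn2:9866", 5);
        System.out.println(stats.snapshot().get("dn1:9866")); // mean of 10 and 30
    }
}
```

The incremental mean keeps per-sample cost constant, which matters on the hot 
pipeline path.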

> Maintain various peer performance statistics on DataNode
> 
>
> Key: HDFS-10917
> URL: https://issues.apache.org/jira/browse/HDFS-10917
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> DataNodes already detect if replication pipeline operations are slow and log 
> warnings. For the purpose of analysis, performance metrics are desirable. 
> This proposes adding them on DataNodes. These performance related statistics 
> can be aggregated and reported to NameNode as part of heartbeat message.






[jira] [Created] (HDFS-10917) Maintain various peer performance statistics on DataNode

2016-09-27 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10917:


 Summary: Maintain various peer performance statistics on DataNode
 Key: HDFS-10917
 URL: https://issues.apache.org/jira/browse/HDFS-10917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.8.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou









[jira] [Updated] (HDFS-10915) fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10915:
-
   Priority: Minor  (was: Major)
Component/s: test

> fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10915.000.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Updated] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10913:
-
Status: Patch Available  (was: Open)

> Refactor BlockReceiver by introducing faults injector to enhance testability 
> of detecting slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as the threshold to detect 
> slow mirrors, but BlockReceiver only writes warning logs when it is exceeded. 
> To better test the behavior of slow mirrors, fault injectors need to be 
> introduced.






[jira] [Commented] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors

2016-09-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527831#comment-15527831
 ] 

Xiaobing Zhou commented on HDFS-10913:
--

I posted the v000 patch; please kindly review.
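
The fault-injector pattern being proposed can be sketched roughly as follows. 
The class and method names are assumptions for illustration, not the actual 
patch: production code invokes a no-op hook through a static injector, and a 
test swaps in an implementation that simulates a slow mirror, which the timing 
check then detects.

```java
public class FaultInjectorSketch {
    // No-op in production; tests replace the instance to inject delays.
    static class DataNodeFaultInjector {
        private static DataNodeFaultInjector instance = new DataNodeFaultInjector();
        static DataNodeFaultInjector get() { return instance; }
        static void set(DataNodeFaultInjector injector) { instance = injector; }
        void delayWhenMirroringPacket() throws InterruptedException { /* no-op */ }
    }

    // Stand-in for the BlockReceiver write path: times the (possibly injected) delay.
    static long mirrorPacketMs() throws InterruptedException {
        long start = System.nanoTime();
        DataNodeFaultInjector.get().delayWhenMirroringPacket();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long thresholdMs = 10; // plays the role of datanodeSlowLogThresholdMs
        DataNodeFaultInjector.set(new DataNodeFaultInjector() {
            @Override
            void delayWhenMirroringPacket() throws InterruptedException {
                Thread.sleep(30); // simulated slow mirror
            }
        });
        long elapsed = mirrorPacketMs();
        System.out.println(elapsed > thresholdMs ? "slow mirror detected" : "fast path");
    }
}
```

The test exercises the slow path deterministically instead of relying on a 
genuinely slow network peer.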

> Refactor BlockReceiver by introducing faults injector to enhance testability 
> of detecting slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as the threshold to detect 
> slow mirrors, but BlockReceiver only writes warning logs when it is exceeded. 
> To better test the behavior of slow mirrors, fault injectors need to be 
> introduced.






[jira] [Updated] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10913:
-
Attachment: HDFS-10913.000.patch







[jira] [Commented] (HDFS-10915) fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527820#comment-15527820
 ] 

Hadoop QA commented on HDFS-10915:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830584/HDFS-10915.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ffb6991bbd4a 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2acfb1e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16887/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16887/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16887/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: 

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527821#comment-15527821
 ] 

Konstantin Shvachko commented on HDFS-10301:


Talking to Arpit, I think I understood the problem with the {{checkLease()}} 
change in the patch: it would allow DNs to send BRs without first obtaining a 
lease from the NN.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report, so 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave the processing of storages from 
> different reports. This corrupts the blockReportId field, which makes the 
> NameNode think that some storages are zombies. Replicas from zombie storages 
> are immediately removed, causing missing blocks.






[jira] [Commented] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client

2016-09-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527772#comment-15527772
 ] 

Lei (Eddy) Xu commented on HDFS-10914:
--

The patch is mostly moving code around. +1 pending jenkins

Thanks [~andrew.wang]

> Move remnants of oah.hdfs.client to hadoop-hdfs-client
> --
>
> Key: HDFS-10914
> URL: https://issues.apache.org/jira/browse/HDFS-10914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-10914.001.patch, hdfs-10914.002.patch
>
>
> Some remaining classes in the oah.hdfs.client package are still in 
> hadoop-hdfs rather than hadoop-hdfs-client.
> This broke a client that depended on hadoop-client for HdfsAdmin. 
> hadoop-client now pulls in hadoop-hdfs-client rather than hadoop-hdfs, 
> meaning it lost access to HdfsAdmin.






[jira] [Updated] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10914:
---
Attachment: hdfs-10914.002.patch

New rev attached. Apparently checkstyle doesn't like those new imports, but it 
does want package-info.java files.

Whitespace is a pre-existing issue, since I'm just renaming files. I can apply 
--whitespace=fix at commit time though.







[jira] [Comment Edited] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'

2016-09-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527755#comment-15527755
 ] 

Allen Wittenauer edited comment on HDFS-10892 at 9/27/16 11:23 PM:
---

The Dockerfile that ships with Hadoop does not force a UTF-8 locale.  So it 
will inherit whatever locale was running at the UNIX level. (This, BTW, is 
intentional.)  Apache Yetus will only force the locale to be a UTF-8 variant if 
shellcheck is involved.

I'd recommend that if the test needs UTF-8, then the test should set the locale 
it needs and then restore.  If setting the UTF-8 locale fails, then the test 
should be skipped. There's no guarantee that a user running the test is even 
UTF-8 capable.


was (Author: aw):
The Dockerfile that ships with Hadoop does not force a UTF-8 locale.  So it 
will inherit whatever locale was running at the UNIX level.  Apache Yetus will 
only force the locale to be a UTF-8 variant if shellcheck is involved.

I'd recommend that if the test needs UTF-8, then the test should set the locale 
it needs and then restore.  If setting the UTF-8 locale fails, then the test 
should be skipped. There's no guarantee that a user running the test is even 
UTF-8 capable.
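The set-locale-then-restore discipline recommended above can be sketched with the plain JDK. `runIfUtf8` and all other names here are hypothetical, and the Hadoop test harness is deliberately left out:

```java
// Sketch of the recommended test discipline: check UTF-8 capability, skip the
// test when it is absent, and always restore the original locale afterwards.
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Locale;

public class Utf8LocaleGuard {

    /** Runs the body only when the JVM default charset is UTF-8; returns
     *  whether it ran. The default locale is restored in all cases. */
    public static boolean runIfUtf8(Runnable testBody) {
        if (!Charset.defaultCharset().equals(StandardCharsets.UTF_8)) {
            // The JVM's file.encoding is fixed at startup, so the safe option
            // is to skip rather than force a locale the host may lack.
            System.out.println("SKIP: default charset is " + Charset.defaultCharset());
            return false;
        }
        Locale saved = Locale.getDefault();
        try {
            Locale.setDefault(Locale.ROOT);  // locale-neutral behavior for the test
            testBody.run();
            return true;
        } finally {
            Locale.setDefault(saved);  // restore even if the body throws
        }
    }

    public static void main(String[] args) {
        Locale before = Locale.getDefault();
        runIfUtf8(() -> System.out.println("UTF-8-dependent assertions run here"));
        System.out.println("locale restored: " + before.equals(Locale.getDefault()));
    }
}
```

Skipping rather than failing matches the point above: there is no guarantee the environment running the test is UTF-8 capable.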

> Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'
> ---
>
> Key: HDFS-10892
> URL: https://issues.apache.org/jira/browse/HDFS-10892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, shell, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10892.000.patch, HDFS-10892.001.patch, 
> HDFS-10892.002.patch, HDFS-10892.003.patch, HDFS-10892.004.patch
>
>
> I did not find unit test in {{trunk}} code for following cases:
> - HDFS command {{dfs -tail}}
> - HDFS command {{dfs -stat}}
> I think it still merits to have one though the commands have served us for 
> years.






[jira] [Commented] (HDFS-10376) Enhance setOwner testing

2016-09-27 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527757#comment-15527757
 ] 

John Zhuge commented on HDFS-10376:
---

Thanks [~yzhangal] for the review and commit!

> Enhance setOwner testing
> 
>
> Key: HDFS-10376
> URL: https://issues.apache.org/jira/browse/HDFS-10376
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10376.001.patch
>
>
> TestPermission creates a user with the following name and group:
> {code}
>  final private static String USER_NAME = "user" + RAN.nextInt();
>  final private static String[] GROUP_NAMES = {"group1", "group2"};
>UserGroupInformation userGroupInfo = 
> UserGroupInformation.createUserForTesting(USER_NAME, GROUP_NAMES );
>   
>   FileSystem userfs = DFSTestUtil.getFileSystemAs(userGroupInfo, conf);
>   // make sure mkdir of a existing directory that is not owned by 
>   // this user does not throw an exception.
>   userfs.mkdirs(CHILD_DIR1);
>   
> {code}
> Supposedly 
> {code}
>  userfs.setOwner(CHILD_FILE3, "foo", "bar");
> {code}
> will be run as the specified user, but it actually seems to run as the user 
> executing the test.
> Running as the specified user would disallow setOwner, which requires 
> superuser privilege. This is not happening.
> Creating this jira for some investigation to understand whether it's indeed 
> an issue.
> Thanks.
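A minimal, Hadoop-free sketch of the doAs pattern at issue: a bare call keeps the process identity, while a wrapped call runs as the test user. `runAs` and the user strings are illustrative stand-ins for what `UserGroupInformation.doAs` provides:

```java
// Illustration (not Hadoop code) of why a setOwner-style call may not run as
// the created test user: work only runs under the test identity when wrapped.
public class DoAsSketch {
    private static final ThreadLocal<String> CURRENT =
        ThreadLocal.withInitial(() -> "process-user");

    /** Runs the action under the given identity, restoring the old one after. */
    static <T> T runAs(String user, java.util.function.Supplier<T> action) {
        String saved = CURRENT.get();
        CURRENT.set(user);
        try { return action.get(); } finally { CURRENT.set(saved); }
    }

    /** Stand-in for a permission-checked call such as setOwner(). */
    static String whoCalledSetOwner() { return CURRENT.get(); }

    public static void main(String[] args) {
        // Bare call: executes as the process user -- the suspected situation
        // in TestPermission, where setOwner is not wrapped in doAs.
        System.out.println("bare: " + whoCalledSetOwner());
        // Wrapped call: executes as the created test user.
        System.out.println("wrapped: " + runAs("test-user", DoAsSketch::whoCalledSetOwner));
    }
}
```

If the test intends setOwner to be rejected for lack of superuser privilege, the call has to happen inside the wrapped (doAs-style) path, not on the bare one.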






[jira] [Commented] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'

2016-09-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527755#comment-15527755
 ] 

Allen Wittenauer commented on HDFS-10892:
-

The Dockerfile that ships with Hadoop does not force a UTF-8 locale.  So it 
will inherit whatever locale was running at the UNIX level.  Apache Yetus will 
only force the locale to be a UTF-8 variant if shellcheck is involved.

I'd recommend that if the test needs UTF-8, then the test should set the locale 
it needs and then restore.  If setting the UTF-8 locale fails, then the test 
should be skipped. There's no guarantee that a user running the test is even 
UTF-8 capable.







[jira] [Commented] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527730#comment-15527730
 ] 

Hadoop QA commented on HDFS-10914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
4 unchanged - 1 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 
36s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830576/hdfs-10914.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1e87ec4824bc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1831be8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16886/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16886/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-10376) Enhance setOwner testing

2016-09-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527722#comment-15527722
 ] 

Yongjun Zhang commented on HDFS-10376:
--

Committed to trunk. Thanks [~jzhuge] for the contribution.








[jira] [Updated] (HDFS-10531) Add EC policy and storage policy related usage summarization function to dfs du command

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10531:
---
Affects Version/s: 3.0.0-alpha1
 Target Version/s: 3.0.0-alpha2

> Add EC policy and storage policy related usage summarization function to dfs 
> du command
> ---
>
> Key: HDFS-10531
> URL: https://issues.apache.org/jira/browse/HDFS-10531
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Rui Gao
>Assignee: Wei-Chiu Chuang
>
> Currently du command output:
> {code}
> [ ~]$ hdfs dfs -du  -h /home/rgao/
> 0  /home/rgao/.Trash
> 0  /home/rgao/.staging
> 100 M  /home/rgao/ds
> 250 M  /home/rgao/ds-2
> 200 M  /home/rgao/noECBackup-ds
> 500 M  /home/rgao/noECBackup-ds-2
> {code}
> For HDFS users and administrators, EC-policy- and storage-policy-related usage 
> summarization would be very helpful when managing a cluster's storage. The 
> intended output of du could look like the following.
> {code}
> [ ~]$ hdfs dfs -du  -h -t( total, parameter to be added) /home/rgao
>  
> 0  /home/rgao/.Trash
> 0  /home/rgao/.staging
> [Archive] [EC:RS-DEFAULT-6-3-64k] 100 M  /home/rgao/ds
> [DISK] [EC:RS-DEFAULT-6-3-64k] 250 M  /home/rgao/ds-2
> [DISK] [Replica] 200 M  /home/rgao/noECBackup-ds
> [DISK] [Replica] 500 M  /home/rgao/noECBackup-ds-2
>  
> Total:
>  
> [Archive][EC:RS-DEFAULT-6-3-64k]  100 M
> [Archive][Replica]0 M
> [DISK] [EC:RS-DEFAULT-6-3-64k] 250 M
> [DISK] [Replica]   700 M  
>  
> [Archive][ALL] 100M
> [DISK][ALL]  950M
> [ALL] [EC:RS-DEFAULT-6-3-64k]350M
> [ALL] [Replica]  700M
> {code} 
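A hedged sketch of the proposed totals computation: sum per-path usage into (storage policy, redundancy) buckets plus the [ALL] roll-ups. The class name and row layout are invented for illustration; the sample numbers mirror the example above:

```java
// Hypothetical sketch of the "-t" aggregation described in the proposal.
import java.util.LinkedHashMap;
import java.util.Map;

public class DuTotalsSketch {
    /** rows: {storagePolicy, redundancyForm, megabytes} */
    public static Map<String, Long> totals(String[][] rows) {
        Map<String, Long> out = new LinkedHashMap<>();
        for (String[] r : rows) {
            long mb = Long.parseLong(r[2]);
            out.merge("[" + r[0] + "][" + r[1] + "]", mb, Long::sum); // exact bucket
            out.merge("[" + r[0] + "][ALL]", mb, Long::sum);          // per-policy roll-up
            out.merge("[ALL][" + r[1] + "]", mb, Long::sum);          // per-redundancy roll-up
        }
        return out;
    }

    public static void main(String[] args) {
        String[][] rows = {
            {"Archive", "EC:RS-DEFAULT-6-3-64k", "100"},
            {"DISK", "EC:RS-DEFAULT-6-3-64k", "250"},
            {"DISK", "Replica", "200"},
            {"DISK", "Replica", "500"},
        };
        totals(rows).forEach((k, v) -> System.out.println(k + " " + v + " M"));
    }
}
```

Running this over the four sample paths reproduces the totals in the mock-up, e.g. [DISK][ALL] 950 M and [ALL][EC:RS-DEFAULT-6-3-64k] 350 M.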






[jira] [Updated] (HDFS-10376) Enhance setOwner testing

2016-09-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10376:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)







[jira] [Commented] (HDFS-10893) Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527693#comment-15527693
 ] 

Mingliang Liu commented on HDFS-10893:
--

Test failure is not related. Specifically, [HDFS-10426] was supposed to have 
addressed it.

> Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test
> 
>
> Key: HDFS-10893
> URL: https://issues.apache.org/jira/browse/HDFS-10893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10893.000.patch
>
>
> It seems that setting up MiniDFSCluster once for all command tests will reduce 
> the total time. To share a global cluster, the tests should use individual 
> test directories to avoid conflicts between test cases. Meanwhile, the 
> MiniDFSCluster should not use the default root data directory; otherwise tests 
> are not able to launch additional cluster(s) by default.
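The shared-fixture idea can be sketched without MiniDFSCluster itself: create one expensive base resource once and give each test its own subdirectory so tests cannot collide. All names here are illustrative, with a temp directory standing in for the cluster:

```java
// Sketch of the refactor: one shared fixture, per-test working directories.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SharedFixtureSketch {
    private static Path sharedBase;  // created once, like a @BeforeClass cluster

    /** One-time setup, analogous to starting a single MiniDFSCluster. */
    public static void setUpOnce() throws IOException {
        // A non-default base also avoids clashing with any other cluster's
        // default root data directory.
        sharedBase = Files.createTempDirectory("mini-cluster-");
    }

    /** Each test works under its own directory, named after the test. */
    public static Path dirFor(String testName) throws IOException {
        return Files.createDirectories(sharedBase.resolve(testName));
    }

    public static void main(String[] args) throws IOException {
        setUpOnce();
        Path a = dirFor("testTail");
        Path b = dirFor("testStat");
        System.out.println("distinct=" + !a.equals(b));
        System.out.println("sameBase=" + a.getParent().equals(b.getParent()));
    }
}
```

The per-test directories are what make the single shared cluster safe: tests stay isolated by path rather than by owning their own cluster.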






[jira] [Commented] (HDFS-10426) TestPendingInvalidateBlock failed in trunk

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527691#comment-15527691
 ] 

Mingliang Liu commented on HDFS-10426:
--

Is 
https://builds.apache.org/job/PreCommit-HDFS-Build/16885/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestPendingInvalidateBlock/testPendingDeletion/
 a related build that still fails with this patch? Thanks.

> TestPendingInvalidateBlock failed in trunk
> --
>
> Key: HDFS-10426
> URL: https://issues.apache.org/jira/browse/HDFS-10426
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10426.001.patch, HDFS-10426.002.patch, 
> HDFS-10426.003.patch, HDFS-10426.004.patch, HDFS-10426.005.patch, 
> HDFS-10426.006.patch
>
>
> The test {{TestPendingInvalidateBlock}} fails intermittently. The stack info:
> {code}
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
> testPendingDeletion(org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock)
>   Time elapsed: 7.703 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)
> {code}
> It looks like the {{invalidateBlock}} has been removed before we do the check:
> {code}
> // restart NN
> cluster.restartNameNode(true);
> dfs.delete(foo, true);
> Assert.assertEquals(0, cluster.getNamesystem().getBlocksTotal());
> Assert.assertEquals(REPLICATION, cluster.getNamesystem()
> .getPendingDeletionBlocks());
> Assert.assertEquals(REPLICATION,
> dfs.getPendingDeletionBlocksCount());
> {code}
> I looked into the related configurations and found that the property 
> {{dfs.namenode.replication.interval}} is set to just 1 second in this test. 
> If the delete operation runs slowly after the delay of 
> {{dfs.namenode.startup.delay.block.deletion.sec}}, this failure occurs. As the 
> stack info above shows, the failed test took 7.7s, more than the allowed 5+1 
> seconds.
> One way to improve this:
> * Increase {{dfs.namenode.replication.interval}}






[jira] [Commented] (HDFS-10376) Enhance setOwner testing

2016-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527683#comment-15527683
 ] 

Hudson commented on HDFS-10376:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10501 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10501/])
HDFS-10376. Enhance setOwner testing. (John Zhuge via Yongjun Zhang) (yzhang: 
rev 2acfb1e1e4355246ef707b7c17964871b5dc7a73)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermission.java


> Enhance setOwner testing
> 
>
> Key: HDFS-10376
> URL: https://issues.apache.org/jira/browse/HDFS-10376
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
> Attachments: HDFS-10376.001.patch
>
>
> TestPermission create a user with the following name and group:
> {code}
>  final private static String USER_NAME = "user" + RAN.nextInt();
>  final private static String[] GROUP_NAMES = {"group1", "group2"};
>UserGroupInformation userGroupInfo = 
> UserGroupInformation.createUserForTesting(USER_NAME, GROUP_NAMES );
>   
>   FileSystem userfs = DFSTestUtil.getFileSystemAs(userGroupInfo, conf);
>   // make sure mkdir of an existing directory that is not owned by 
>   // this user does not throw an exception.
>   userfs.mkdirs(CHILD_DIR1);
>   
> {code}
> Supposedly 
> {code}
>  userfs.setOwner(CHILD_FILE3, "foo", "bar");
> {code}
> will be run as the specified user, but it seems to be run as me, who ran the 
> test.
> Running as the specified user would disallow setOwner, which requires 
> superuser privilege. This is not happening.
> Creating this jira for some investigation to understand whether it's indeed 
> an issue.
> Thanks.






[jira] [Updated] (HDFS-10650) DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory permission

2016-09-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10650:
-
Release Note: If the caller does not supply a permission, DFSClient#mkdirs 
and DFSClient#primitiveMkdir will create a new directory with the default 
directory permission 00777 now, instead of 00666.  (was: If the caller does not 
supply a permission, DFSClient#mkdirs and DFSClient#primitiveMkdir will create 
a new directory with the default directory permission 00777 instead of 00666.)

> DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory 
> permission
> -
>
> Key: HDFS-10650
> URL: https://issues.apache.org/jira/browse/HDFS-10650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10650.001.patch, HDFS-10650.002.patch
>
>
> These two DFSClient methods should use the default directory permission to 
> create a directory.
> {code:java}
>   public boolean mkdirs(String src, FsPermission permission,
>   boolean createParent) throws IOException {
> if (permission == null) {
>   permission = FsPermission.getDefault();
> }
> {code}
> {code:java}
>   public boolean primitiveMkdir(String src, FsPermission absPermission, 
> boolean createParent)
> throws IOException {
> checkOpen();
> if (absPermission == null) {
>   absPermission = 
> FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
> } 
> {code}
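For illustration, applying a umask to the directory and file defaults (0777 and 0666) works out as below. This is a minimal sketch of the `mode & ~umask` semantics behind {{FsPermission#applyUMask}}, not the Hadoop implementation itself; the class and method names are made up for the example.

```java
public class DefaultPermission {
    // Apply a umask to a base mode the way applyUMask does: mode & ~umask.
    static int applyUmask(int mode, int umask) {
        return mode & ~umask;
    }

    public static void main(String[] args) {
        int dirDefault = 0777;   // default directory permission
        int fileDefault = 0666;  // default file permission
        int umask = 022;         // a typical fs.permissions.umask-mode
        System.out.printf("dir:  %o%n", applyUmask(dirDefault, umask));   // 755
        System.out.printf("file: %o%n", applyUmask(fileDefault, umask));  // 644
    }
}
```

Starting from 0666 instead of 0777 is exactly why the created directories ended up without execute bits, which is what this issue fixes.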






[jira] [Commented] (HDFS-10893) Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527651#comment-15527651
 ] 

Hadoop QA commented on HDFS-10893:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 40 new + 115 unchanged - 64 fixed = 155 total (was 179) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830573/HDFS-10893.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 12332907450b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1831be8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16885/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16885/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16885/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16885/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test
> 

[jira] [Updated] (HDFS-10916) Switch from "raw" to "system" namespace for erasure coding policy

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10916:
---
Release Note: 
EC policy is now stored in the "system" extended attribute namespace rather 
than "raw". This means the EC policy extended attribute is no longer directly 
accessible by users or preserved across a distcp that preserves raw extended 
attributes.

Users can instead use HdfsAdmin#setErasureCodingPolicy and 
HdfsAdmin#getErasureCodingPolicy to set and get the EC policy for a path.
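As a rough sketch of how namespace-prefixed xattr names distinguish "raw" from "system" attributes (the concrete attribute names below are illustrative, not the exact internal HDFS names):

```java
import java.util.Locale;

public class XAttrNamespaces {
    // HDFS-style extended-attribute namespaces.
    enum Namespace { USER, TRUSTED, SYSTEM, SECURITY, RAW }

    // Parse the namespace prefix of a fully qualified xattr name,
    // e.g. "system.hdfs.erasurecoding.policy" -> SYSTEM.
    static Namespace namespaceOf(String xattr) {
        int dot = xattr.indexOf('.');
        if (dot < 0) {
            throw new IllegalArgumentException("no namespace prefix: " + xattr);
        }
        return Namespace.valueOf(xattr.substring(0, dot).toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        // After this change the EC policy attribute lives in SYSTEM, so it is
        // no longer exposed through the raw.* names that distcp can preserve.
        System.out.println(namespaceOf("system.hdfs.erasurecoding.policy"));
        System.out.println(namespaceOf("raw.hdfs.example.attribute"));
    }
}
```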

> Switch from "raw" to "system" namespace for erasure coding policy
> -
>
> Key: HDFS-10916
> URL: https://issues.apache.org/jira/browse/HDFS-10916
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10916.001.patch
>
>
> Currently EC policy is stored as in the raw xattr namespace. It would be 
> better to store this in "system" like storage policy.
> Raw is meant for attributes which need to be preserved across a distcp, like 
> encryption info. EC policy is more similar to replication factor or storage 
> policy, which can differ between the src and target of a distcp. 






[jira] [Updated] (HDFS-10916) Switch from "raw" to "system" namespace for erasure coding policy

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10916:
---
Status: Patch Available  (was: Open)

> Switch from "raw" to "system" namespace for erasure coding policy
> -
>
> Key: HDFS-10916
> URL: https://issues.apache.org/jira/browse/HDFS-10916
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10916.001.patch
>
>
> Currently EC policy is stored as in the raw xattr namespace. It would be 
> better to store this in "system" like storage policy.
> Raw is meant for attributes which need to be preserved across a distcp, like 
> encryption info. EC policy is more similar to replication factor or storage 
> policy, which can differ between the src and target of a distcp. 






[jira] [Created] (HDFS-10916) Switch from "raw" to "system" namespace for erasure coding policy

2016-09-27 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-10916:
--

 Summary: Switch from "raw" to "system" namespace for erasure 
coding policy
 Key: HDFS-10916
 URL: https://issues.apache.org/jira/browse/HDFS-10916
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang
Assignee: Andrew Wang


Currently EC policy is stored as in the raw xattr namespace. It would be better 
to store this in "system" like storage policy.

Raw is meant for attributes which need to be preserved across a distcp, like 
encryption info. EC policy is more similar to replication factor or storage 
policy, which can differ between the src and target of a distcp. 






[jira] [Updated] (HDFS-10916) Switch from "raw" to "system" namespace for erasure coding policy

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10916:
---
Attachment: HDFS-10916.001.patch

Trivial patch attached.

> Switch from "raw" to "system" namespace for erasure coding policy
> -
>
> Key: HDFS-10916
> URL: https://issues.apache.org/jira/browse/HDFS-10916
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10916.001.patch
>
>
> Currently EC policy is stored as in the raw xattr namespace. It would be 
> better to store this in "system" like storage policy.
> Raw is meant for attributes which need to be preserved across a distcp, like 
> encryption info. EC policy is more similar to replication factor or storage 
> policy, which can differ between the src and target of a distcp. 






[jira] [Comment Edited] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527560#comment-15527560
 ] 

Mingliang Liu edited comment on HDFS-10892 at 9/27/16 10:12 PM:


The {{hadoop.hdfs.TestDFSShell#testUtf8Encoding}} was not able to pass in 
Jenkins. The stack trace is as follows:
{quote}
Error Message

Malformed input or input contains unmappable characters: 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/RFrIWvn1nt/TestDFSShell/哈杜普.txt
Stacktrace

java.nio.file.InvalidPathException: Malformed input or input contains 
unmappable characters: 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/RFrIWvn1nt/TestDFSShell/哈杜普.txt
at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
at java.io.File.toPath(File.java:2234)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:662)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:673)
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:643)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:871)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:635)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:435)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:360)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2093)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2061)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2026)
at 
org.apache.hadoop.hdfs.TestDFSShell.testUtf8Encoding(TestDFSShell.java:3885)
{quote}

I was able to pass the test locally without any problem. According to 
discussions on the Internet, it perhaps has something to do with the {{LANG}} 
environment variable. I can confirm that my local test machine is using the 
following settings.
{code}
$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=
{code}

I also think Yetus is setting the locale correctly. [~aw] Do you have any idea 
about this? Thanks.
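A minimal, JDK-only way to see why a non-UTF-8 default charset breaks such paths: {{UnixPath.encode}} fails when the platform charset cannot encode the file name. The file name is the one from the test; the helper name is made up for the sketch.

```java
import java.nio.charset.Charset;

public class EncodingCheck {
    // A string can be used as a filename only if the platform
    // charset (derived from LANG/LC_* at JVM startup) can encode it.
    static boolean canEncode(String name, String charsetName) {
        return Charset.forName(charsetName).newEncoder().canEncode(name);
    }

    public static void main(String[] args) {
        String chineseName = "哈杜普.txt";
        System.out.println("default charset: " + Charset.defaultCharset());
        // UTF-8 can encode the name; US-ASCII (the POSIX/"C" locale
        // default) cannot, which triggers InvalidPathException.
        System.out.println("UTF-8:    " + canEncode(chineseName, "UTF-8"));
        System.out.println("US-ASCII: " + canEncode(chineseName, "US-ASCII"));
    }
}
```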

As this test case is not tightly related to the commands "-tail" or "-stat", I 
removed it in the latest patch. If anyone suggests a working approach, I'd like 
to submit a separate JIRA for tracking this test. Otherwise, we can test it 
elsewhere (say, in nightly system tests). Just for what it's worth, the test 
code is:
{code}
  /**
   * Test that the file name and content can have UTF-8 chars.
   */
  @Test (timeout = 3)
  public void testUtf8Encoding() throws Exception {
final int blockSize = 1024;
final Configuration conf = new HdfsConfiguration();
conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);

try (MiniDFSCluster cluster =
 new MiniDFSCluster.Builder(conf).numDataNodes(3).build()) {
  cluster.waitActive();
  final DistributedFileSystem dfs = cluster.getFileSystem();
  final Path workDir= new Path("/testUtf8Encoding");
  dfs.mkdirs(workDir);

  System.setProperty("sun.jnu.encoding", "UTF-8");
  System.setProperty("file.encoding", "UTF-8");
  final String chineseStr = "哈杜普.txt";
  final File testFile = new File(TEST_ROOT_DIR, chineseStr);
  // create a local file; its content contains the Chinese file name
  createLocalFile(testFile);
  dfs.copyFromLocalFile(new Path(testFile.getPath()), workDir);
  assertTrue(dfs.exists(new Path(workDir, testFile.getName())));

  final ByteArrayOutputStream out = new ByteArrayOutputStream();
  System.setOut(new PrintStream(out));

  final String argv[] = new String[]{
  "-cat", workDir + "/" + testFile.getName()};
  final int ret = ToolRunner.run(new FsShell(conf), argv);
  assertEquals(Arrays.toString(argv) + " returned non-zero status " + ret,
  0, ret);
  assertTrue("Unexpected -cat output: " + out,
  out.toString().contains(chineseStr));
}
  }
{code}


was (Author: liuml07):
The {{hadoop.hdfs.TestDFSShell#testUtf8Encoding}} was not able to run in 
Jenkins. I was able to pass the test locally without any problem. According to 
discussions on the Internet, it perhaps has something to do with the {{LANG}} 
environment variable. I can confirm that my local test machine is using the 
following settings. I also think Yetus is setting the locale correctly. [~aw] 
Do you have any idea about this? Thanks.
{code}
$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"

[jira] [Updated] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10892:
-
Attachment: HDFS-10892.004.patch

> Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'
> ---
>
> Key: HDFS-10892
> URL: https://issues.apache.org/jira/browse/HDFS-10892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, shell, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10892.000.patch, HDFS-10892.001.patch, 
> HDFS-10892.002.patch, HDFS-10892.003.patch, HDFS-10892.004.patch
>
>
> I did not find a unit test in {{trunk}} code for the following cases:
> - HDFS command {{dfs -tail}}
> - HDFS command {{dfs -stat}}
> I think it still merits to have one, though the commands have served us for 
> years.






[jira] [Updated] (HDFS-10892) Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10892:
-
Description: 
I did not find a unit test in {{trunk}} code for the following cases:
- HDFS command {{dfs -tail}}
- HDFS command {{dfs -stat}}

I think it still merits to have one, though the commands have served us for 
years.

  was:
I did not find a unit test in {{trunk}} code for the following cases:
- HDFS command {{dfs -tail}}
- HDFS command {{dfs -stat}}
- file name or content with UTF-8 characters

I think it still merits to have one, though the commands have served us for 
years.


The {{hadoop.hdfs.TestDFSShell#testUtf8Encoding}} was not able to run in 
Jenkins. I was able to pass the test locally without any problem. According to 
discussions on the Internet, it perhaps has something to do with the {{LANG}} 
environment variable. I can confirm that my local test machine is using the 
following settings. I also think Yetus is setting the locale correctly. [~aw] 
Do you have any idea about this? Thanks.
{code}
$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=
{code}

As this test case is not tightly related to the commands "-tail" or "-stat", I 
removed it in the latest patch. Just for what it's worth, the test 
code is:
{code}

  /**
   * Test that the file name and content can have UTF-8 chars.
   */
  @Test (timeout = 3)
  public void testUtf8Encoding() throws Exception {
final int blockSize = 1024;
final Configuration conf = new HdfsConfiguration();
conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);

try (MiniDFSCluster cluster =
 new MiniDFSCluster.Builder(conf).numDataNodes(3).build()) {
  cluster.waitActive();
  final DistributedFileSystem dfs = cluster.getFileSystem();
  final Path workDir= new Path("/testUtf8Encoding");
  dfs.mkdirs(workDir);

  System.setProperty("sun.jnu.encoding", "UTF-8");
  System.setProperty("file.encoding", "UTF-8");
  final String chineseStr = "哈杜普.txt";
  final File testFile = new File(TEST_ROOT_DIR, chineseStr);
  // create a local file; its content contains the Chinese file name
  createLocalFile(testFile);
  dfs.copyFromLocalFile(new Path(testFile.getPath()), workDir);
  assertTrue(dfs.exists(new Path(workDir, testFile.getName())));

  final ByteArrayOutputStream out = new ByteArrayOutputStream();
  System.setOut(new PrintStream(out));

  final String argv[] = new String[]{
  "-cat", workDir + "/" + testFile.getName()};
  final int ret = ToolRunner.run(new FsShell(conf), argv);
  assertEquals(Arrays.toString(argv) + " returned non-zero status " + ret,
  0, ret);
  assertTrue("Unexpected -cat output: " + out,
  out.toString().contains(chineseStr));
}
  }
{code}
If anyone suggests a working approach, I'd like to submit a separate JIRA for 
tracking this test. Otherwise, we can test it elsewhere (say, in nightly system 
tests).

> Add unit tests for HDFS command 'dfs -tail' and 'dfs -stat'
> ---
>
> Key: HDFS-10892
> URL: https://issues.apache.org/jira/browse/HDFS-10892
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, shell, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10892.000.patch, HDFS-10892.001.patch, 
> HDFS-10892.002.patch, HDFS-10892.003.patch
>
>
> I did not find unit test in {{trunk}} code for following cases:
> - HDFS command {{dfs -tail}}
> - HDFS command {{dfs -stat}}
> I think it still merits to have one though the commands have served us for 
> years.






[jira] [Updated] (HDFS-10915) fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: HDFS-10915.000.patch

> fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10915.000.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}
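The at-least timer semantics and the corrected message can be sketched in plain Java. The class and helper names are made up for the sketch, and a short sleep stands in for the test's 5-second restart wait (the upper bound in the quoted code is truncated, so it is left out here).

```java
public class AtLeastTimer {
    // elapsed is measured in milliseconds, so the message should say so;
    // the quoted test raised an IOException saying "seconds" instead.
    static String describe(long elapsedMillis) {
        return elapsedMillis + " milliseconds passed.";
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(50);  // short stand-in for the 5-second restart wait
        long elapsed = System.currentTimeMillis() - start;
        // Timers have at-least semantics: the sleep may overshoot the
        // requested duration, but it should not undershoot it.
        System.out.println(elapsed >= 50);
        System.out.println(describe(elapsed));
    }
}
```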






[jira] [Updated] (HDFS-10915) fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: (was: HDFS-10915.000.patch)

> fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10915.000.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Commented] (HDFS-10913) Refactor BlockReceiver by introducing faults injector to enhance testability of detecting slow mirrors

2016-09-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527499#comment-15527499
 ] 

Xiaobing Zhou commented on HDFS-10913:
--

[~jojochuang] thanks for the correction.

> Refactor BlockReceiver by introducing faults injector to enhance testability 
> of detecting slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as a threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. In order to better 
> test the behavior of slow mirrors, fault injectors need to be introduced.






[jira] [Updated] (HDFS-10915) fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Status: Patch Available  (was: Open)

> fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10915.000.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Updated] (HDFS-10915) fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10915:
-
Attachment: HDFS-10915.000.patch

v000 is posted, please kindly review it, thanks.

> fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart
> 
>
> Key: HDFS-10915
> URL: https://issues.apache.org/jira/browse/HDFS-10915
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10915.000.patch
>
>
> It should be milliseconds in the message of IOException.
> {code}
> } catch (org.apache.hadoop.ipc.RemoteException e) {
> long elapsed = System.currentTimeMillis() - start;
> // timers have at-least semantics, so it should be at least 5 seconds.
> if (elapsed < 5000 || elapsed > 1) {
>   throw new IOException(elapsed + " seconds passed.", e);
> }
>   }
> {code}






[jira] [Commented] (HDFS-10828) Fix usage of FsDatasetImpl object lock in ReplicaMap

2016-09-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527488#comment-15527488
 ] 

Kai Zheng commented on HDFS-10828:
--

Thanks [~arpiagariu]. The minor issue wasn't new, and I thought it could be 
fixed in HDFS-9668. Ping [~jingcheng...@intel.com] to take care of this.

> Fix usage of FsDatasetImpl object lock in ReplicaMap
> 
>
> Key: HDFS-10828
> URL: https://issues.apache.org/jira/browse/HDFS-10828
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10828.01.patch, HDFS-10828.02.patch, 
> HDFS-10828.03.patch
>
>
> HDFS-10682 replaced the FsDatasetImpl object lock with a separate reentrant 
> lock but missed updating one instance; ReplicaMap still uses the FsDatasetImpl 
> object lock.






[jira] [Created] (HDFS-10915) fix typo in TestDatanodeRestart#testWaitForRegistrationOnRestart

2016-09-27 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10915:


 Summary: fix typo in 
TestDatanodeRestart#testWaitForRegistrationOnRestart
 Key: HDFS-10915
 URL: https://issues.apache.org/jira/browse/HDFS-10915
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


It should be milliseconds in the message of IOException.

{code}
} catch (org.apache.hadoop.ipc.RemoteException e) {
long elapsed = System.currentTimeMillis() - start;
// timers have at-least semantics, so it should be at least 5 seconds.
if (elapsed < 5000 || elapsed > 1) {
  throw new IOException(elapsed + " seconds passed.", e);
}
  }
{code}







[jira] [Commented] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client

2016-09-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527481#comment-15527481
 ] 

Andrew Wang commented on HDFS-10914:


FWIW, verified that HdfsAdmin was present in the new JAR too:

{noformat}
-> % jar -tf 
./hadoop-hdfs-project/hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2-SNAPSHOT.jar
 | grep HdfsAdmin
org/apache/hadoop/hdfs/client/HdfsAdmin.class
{noformat}

> Move remnants of oah.hdfs.client to hadoop-hdfs-client
> --
>
> Key: HDFS-10914
> URL: https://issues.apache.org/jira/browse/HDFS-10914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-10914.001.patch
>
>
> Some remaining classes in the oah.hdfs.client package are still in 
> hadoop-hdfs rather than hadoop-hdfs-client.
> This broke a client that depended on hadoop-client for HdfsAdmin. 
> hadoop-client now pulls in hadoop-hdfs-client rather than hadoop-hdfs, 
> meaning it lost access to HdfsAdmin.






[jira] [Updated] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10914:
---
Status: Patch Available  (was: Open)

> Move remnants of oah.hdfs.client to hadoop-hdfs-client
> --
>
> Key: HDFS-10914
> URL: https://issues.apache.org/jira/browse/HDFS-10914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-10914.001.patch
>
>
> Some remaining classes in the oah.hdfs.client package are still in 
> hadoop-hdfs rather than hadoop-hdfs-client.
> This broke a client that depended on hadoop-client for HdfsAdmin. 
> hadoop-client now pulls in hadoop-hdfs-client rather than hadoop-hdfs, 
> meaning it lost access to HdfsAdmin.






[jira] [Updated] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client

2016-09-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10914:
---
Attachment: hdfs-10914.001.patch

Patch attached. I encourage reviewers to look at the "git diff -M" output to see 
the changes, listed here:

* Switched HdfsUtils to SLF4J to avoid an Apache Commons Logging dependency
* Switched to just closing the fs at the end of HdfsUtils, since it'll still 
log in DistributedFileSystem.
* Removed javadoc on DFSAdmin, since it depends on hadoop-hdfs
* Added imports to CreateEncryptionZoneFlag to fix javadoc links

> Move remnants of oah.hdfs.client to hadoop-hdfs-client
> --
>
> Key: HDFS-10914
> URL: https://issues.apache.org/jira/browse/HDFS-10914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-10914.001.patch
>
>
> Some remaining classes in the oah.hdfs.client package are still in 
> hadoop-hdfs rather than hadoop-hdfs-client.
> This broke a client that depended on hadoop-client for HdfsAdmin. 
> hadoop-client now pulls in hadoop-hdfs-client rather than hadoop-hdfs, 
> meaning it lost access to HdfsAdmin.






[jira] [Commented] (HDFS-10779) Rename does not need to re-solve destination

2016-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527471#comment-15527471
 ] 

Hadoop QA commented on HDFS-10779:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 227 unchanged - 2 fixed = 230 total (was 229) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.hdfs.TestCrcCorruption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10779 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830546/HDFS-10779.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0bd9a9575615 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1831be8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16884/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16884/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16884/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16884/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Created] (HDFS-10914) Move remnants of oah.hdfs.client to hadoop-hdfs-client

2016-09-27 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-10914:
--

 Summary: Move remnants of oah.hdfs.client to hadoop-hdfs-client
 Key: HDFS-10914
 URL: https://issues.apache.org/jira/browse/HDFS-10914
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Critical


Some remaining classes in the oah.hdfs.client package are still in hadoop-hdfs 
rather than hadoop-hdfs-client.

This broke a client that depended on hadoop-client for HdfsAdmin. hadoop-client 
now pulls in hadoop-hdfs-client rather than hadoop-hdfs, meaning it lost access 
to HdfsAdmin.






[jira] [Updated] (HDFS-10893) Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10893:
-
Description: It seems that setting up MiniDFSCluster once for all the command 
tests will reduce the total time. To share a global cluster, the tests should 
use individual test directories to avoid conflicts between test cases. 
Meanwhile, the MiniDFSCluster should not use the default root data directory; 
otherwise tests are not able to launch additional clusters by default.  (was: 
This is a minor change. It seems that setting up MiniDFSCluster once for all 
the command tests will reduce the total time.)

> Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test
> 
>
> Key: HDFS-10893
> URL: https://issues.apache.org/jira/browse/HDFS-10893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10893.000.patch
>
>
> It seems that setting up MiniDFSCluster once for all the command tests will 
> reduce the total time. To share a global cluster, the tests should use 
> individual test directories to avoid conflicts between test cases. Meanwhile, 
> the MiniDFSCluster should not use the default root data directory; otherwise 
> tests are not able to launch additional clusters by default.






[jira] [Updated] (HDFS-10893) Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10893:
-
Priority: Major  (was: Minor)

> Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test
> 
>
> Key: HDFS-10893
> URL: https://issues.apache.org/jira/browse/HDFS-10893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10893.000.patch
>
>
> This is a minor change. It seems that setting up MiniDFSCluster once for all 
> the command tests will reduce the total time.






[jira] [Commented] (HDFS-10893) Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test

2016-09-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527422#comment-15527422
 ] 

Mingliang Liu commented on HDFS-10893:
--

The overall running time on my laptop is reduced from 35+ seconds to about 12 
seconds.

> Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test
> 
>
> Key: HDFS-10893
> URL: https://issues.apache.org/jira/browse/HDFS-10893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-10893.000.patch
>
>
> This is a minor change. It seems that setting up MiniDFSCluster once for all 
> the command tests will reduce the total time.






[jira] [Updated] (HDFS-10893) Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10893:
-
Status: Patch Available  (was: Open)

> Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test
> 
>
> Key: HDFS-10893
> URL: https://issues.apache.org/jira/browse/HDFS-10893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-10893.000.patch
>
>
> This is a minor change. It seems that setting up MiniDFSCluster once for all 
> the command tests will reduce the total time.






[jira] [Updated] (HDFS-10893) Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test

2016-09-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10893:
-
Attachment: HDFS-10893.000.patch

The v0 patch:
- Sets up a global MiniDFSCluster shared by multiple test cases, and shuts it 
down after the tests
- Sets up test-case-specific directories
- Uses a dedicated root directory so that individual tests can launch another 
cluster without conflict

Some of the tests still use their own local clusters because 1) they need ad 
hoc configurations, 2) they need to restart the cluster, or 3) they may pollute 
the cluster state.
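As a rough illustration of the sharing scheme described above (hypothetical names; the real patch uses MiniDFSCluster and JUnit fixtures), an expensive fixture can be created once under a dedicated root, with each test case working in its own subdirectory:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the layout: one shared root created once (standing in
// for the MiniDFSCluster data directory), plus a per-test-case subdirectory so
// cases sharing the cluster do not step on each other's files.
public class SharedFixtureSketch {
  static Path clusterRoot;

  // Would run once per class, e.g. from a JUnit @BeforeClass method.
  static void setUpOnce() throws Exception {
    // A dedicated root (not the default data dir) leaves the default layout
    // free for tests that must start their own cluster.
    clusterRoot = Files.createTempDirectory("shared-cluster-");
  }

  // Each test case gets its own directory under the shared root.
  static Path dirFor(String testName) throws Exception {
    return Files.createDirectories(clusterRoot.resolve(testName));
  }
}
```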

> Refactor TestDFSShell by setting up MiniDFSCluser once for all commands test
> 
>
> Key: HDFS-10893
> URL: https://issues.apache.org/jira/browse/HDFS-10893
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-10893.000.patch
>
>
> This is a minor change. It seems that setting up MiniDFSCluser once for all 
> commands test will reduce the total time.





