[jira] [Comment Edited] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up

2017-12-22 Thread Jianfei Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302178#comment-16302178
 ] 

Jianfei Jiang edited comment on HDFS-12935 at 12/23/17 3:46 AM:


The failed test cases are unrelated to this patch; I have re-run them and they all passed.
[~brahmareddy], [~shahrs87], [~zhenyi], please review. Thanks a lot.

[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.779 
s - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
[INFO] Running org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.455 s 
- in org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
[INFO] Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.318 s 
- in org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
[INFO] Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
[INFO] Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 113.323 
s - in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
110.711 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
[INFO] Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
[WARNING] Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
110.061 s - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
[INFO] Running org.apache.hadoop.hdfs.TestRenameWhileOpen
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.687 s 
- in org.apache.hadoop.hdfs.TestRenameWhileOpen
[INFO] Running org.apache.hadoop.hdfs.TestEncryptionZones
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.161 
s - in org.apache.hadoop.hdfs.TestEncryptionZones
[INFO]
[INFO] Results:
[INFO]
[WARNING] Tests run: 118, Failures: 0, Errors: 0, Skipped: 4



was (Author: jiangjianfei):
The failed testcases are not related. I have re-run them and all passed.
Please review [~brahmareddy] [~shahrs87] [~zhenyi]. Thanks a lot.




[jira] [Commented] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up

2017-12-22 Thread Jianfei Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302178#comment-16302178
 ] 

Jianfei Jiang commented on HDFS-12935:
--

The failed test cases are unrelated to this patch; I have re-run them and they all passed.
Please review, [~brahmareddy]. Thanks a lot.



> Get ambiguous result for DFSAdmin command in HA mode when only one namenode 
> is up
> -
>
> Key: HDFS-12935
> URL: https://issues.apache.org/jira/browse/HDFS-12935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-beta1, 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
> Attachments: HDFS-12935.002.patch, HDFS-12935.003.patch, 
> HDFS_12935.001.patch
>
>
> In HA mode, if one namenode is down, most functions still work. Consider the 
> following two scenarios:
>  (1) nn1 up and nn2 down
>  (2) nn1 down and nn2 up
> These two scenarios should be equivalent. However, some of the DFSAdmin 
> commands give ambiguous results. The commands can be sent successfully to the 
> namenode that is up, but they are functionally useful only when nn1 is up, in 
> which case they succeed despite the exception (an IOException when connecting 
> to the down namenode nn2). If only nn2 is up, the commands are of no use at 
> all, and only the exception from connecting to nn1 is reported.
> Take the command "hdfs dfsadmin -setBalancerBandwidth", which aims to set the 
> balancer bandwidth value for datanodes, as an example. It works, and all the 
> datanodes receive the setting, only when nn1 is up. If only nn2 is up, the 
> command throws an exception directly and no datanode gets the bandwidth 
> setting. Approximately ten DFSAdmin commands use similar logic and may be 
> ambiguous.
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn1
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 12345
> *Balancer bandwidth is set to 12345 for jiangjianfei01/172.17.0.14:9820*
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei02:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn2
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 1234
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei01:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# 
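
For illustration only (not part of the report or the attached patches): a minimal sketch of the kind of client-side behavior the report argues for, namely sending the command to every configured namenode and failing only when all of them are unreachable. The {{NamenodeTarget}} interface and {{setOnAll}} helper below are hypothetical stand-ins, not Hadoop APIs.

{code:title=AllNamenodesSketch.java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical stand-in for an RPC proxy to one namenode. */
interface NamenodeTarget {
  String getAddress();
  void setBalancerBandwidth(long bytesPerSecond) throws IOException;
}

public class AllNamenodesSketch {
  /** Returns true if at least one namenode accepted the command. */
  static boolean setOnAll(List<NamenodeTarget> targets, long bandwidth) {
    List<IOException> failures = new ArrayList<>();
    for (NamenodeTarget nn : targets) {
      try {
        nn.setBalancerBandwidth(bandwidth);
        System.out.println("Balancer bandwidth is set to " + bandwidth
            + " for " + nn.getAddress());
      } catch (IOException e) {
        failures.add(e); // record the failure but keep trying the others
      }
    }
    // Fail the command only when no namenode could be reached, so
    // "nn1 down, nn2 up" behaves the same as "nn1 up, nn2 down".
    return failures.size() < targets.size();
  }
}
{code}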




[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302173#comment-16302173
 ] 

genericqa commented on HDFS-12574:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 37m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
4s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 2 new + 392 unchanged 
- 2 fixed = 394 total (was 394) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 42s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}279m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.server.blockmanagement.Tes

[jira] [Commented] (HDFS-12860) StripedBlockUtil#getRangesInternalBlocks throws exception for the block group size larger than 2GB

2017-12-22 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302005#comment-16302005
 ] 

Lei (Eddy) Xu commented on HDFS-12860:
--

Hi, [~Sammi] Thanks for the review.

bq. 1. It's great to add an error message to provide more information when the 
Precondition check fails. There are "%d" used in String.format and "%s" used in 
Preconditions. Is it because Preconditions doesn't support "%d"? 

Yes, Preconditions [only allow "%s" 
indicators|https://github.com/google/guava/wiki/PreconditionsExplained].
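
For reference, a minimal, self-contained example of the distinction (not from any patch here): Guava's {{Preconditions.checkArgument}} substitutes only {{%s}} placeholders into its message template, while {{String.format}} supports the full format syntax such as {{%d}}.

{code:title=PreconditionsDemo.java}
import com.google.common.base.Preconditions;

public class PreconditionsDemo {
  public static void main(String[] args) {
    long offset = 42;
    // Guava Preconditions: only %s placeholders are substituted.
    Preconditions.checkArgument(offset >= 0,
        "Offset(=%s) must be non-negative", offset);
    // String.format: full format syntax, e.g. %d for integral values.
    System.out.println(String.format("Offset(=%d) is non-negative", offset));
  }
}
{code}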

bq. Also I would suggest adding an end-to-end test case in 
TestErasureCodingPolicies.

Could you help clarify which case you would like to add? If it is about 
writing actual data larger than {{Integer.MAX_VALUE}} to exercise the 
precondition checks in {{VerticalRange#VerticalRange()}}, it would need to 
write at least {{numDataUnits * Integer.MAX_VALUE}} bytes. 
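
The 2 GB boundary in this issue's title is the {{int}} range: once a block group grows past {{Integer.MAX_VALUE}} bytes, any offset or size tracked as an {{int}} wraps negative and trips a non-negativity precondition. A toy illustration of the overflow (not the Hadoop code itself):

{code:title=OverflowDemo.java}
public class OverflowDemo {
  public static void main(String[] args) {
    long blockGroupSize = 3L * 1024 * 1024 * 1024; // 3 GB fits in a long
    int truncated = (int) blockGroupSize;          // wraps past 2^31 - 1
    System.out.println(blockGroupSize);            // 3221225472
    System.out.println(truncated);                 // -1073741824, negative
    // A check like Preconditions.checkArgument(truncated >= 0, ...) then
    // throws IllegalArgumentException, as in the terasort stack trace below.
  }
}
{code}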

> StripedBlockUtil#getRangesInternalBlocks throws exception for the block group 
> size larger than 2GB
> --
>
> Key: HDFS-12860
> URL: https://issues.apache.org/jira/browse/HDFS-12860
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12860.00.patch
>
>
> Running terasort on a cluster with 8 datanodes, 256GB data, using 
> RS-3-2-1024k.
> The test data was generated by {{teragen}} with 32 mappers.
> The terasort benchmark fails with the following stack trace:
> {code}
> 17/11/27 14:44:31 INFO mapreduce.Job:  map 45% reduce 0%
> 17/11/27 14:44:33 INFO mapreduce.Job: Task Id : 
> attempt_1510080297865_0160_m_08_0, Status : FAILED
> Error: java.lang.IllegalArgumentException
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>   at 
> org.apache.hadoop.hdfs.util.StripedBlockUtil$VerticalRange.<init>(StripedBlockUtil.java:701)
>   at 
> org.apache.hadoop.hdfs.util.StripedBlockUtil.getRangesForInternalBlocks(StripedBlockUtil.java:442)
>   at 
> org.apache.hadoop.hdfs.util.StripedBlockUtil.divideOneStripe(StripedBlockUtil.java:311)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:308)
>   at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:391)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:813)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.examples.terasort.TeraInputFormat$TeraRecordReader.nextKeyValue(TeraInputFormat.java:257)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:562)
>   at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>   at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}






[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-22 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12574:
--
Attachment: HDFS-12574.008.patch

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, 
> HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, 
> HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch
>
>







[jira] [Commented] (HDFS-12528) Short-circuit reads unnecessarily disabled for a long time

2017-12-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301870#comment-16301870
 ] 

Xiao Chen commented on HDFS-12528:
--

Looked at this again with the helpful unit test from John.

IMO we can do one or some of the following:
# Make the {{expireAfterWrite}} of {{DomainSocketFactory$pathMap}} configurable 
(a sketch follows below).
This is the lowest-risk option and gives clients the freedom. {{getPathInfo}} 
will return the {{VALID}} state if the entry is not found in {{pathMap}}, so 
setting it to 0 basically never disables the domain socket.
# Add a time-based counter on {{DomainSocketFactory}} and only disable after a 
configurable number of errors is seen within an interval (say, 10 errors in 10 
mins?) for a given path.
This is smarter and wouldn't require basic users to change the config.
The trade-off is that if a problematic block is read very frequently, we'll 
still hit this same issue.
# Add a field for the exception type to {{BlockOpResponseProto}} (or some morph 
of it). Anecdotally the reports here are all FNFE. We can update 
BlockReaderFactory to tolerate unknown errors that are FNFE, but still disable 
on other unknown errors.
This changes more code and slightly changes the behavior. Still 
compatible though.

In any case, when SCR yields a null {{BlockReader}}, the read will fall back to 
regular RPC.

I'm proposing we do #1 and #2. Thoughts?
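
To make option 1 concrete: {{pathMap}} is a Guava cache, so the disable window is just its {{expireAfterWrite}} setting. A minimal sketch under that assumption (the config plumbing and the {{PathState}} enum here are simplified stand-ins, not the actual {{DomainSocketFactory}} code):

{code:title=PathMapSketch.java}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class PathMapSketch {
  enum PathState { VALID, UNUSABLE }

  /** Build the disabled-path cache with a configurable expiry window. */
  static Cache<String, PathState> buildPathMap(long expiryMs) {
    return CacheBuilder.newBuilder()
        .expireAfterWrite(expiryMs, TimeUnit.MILLISECONDS)
        .build();
  }

  public static void main(String[] args) {
    // 10 minutes today; an expiry of 0 would make entries vanish at once,
    // i.e. the domain socket path is effectively never remembered as bad.
    Cache<String, PathState> pathMap = buildPathMap(10 * 60 * 1000L);
    pathMap.put("/var/run/hdfs-sockets/dn", PathState.UNUSABLE);
    // A cache miss (absent or expired entry) is treated as VALID.
    System.out.println(pathMap.getIfPresent("/var/run/hdfs-sockets/dn"));
  }
}
{code}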

> Short-circuit reads unnecessarily disabled for a long time
> --
>
> Key: HDFS-12528
> URL: https://issues.apache.org/jira/browse/HDFS-12528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, performance
>Affects Versions: 2.6.0
>Reporter: Andre Araujo
>Assignee: Xiao Chen
> Attachments: HDFS-12528.000.patch
>
>
> We have scenarios where data ingestion makes use of the -appendToFile 
> operation to add new data to existing HDFS files. In these situations, we're 
> frequently running into the problem described below.
> We're using Impala to query the HDFS data with short-circuit reads (SCR) 
> enabled. After each file read, Impala "unbuffer"'s the HDFS file to reduce 
> the memory footprint. In some cases, though, Impala still keeps the HDFS file 
> handle open for reuse.
> The "unbuffer" call, however, causes the file's current block reader to be 
> closed, which makes the associated ShortCircuitReplica evictable from the 
> ShortCircuitCache. When the cluster is under load, this means that the 
> ShortCircuitReplica can be purged off the cache pretty fast, which closes the 
> file descriptor to the underlying storage file.
> That means that when Impala re-reads the file it has to re-open the storage 
> files associated with the ShortCircuitReplica's that were evicted from the 
> cache. If there were no appends to those blocks, the re-open will succeed 
> without problems. If one block was appended since the ShortCircuitReplica was 
> created, the re-open will fail with the following error:
> {code}
> Meta file for BP-810388474-172.31.113.69-1499543341726:blk_1074012183_273087 
> not found
> {code}
> This error is handled as an "unknown response" by the BlockReaderFactory [1], 
> which disables short-circuit reads for 10 minutes [2] for the client.
> These 10 minutes without SCR can have a big performance impact for the client 
> operations. In this particular case ("Meta file not found") it would suffice 
> to return null without disabling SCR. This particular block read would fall 
> back to the normal, non-short-circuited, path and other SCR requests would 
> continue to work as expected.
> It might also be interesting to be able to control how long SCR is disabled 
> in the "unknown response" case. 10 minutes seems a bit too long, and not 
> being able to change that is a problem.
> [1] 
> https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java#L646
> [2] 
> https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java#L97






[jira] [Commented] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301802#comment-16301802
 ] 

Xiao Chen commented on HDFS-11847:
--

Thanks Manoj for revving and the comments!

Looks good overall. Some additional comments:
- I'm okay with splitting the DN and type changes into another jira. Please 
create the jira and add a comment in {{FSN#listOpenFiles}} pointing to it. 
It looks like the new stuff doesn't require API changes, which is great.
- Sorry I wasn't clear in the earlier comment #1. Our [compat 
policy|http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/Compatibility.html]
 has different aspects. HDFS-10480 is in 2.9.0 and 2.8.3, so I think we need to 
keep the old APIs for API and wire compat, and add the new one (e.g. 
HDFSAdmin.java is public evolving, DFS.java is LimitedPrivate). [~andrew.wang], 
am I correct?
- We should respect {{maxListOpenFilesResponses}} in 
{{FSN#getFilesBlockingDecom}}, to make the batched list indeed batched. :)
- Adding an extra tab to the middle of the CLI output format is usually frowned 
upon by admins - imagine a script that parses the output with a \t 
delimiter. 
- {{HDFSCommands.md}} has an unnecessary change in the first line 
{{+
> Key: HDFS-11847
> URL: https://issues.apache.org/jira/browse/HDFS-11847
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11847.01.patch, HDFS-11847.02.patch, 
> HDFS-11847.03.patch, HDFS-11847.04.patch
>
>
> HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to list 
> all the open files in the system.
> Additionally, it would be very useful to list only the open files that are 
> blocking DataNode decommissioning. With thousand-plus node clusters, where 
> machines might be added and removed regularly for maintenance, any option to 
> monitor and debug decommissioning status is very helpful. The proposal here is 
> to add suboptions to {{listOpenFiles}} for the above case.






[jira] [Comment Edited] (HDFS-12960) The audit log recorded the wrong result when the delete API returns false

2017-12-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301787#comment-16301787
 ] 

Wei-Chiu Chuang edited comment on HDFS-12960 at 12/22/17 6:40 PM:
--

Hi [~xiaodong.hu], thanks for filing the issue.

The audit logger should record a success whenever the operation is authorized.
{code:title=HdfsAuditLogger}
/**
   * Same as
   * {@link #logAuditEvent(boolean, String, InetAddress, String, String, String,
   * FileStatus)} with additional parameters related to logging delegation token
   * tracking IDs.
   * 
   * @param succeeded Whether authorization succeeded.
   * @param userName Name of the user executing the request.
   * @param addr Remote address of the request.
   * @param cmd The requested command.
   * @param src Path of affected source file.
   * @param dst Path of affected destination file (if any).
   * @param stat File information for operations that change the file's metadata
   *  (permissions, owner, times, etc).
   * @param callerContext Context information of the caller
   * @param ugi UserGroupInformation of the current user, or null if not logging
   *  token tracking information
   * @param dtSecretManager The token secret manager, or null if not logging
   *  token tracking information
   */
  public void logAuditEvent(boolean succeeded, String userName,
  InetAddress addr, String cmd, String src, String dst,
  FileStatus stat, CallerContext callerContext, UserGroupInformation ugi,
  DelegationTokenSecretManager dtSecretManager) {
logAuditEvent(succeeded, userName, addr, cmd, src, dst, stat,
  ugi, dtSecretManager);
  }
{code}
When delete returns false, it means the files were not actually removed. Looking 
at the HDFS implementation, it returns false if no blocks were removed (for 
example, when the file is 0-byte):
{code:title=ClientProtocol}
/**
   * Delete the given file or directory from the file system.
   * 
   * same as delete but provides a way to avoid accidentally
   * deleting non empty directories programmatically.
   * @param src existing name
   * @param recursive if true deletes a non empty directory recursively,
   * else throws an exception.
   * @return true only if the existing file or directory was actually removed
   * from the file system.
   *
   * @throws org.apache.hadoop.security.AccessControlException If access is
   *   denied
   * @throws java.io.FileNotFoundException If file src is not found
   * @throws org.apache.hadoop.hdfs.server.namenode.SafeModeException create not
   *   allowed in safemode
   * @throws org.apache.hadoop.fs.UnresolvedLinkException If src
   *   contains a symlink
   * @throws SnapshotAccessControlException if path is in RO snapshot
   * @throws IOException If an I/O error occurred
   */
  @AtMostOnce
  boolean delete(String src, boolean recursive)
  throws IOException;
{code}
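
For illustration (not part of any patch here), the caller-visible contract: {{FileSystem#delete}} returns a boolean rather than throwing when nothing is removed, so a false return is a normal, authorized outcome that is distinct from an access failure.

{code:title=DeleteResultDemo.java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteResultDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path src = new Path("/tmp/example");
    // The call was authorized and executed either way; a false return
    // only means no file or directory was actually removed.
    boolean removed = fs.delete(src, true /* recursive */);
    System.out.println(removed
        ? "removed " + src
        : "nothing removed for " + src + " (still an authorized op)");
  }
}
{code}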




> The audit log recorded the wrong result when the delete API returns false
> 
>
> Key: HDFS-12960
> URL: https://issues.apache.org/jira/browse/HDFS-12960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: hu xiaodong
>Assignee: hu xiaodong
> Attachments: HDFS-12960.001.patch
>
>
> The audit log recorded the wrong result when the delete API returns false






[jira] [Updated] (HDFS-12629) NameNode UI should report total blocks count by type - replicated and erasure coded

2017-12-22 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12629:
--
Status: Patch Available  (was: Open)

> NameNode UI should report total blocks count by type - replicated and erasure 
> coded
> ---
>
> Key: HDFS-12629
> URL: https://issues.apache.org/jira/browse/HDFS-12629
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12629.01.patch, 
> NN_UI_Summary_BlockCount_AfterFix.png, NN_UI_Summary_BlockCount_BeforeFix.png
>
>
> Currently the NameNode UI displays the total files and directories and the 
> total blocks in the cluster under the Summary tab. But the total block count 
> split by type is missing. It would be good if we could display total block 
> counts by type (provided by HDFS-12573) along with the total block count.






[jira] [Updated] (HDFS-12629) NameNode UI should report total blocks count by type - replicated and erasure coded

2017-12-22 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12629:
--
Attachment: HDFS-12629.01.patch

Attached the v01 patch to report separate block stats -- Replicated blocks and 
Erasure Coded block groups -- in the NN UI Summary page.
[~eddyxu], could you please take a look at the patch?

> NameNode UI should report total blocks count by type - replicated and erasure 
> coded
> ---
>
> Key: HDFS-12629
> URL: https://issues.apache.org/jira/browse/HDFS-12629
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12629.01.patch, 
> NN_UI_Summary_BlockCount_AfterFix.png, NN_UI_Summary_BlockCount_BeforeFix.png
>
>
> Currently the NameNode UI displays the total files and directories and the 
> total blocks in the cluster under the Summary tab. But the total block count 
> split by type is missing. It would be good if we could display total block 
> counts by type (provided by HDFS-12573) along with the total block count.






[jira] [Updated] (HDFS-12629) NameNode UI should report total blocks count by type - replicated and erasure coded

2017-12-22 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12629:
--
Attachment: NN_UI_Summary_BlockCount_AfterFix.png

> NameNode UI should report total blocks count by type - replicated and erasure 
> coded
> ---
>
> Key: HDFS-12629
> URL: https://issues.apache.org/jira/browse/HDFS-12629
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12629.01.patch, 
> NN_UI_Summary_BlockCount_AfterFix.png, NN_UI_Summary_BlockCount_BeforeFix.png
>
>
> Currently the NameNode UI displays the total files and directories and the 
> total blocks in the cluster under the Summary tab. But the total block count 
> split by type is missing. It would be good if we could display total block 
> counts by type (provided by HDFS-12573) along with the total block count.






[jira] [Commented] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301292#comment-16301292
 ] 

genericqa commented on HDFS-12935:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
6s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRenameWhileOpen |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12935 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903371/HDFS-12935.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux edc9ad327262 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 76e664e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2249

[jira] [Commented] (HDFS-12960) The audit log recorded the wrong result when the delete API returns false

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301270#comment-16301270
 ] 

genericqa commented on HDFS-12960:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
1s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
|
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903369/HDFS-12960.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |

[jira] [Updated] (HDFS-12955) [SPS]: Move SPS classes to a separate package

2017-12-22 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-12955:

   Resolution: Fixed
Fix Version/s: HDFS-10285
   Status: Resolved  (was: Patch Available)

Thanks [~umamaheswararao]. I've changed the jira status to resolved.

> [SPS]: Move SPS classes to a separate package
> -
>
> Key: HDFS-12955
> URL: https://issues.apache.org/jira/browse/HDFS-12955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nn
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
>Priority: Trivial
> Fix For: HDFS-10285
>
> Attachments: HDFS-12955-HDFS-10285-00.patch, 
> HDFS-12955-HDFS-10285-01.patch
>
>
> For clean modularization, it would be good if we moved the SPS-related classes 
> into their own package.






[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up

2017-12-22 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-12935:
-
Status: Patch Available  (was: In Progress)

> Get ambiguous result for DFSAdmin command in HA mode when only one namenode 
> is up
> -
>
> Key: HDFS-12935
> URL: https://issues.apache.org/jira/browse/HDFS-12935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0, 3.0.0-beta1
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
> Attachments: HDFS-12935.002.patch, HDFS-12935.003.patch, 
> HDFS_12935.001.patch
>
>
> In HA mode, if one namenode is down, most functions still work. Consider the 
> following two scenarios:
>  (1) nn1 up and nn2 down
>  (2) nn1 down and nn2 up
> These two scenarios should be equivalent. However, some of the DFSAdmin 
> commands give ambiguous results. The commands can be sent successfully to the 
> namenode that is up, but they are functionally useful only when nn1 is up, in 
> which case they succeed despite the exception (an IOException when connecting 
> to the down namenode nn2). If only nn2 is up, the commands are of no use at 
> all, and only the exception from connecting to nn1 is reported.
> Take the command "hdfs dfsadmin -setBalancerBandwidth", which aims to set the 
> balancer bandwidth value for datanodes, as an example. It works, and all the 
> datanodes receive the setting, only when nn1 is up. If only nn2 is up, the 
> command throws an exception directly and no datanode gets the bandwidth 
> setting. Approximately ten DFSAdmin commands use similar logic and may be 
> ambiguous.
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn1
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 12345
> *Balancer bandwidth is set to 12345 for jiangjianfei01/172.17.0.14:9820*
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei02:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn2
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 1234
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei01:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# 
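
For illustration, here is a minimal sketch of the direction a fix can take: run the admin RPC against every configured namenode and treat the command as successful if at least one live namenode accepts it, instead of aborting on the first connection failure. The class and helper names below are hypothetical stand-ins, not the actual HDFS-12935 patch.

{code:java}
import java.io.IOException;
import java.util.List;

// Hypothetical sketch: issue the command to every configured namenode and
// succeed if at least one accepts it, rather than failing fast on the first
// ConnectException. Names are illustrative, not the HDFS-12935 patch.
public class HaAwareAdminCommand {

  /** One admin RPC executed against one namenode proxy. */
  interface NamenodeOp {
    void run(NamenodeProxy proxy) throws IOException;
  }

  /** Stand-in for the per-namenode RPC proxy. */
  interface NamenodeProxy {
    String getAddress();
  }

  static void runOnAllNamenodes(List<NamenodeProxy> proxies, NamenodeOp op)
      throws IOException {
    IOException lastError = null;
    int successes = 0;
    for (NamenodeProxy proxy : proxies) {
      try {
        op.run(proxy);          // e.g. setBalancerBandwidth on this namenode
        successes++;
      } catch (IOException e) { // namenode down: remember it, try the others
        lastError = e;
        System.err.println("Skipping unreachable namenode "
            + proxy.getAddress() + ": " + e.getMessage());
      }
    }
    if (successes == 0 && lastError != null) {
      throw lastError;          // fail only if *no* namenode accepted it
    }
  }
}
{code}

With this shape, scenarios (1) and (2) behave the same: the setting reaches the datanodes through whichever namenode is up, and an exception surfaces only when both are down.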



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up

2017-12-22 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-12935:
-
Attachment: HDFS-12935.003.patch

Fixed the three checkstyle issues introduced by the 002 patch.

> Get ambiguous result for DFSAdmin command in HA mode when only one namenode 
> is up
> -
>
> Key: HDFS-12935
> URL: https://issues.apache.org/jira/browse/HDFS-12935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-beta1, 3.0.0
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
> Attachments: HDFS-12935.002.patch, HDFS-12935.003.patch, 
> HDFS_12935.001.patch
>
>
> In HA mode, most functions still work when one namenode is down. Consider 
> the following two scenarios:
>  (1) nn1 up and nn2 down
>  (2) nn1 down and nn2 up
> These two scenarios should be equivalent. However, some DFSAdmin commands 
> give ambiguous results. The commands are sent successfully to the live 
> namenode and take effect only when nn1 is up, regardless of the exception 
> (an IOException when connecting to the down namenode nn2). If only nn2 is 
> up, the commands have no effect at all, and only the exception from 
> connecting to nn1 is reported.
> Take the command "hdfs dfsadmin -setBalancerBandwidth", which sets the 
> balancer bandwidth value for the datanodes, as an example. It works, and 
> all the datanodes receive the new value, only when nn1 is up. If only nn2 
> is up, the command throws an exception immediately and no datanode receives 
> the bandwidth setting. Approximately ten DFSAdmin commands follow the same 
> logic and may be ambiguous.
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn1
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 12345
> *Balancer bandwidth is set to 12345 for jiangjianfei01/172.17.0.14:9820*
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei02:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn2
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 1234
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei01:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up

2017-12-22 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HDFS-12935:
-
Status: In Progress  (was: Patch Available)

> Get ambiguous result for DFSAdmin command in HA mode when only one namenode 
> is up
> -
>
> Key: HDFS-12935
> URL: https://issues.apache.org/jira/browse/HDFS-12935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0, 3.0.0-beta1
>Reporter: Jianfei Jiang
>Assignee: Jianfei Jiang
> Attachments: HDFS-12935.002.patch, HDFS_12935.001.patch
>
>
> In HA mode, most functions still work when one namenode is down. Consider 
> the following two scenarios:
>  (1) nn1 up and nn2 down
>  (2) nn1 down and nn2 up
> These two scenarios should be equivalent. However, some DFSAdmin commands 
> give ambiguous results. The commands are sent successfully to the live 
> namenode and take effect only when nn1 is up, regardless of the exception 
> (an IOException when connecting to the down namenode nn2). If only nn2 is 
> up, the commands have no effect at all, and only the exception from 
> connecting to nn1 is reported.
> Take the command "hdfs dfsadmin -setBalancerBandwidth", which sets the 
> balancer bandwidth value for the datanodes, as an example. It works, and 
> all the datanodes receive the new value, only when nn1 is up. If only nn2 
> is up, the command throws an exception immediately and no datanode receives 
> the bandwidth setting. Approximately ten DFSAdmin commands follow the same 
> logic and may be ambiguous.
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn1
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 12345
> *Balancer bandwidth is set to 12345 for jiangjianfei01/172.17.0.14:9820*
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei02:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# hdfs haadmin -getServiceState nn2
> active
> [root@jiangjianfei01 ~]# hdfs dfsadmin -setBalancerBandwidth 1234
> setBalancerBandwidth: Call From jiangjianfei01/172.17.0.14 to 
> jiangjianfei01:9820 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> [root@jiangjianfei01 ~]# 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12960) The audit log recorded the wrong result when the delete API returns false

2017-12-22 Thread hu xiaodong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12960 started by hu xiaodong.
--
> The audit log recorded the wrong result when the delete API returns false
> 
>
> Key: HDFS-12960
> URL: https://issues.apache.org/jira/browse/HDFS-12960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: hu xiaodong
>Assignee: hu xiaodong
> Attachments: HDFS-12960.001.patch
>
>
> The audit log recorded the wrong result when the delete API returns false
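
For context, here is a minimal sketch of the logging pattern at issue, with hypothetical names (illustrative only, not the actual FSNamesystem code): the audit entry should record the boolean the delete call actually returned, rather than logging success unconditionally.

{code:java}
// Hypothetical sketch of the audit-logging pattern; names are illustrative.
public class AuditedDelete {

  interface AuditLogger {
    void log(boolean succeeded, String cmd, String src);
  }

  interface Namespace {
    /** Returns false when nothing was deleted, e.g. the path is absent. */
    boolean delete(String src, boolean recursive);
  }

  static boolean delete(Namespace ns, AuditLogger audit,
                        String src, boolean recursive) {
    boolean ret = ns.delete(src, recursive);
    // Buggy variant: audit.log(true, "delete", src) regardless of ret.
    // Correct variant: record what the API actually returned.
    audit.log(ret, "delete", src);
    return ret;
  }
}
{code}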



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12960) The audit log recorded the wrong result when the delete API returns false

2017-12-22 Thread hu xiaodong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong updated HDFS-12960:
---
Attachment: HDFS-12960.001.patch

> The audit log recorded the wrong result when the delete API returns false
> 
>
> Key: HDFS-12960
> URL: https://issues.apache.org/jira/browse/HDFS-12960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: hu xiaodong
>Assignee: hu xiaodong
> Attachments: HDFS-12960.001.patch
>
>
> The audit log recorded the wrong result when the delete API returns false



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12960) The audit log recorded the wrong result when the delete API returns false

2017-12-22 Thread hu xiaodong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong updated HDFS-12960:
---
Status: Patch Available  (was: In Progress)

> The audit log recorded the wrong result when the delete API returns false
> 
>
> Key: HDFS-12960
> URL: https://issues.apache.org/jira/browse/HDFS-12960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha4
>Reporter: hu xiaodong
>Assignee: hu xiaodong
> Attachments: HDFS-12960.001.patch
>
>
> The audit log recorded the wrong result when the delete API returns false



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12935) Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up

2017-12-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301090#comment-16301090
 ] 

genericqa commented on HDFS-12935:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
22s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 202 unchanged - 0 fixed = 205 total (was 202) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12935 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12903340/HDFS-12935.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4ecf33f5ba8a 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 76e664e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22497/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22497/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22

[jira] [Created] (HDFS-12960) The audit log recorded the wrong result when the delete API returns false

2017-12-22 Thread hu xiaodong (JIRA)
hu xiaodong created HDFS-12960:
--

 Summary: The audit log recorded the wrong result when the delete 
API returns false
 Key: HDFS-12960
 URL: https://issues.apache.org/jira/browse/HDFS-12960
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0-alpha4
Reporter: hu xiaodong
Assignee: hu xiaodong


The audit log recorded the wrong result when the delete API returns false



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org