[jira] [Updated] (HDFS-11120) TestEncryptionZones should waitActive

2016-11-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11120:
--
Attachment: HDFS-11120.002.patch

Patch 002:
* Suppress checkstyle {{MethodLengthCheck}} for 
{{TestEncryptionZones#testBasicOperations}}
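
For readers unfamiliar with checkstyle suppressions: one common way to silence a single check for one method is the {{@SuppressWarnings}} route, which works when the checkstyle configuration registers {{SuppressWarningsHolder}} and {{SuppressWarningsFilter}}. The snippet below is only an illustrative sketch of that mechanism, not the actual content of patch 002.

{code}
// Illustrative sketch only -- not the actual HDFS-11120 patch. Relies on the
// checkstyle config enabling SuppressWarningsHolder + SuppressWarningsFilter.
import org.junit.Test;

public class TestEncryptionZones {

  @SuppressWarnings("checkstyle:methodlength")
  @Test
  public void testBasicOperations() throws Exception {
    // ... long test body that would otherwise trip MethodLengthCheck ...
  }
}
{code}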

> TestEncryptionZones should waitActive
> -
>
> Key: HDFS-11120
> URL: https://issues.apache.org/jira/browse/HDFS-11120
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11120.001.patch, HDFS-11120.002.patch
>
>
> Happened to notice this.
> {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. 
> There's also a test case that does an unnecessary waitActive:
> {code}
> cluster.restartNameNode(true);
> cluster.waitActive();
> {code}
> We should fix this.
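
A minimal sketch of the proposed change (field and method names below are assumptions based on the description above, not the actual patch):

{code}
import org.junit.Before;
import org.junit.Test;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TestEncryptionZonesSketch {
  private Configuration conf;
  private MiniDFSCluster cluster;
  private FileSystem fs;

  @Before
  public void setup() throws Exception {
    conf = new HdfsConfiguration();
    cluster = new MiniDFSCluster.Builder(conf).build();
    cluster.waitActive();            // the missing call: block until the NameNode is up
    fs = cluster.getFileSystem();
  }

  @Test
  public void restartExample() throws Exception {
    cluster.restartNameNode(true);   // 'true' already waits for the restarted NameNode
    // cluster.waitActive();         // redundant after the call above
  }
}
{code}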



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2016-11-08 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649965#comment-15649965
 ] 

Takanobu Asanuma commented on HDFS-11121:
-

The failed test is not related to the patch.

[~jingzhao], could you take a look? :)

> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11121.1.patch
>
>
> There are no assertions in {{BlockInfo.addStorage}}. This may allow
> {{BlockInfo}} instances to accept unexpected block reports, leading to serious
> bugs like HDFS-10858.
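
To make the intent concrete, here is a rough, self-contained sketch of the kind of guard being proposed; the class, the id-masking helper, and the mask value are simplifications invented for illustration, not the HDFS internals or the eventual patch.

{code}
/** Hypothetical, simplified stand-in for a striped BlockInfo. */
class BlockGroupSketch {
  private final long groupId;

  BlockGroupSketch(long groupId) {
    this.groupId = groupId;
  }

  /** Strips the block-index bits of a reported block id (illustrative mask only). */
  private static long toGroupId(long reportedBlockId) {
    return reportedBlockId & ~0x0FL;
  }

  boolean addStorage(long reportedBlockId) {
    // The proposed guard: a report whose block does not belong to this group
    // must never silently corrupt the reportedBlock -> blockGroup mapping.
    assert toGroupId(reportedBlockId) == groupId
        : "reported block " + reportedBlockId + " does not belong to group " + groupId;
    // ... storage bookkeeping would go here ...
    return true;
  }
}
{code}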



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649943#comment-15649943
 ] 

Hadoop QA commented on HDFS-11121:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 13 unchanged - 4 fixed = 13 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11121 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838120/HDFS-11121.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b3845a996a82 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed0beba |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17483/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17483/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17483/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
>

[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Andres Perez (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649932#comment-15649932
 ] 

Andres Perez commented on HDFS-8307:


The whitespace, JDK7 unit test, and ASF license failures are not related to 
the patch.

> Spurious DNS Queries from hdfs shell
> 
>
> Key: HDFS-8307
> URL: https://issues.apache.org/jira/browse/HDFS-8307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Andres Perez
>Priority: Trivial
>  Labels: ha
> Fix For: 2.7.4
>
> Attachments: HDFS-8307-branch-2.7.patch
>
>
> With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to 
> issue a DNS query for the cluster name. If fs.defaultFS is set to 
> hdfs://mycluster, then the shell seems to issue a DNS query for 
> mycluster.FQDN or mycluster.
> Since mycluster is not a machine name, the DNS query always fails with 
> "DNS 85 Standard query response 0x2aeb No such name".
> Repro Steps:
> # Set up an HA cluster 
> # Log on to any node
> # Run wireshark monitoring port 53 - "sudo tshark 'port 53'"
> # Run "sudo -u hdfs hdfs dfs -ls /" 
> # You should be able to see DNS queries to mycluster.FQDN in wireshark
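
A minimal sketch of the idea behind avoiding the lookup, assuming the client checks whether the URI authority is a logical HA nameservice before ever treating it as a host name (this is only an illustration, not the attached patch):

{code}
import java.net.InetAddress;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HAUtil;

public class LogicalUriCheckSketch {
  static void maybeResolve(Configuration conf, URI defaultFs) throws Exception {
    if (HAUtil.isLogicalUri(conf, defaultFs)) {
      // Logical nameservice such as hdfs://mycluster: resolve the configured
      // NameNode addresses instead of the nameservice name itself.
      return;
    }
    // Only a real host name should ever reach DNS.
    InetAddress.getByName(defaultFs.getHost());
  }
}
{code}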



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649920#comment-15649920
 ] 

Hadoop QA commented on HDFS-8307:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
24s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1317 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
30s{color} | {color:red} The patch 76 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_111 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\

[jira] [Updated] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2016-11-08 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11121:

Status: Patch Available  (was: Open)

> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11121.1.patch
>
>
> There are no assertions in {{BlockInfo.addStorage}}. This may allow
> {{BlockInfo}} instances to accept unexpected block reports, leading to serious
> bugs like HDFS-10858.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2016-11-08 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11121:

Attachment: HDFS-11121.1.patch

I uploaded the first patch.

> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11121.1.patch
>
>
> There are no assertions in {{BlockInfo.addStorage}}. This may allow
> {{BlockInfo}} instances to accept unexpected block reports, leading to serious
> bugs like HDFS-10858.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649811#comment-15649811
 ] 

Yiqun Lin commented on HDFS-11116:
--

Hi [~ajisakaa], could you take a quick look at this? I think this is a minor 
change and will not cost you much time :). Thanks in advance.

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} in place of the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> path outside any mount point in the filesystem, to trigger 
> {{NotInMountpointException}} in the test.
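
A short sketch of the suggested replacement, assuming {{fsView}} is the FileSystem instance used by the test and the path below is an arbitrary example outside the configured mount points:

{code}
// Deprecated, mount-point-agnostic variants being flagged by Jenkins:
//   fsView.getDefaultBlockSize();
//   fsView.getDefaultReplication();
//   fsView.getServerDefaults();

// Path-based variants resolve against the mount table, and a path outside any
// mount point lets the test still exercise NotInMountpointException:
Path notInMountpoint = new Path("/not/in/mountpoint");
long blockSize = fsView.getDefaultBlockSize(notInMountpoint);
short replication = fsView.getDefaultReplication(notInMountpoint);
FsServerDefaults defaults = fsView.getServerDefaults(notInMountpoint);
{code}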



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11120) TestEncryptionZones should waitActive

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649799#comment-15649799
 ] 

Hadoop QA commented on HDFS-11120:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 43 unchanged - 1 fixed = 44 total (was 44) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11120 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838113/HDFS-11120.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f5a895b61615 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed0beba |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17481/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17481/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17481/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17481/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestEncryptionZones should waitActive
> -
>
> Key: HDFS-11120
> URL: 

[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649778#comment-15649778
 ] 

Wei-Chiu Chuang commented on HDFS-11056:


The branch-2 test failures are not related to the patch.

> Concurrent append and read operations lead to checksum error
> 
>
> Key: HDFS-11056
> URL: https://issues.apache.org/jira/browse/HDFS-11056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, httpfs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, 
> HDFS-11056.branch-2.patch, HDFS-11056.reproduce.patch
>
>
> If there are two clients, one continuously doing open-append-close on a file 
> while the other continuously does open-read-close on the same file, the reader 
> eventually gets a checksum error in the data read.
> On my local Mac, it takes a few minutes to produce the error. This happens to 
> httpfs clients, but there's no reason to believe it doesn't happen to any 
> other append clients.
> I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log:
> {quote}
> 2016-10-25 15:34:45,153 INFO  audit - allowed=trueugi=weichiu 
> (auth:SIMPLE)   ip=/127.0.0.1   cmd=opensrc=/tmp/bar.txt
> dst=nullperm=null   proto=rpc
> 2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
> /127.0.0.1:51130 dest: /127.0.0.1:50131
> 2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
> blk_1073741825_1182, FINALIZED
>   getNumBytes() = 182
>   getBytesOnDisk()  = 182
>   getVisibleLength()= 182
>   getVolume()   = 
> /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
>   getBlockURI() = 
> file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
> 2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
> 2016-10-25 15:34:45,167 WARN  DataNode - 
> DatanodeRegistration(127.0.0.1:50131, 
> datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
> infoSecurePort=0, ipcPort=50134, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
> exception while serving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
> newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
> 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
> processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success
> 2016-10-25 15:34:45,170 WARN  DFSClient - Found Checksum error for 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 from 
> 
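
The quoted log above is truncated in this digest. For readers who want the shape of the scenario, here is a minimal, hypothetical repro sketch of the two-client pattern described in the issue; it is not the unit test mentioned above, and the file name, chunk size, and loop structure are assumptions.

{code}
// Assumes 'cluster' is a running MiniDFSCluster.
final FileSystem fs = cluster.getFileSystem();
final Path file = new Path("/tmp/bar.txt");
final byte[] chunk = new byte[17];                  // arbitrary small append size

try (FSDataOutputStream out = fs.create(file)) {    // create the file once
  out.write(chunk);
}

Thread appender = new Thread(() -> {
  try {
    while (true) {                                  // open-append-close continuously
      try (FSDataOutputStream out = fs.append(file)) {
        out.write(chunk);
      }
    }
  } catch (IOException e) { /* stop when the cluster shuts down */ }
});

Thread reader = new Thread(() -> {
  try {
    while (true) {                                  // open-read-close continuously
      try (FSDataInputStream in = fs.open(file)) {
        byte[] buf = new byte[4096];
        while (in.read(buf) > 0) { }                // a ChecksumException here reproduces the bug
      }
    }
  } catch (IOException e) { /* the reported checksum error lands here */ }
});

appender.start();
reader.start();
{code}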

[jira] [Updated] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9482:
---
Fix Version/s: 2.9.0

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per [~arpitagarwal]'s comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],
>  replace the DatanodeInfo constructors with a builder pattern.
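
For context, a compact illustration of the builder-pattern direction being proposed; the class and field names below are simplified placeholders, not the actual DatanodeInfo fields or the HDFS-9482 patch:

{code}
public class NodeInfo {
  private final String hostName;
  private final String ipAddr;
  private final long capacity;

  private NodeInfo(Builder b) {
    this.hostName = b.hostName;
    this.ipAddr = b.ipAddr;
    this.capacity = b.capacity;
  }

  public static class Builder {
    private String hostName;
    private String ipAddr;
    private long capacity;

    public Builder setHostName(String v) { this.hostName = v; return this; }
    public Builder setIpAddr(String v)   { this.ipAddr = v;   return this; }
    public Builder setCapacity(long v)   { this.capacity = v; return this; }

    public NodeInfo build() {
      return new NodeInfo(this);
    }
  }
}

// Usage replaces a long positional constructor call:
// NodeInfo dn = new NodeInfo.Builder().setHostName("dn1").setIpAddr("10.0.0.1").build();
{code}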



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649769#comment-15649769
 ] 

Brahma Reddy Battula commented on HDFS-9482:


Pushed to branch-2.

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per [~arpitagarwal]'s comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],
>  replace the DatanodeInfo constructors with a builder pattern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11120) TestEncryptionZones should waitActive

2016-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649731#comment-15649731
 ] 

Xiao Chen commented on HDFS-11120:
--

Thanks [~jzhuge] for the quick fix. +1 pending jenkins.

> TestEncryptionZones should waitActive
> -
>
> Key: HDFS-11120
> URL: https://issues.apache.org/jira/browse/HDFS-11120
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11120.001.patch
>
>
> Happened to notice this.
> {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. 
> There's also a test case that does an unnecessary waitActive:
> {code}
> cluster.restartNameNode(true);
> cluster.waitActive();
> {code}
> We should fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649710#comment-15649710
 ] 

Brahma Reddy Battula commented on HDFS-9482:


[~arpitagarwal], thanks a lot for the review and for committing to trunk.
bq.  This was not an easy patch to review and I am sure not very fun to write 
either.
Hmm, yes, it was not easy. Thanks a bunch again for your time reviewing this.
bq.if the only difference in the branch-2 patch is the EC changes then I am +1 
for that too. Feel free to commit the branch-2 patch Brahma Reddy Battula.
Yes, that is the only difference; I will commit it to branch-2.
bq.I posted a comment on HDFS-9371. If we can merge that to branch-2.8 then we 
don't need a separate branch-2.8 patch here.
OK, I will hold the commit until HDFS-9371 gets into branch-2.8.

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per [~arpitagarwal]'s comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],
>  replace the DatanodeInfo constructors with a builder pattern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HDFS-8307:
---
Labels: ha  (was: )

> Spurious DNS Queries from hdfs shell
> 
>
> Key: HDFS-8307
> URL: https://issues.apache.org/jira/browse/HDFS-8307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Andres Perez
>Priority: Trivial
>  Labels: ha
> Fix For: 2.7.4
>
> Attachments: HDFS-8307-branch-2.7.patch
>
>
> With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to 
> issue a DNS query for the cluster name. If fs.defaultFS is set to 
> hdfs://mycluster, then the shell seems to issue a DNS query for 
> mycluster.FQDN or mycluster.
> Since mycluster is not a machine name, the DNS query always fails with 
> "DNS 85 Standard query response 0x2aeb No such name".
> Repro Steps:
> # Set up an HA cluster 
> # Log on to any node
> # Run wireshark monitoring port 53 - "sudo tshark 'port 53'"
> # Run "sudo -u hdfs hdfs dfs -ls /" 
> # You should be able to see DNS queries to mycluster.FQDN in wireshark



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time

2016-11-08 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649705#comment-15649705
 ] 

Rakesh R commented on HDFS-10996:
-

Thanks [~Sammi] for the good work. I have a few comments on the patch; please 
take care of them.
# A few typos: 
{code}
"file's parent director" should be "file's parent directory"

"erasureCodingPlolicy" should be "erasureCodingPolicy"
{code}
# I couldn't see any test cases covering valid or invalid 
{{erasureCodingPolicy}} parameter values. I'd appreciate more test cases 
covering this.
# Please provide javadoc for the new API DistributedFileSystem#create().
# I'd suggest rephrasing this javadoc to say that DFS supports arbitrary 
replication factors, not just 3x:
{code}
+   * @param erasureCodingPolicy the name of erasure coding policy. A empty
+   *string value means this file will inherit its
+   *parent group's policy, either 3x replication or
+   *erasure coding policy.
{code}
It could read like the following, or something better:
{code}
+   * @param erasureCodingPolicy the name of erasure coding policy. A empty
+   *string value means this file will inherit its
+   *parent group's policy. If parent doesn't have
+   *erasure code policy configured then continue
+   *with the traditional file replication mode.
{code}
# Could you simplify {{if (erasureCodingPolicy !=  null && 
(!erasureCodingPolicy.isEmpty())) {}} check with 
{code}
org.apache.commons.lang.StringUtils.isBlank(erasureCodingPolicy)
{code}
# Missing indentation for {{erasureCodingPolicy+ " ] does not match any of the 
" +}}. There should be a space between {{erasureCodingPolicy}} and {{+}}:
{code}
erasureCodingPolicy + " ] does not match any of the " +
{code}
# It looks like the test case failures are related to the patch. I just 
analysed {{TestFcHdfsCreateMkdir/testMkdirRecursiveWithExistingDir_2/}} and 
could see that the test failed due to a {{null}} value passed to 
{{protos#setErasureCodingPolicy}}. Please take care of this. Thanks!

In general, I assume you tried improving the {{setErasureCodingPolicy}} API 
based on [your previous 
comments|https://issues.apache.org/jira/browse/HDFS-10996?focusedCommentId=15614112=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15614112]
 and, due to its complexity, went with the {{#create}} API instead, right?
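
A tiny sketch of comment 5 above (the variable and its contents are illustrative only; the new {{create()}} overload and its parameter come from the patch under review, not a finalized API):

{code}
// 'getPolicyNameFromCaller()' is a hypothetical source of the requested policy name.
String erasureCodingPolicy = getPolicyNameFromCaller();   // may be null, "", or a policy name

// Instead of: if (erasureCodingPolicy != null && !erasureCodingPolicy.isEmpty()) { ... }
if (org.apache.commons.lang.StringUtils.isBlank(erasureCodingPolicy)) {
  // blank or null: inherit the parent directory's policy (replication or EC)
} else {
  // a policy name was supplied: validate it against the configured EC policies
}
{code}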

> Ability to specify per-file EC policy at create time
> 
>
> Key: HDFS-10996
> URL: https://issues.apache.org/jira/browse/HDFS-10996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
> Attachments: HDFS-10996-v1.patch
>
>
> Based on discussion in HDFS-10971, it would be useful to specify the EC 
> policy when the file is created. This is useful for situations where app 
> requirements do not map nicely to the current directory-level policies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HDFS-8307:
---
Fix Version/s: 2.7.4
   Status: Patch Available  (was: In Progress)

> Spurious DNS Queries from hdfs shell
> 
>
> Key: HDFS-8307
> URL: https://issues.apache.org/jira/browse/HDFS-8307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Andres Perez
>Priority: Trivial
> Fix For: 2.7.4
>
> Attachments: HDFS-8307-branch-2.7.patch
>
>
> With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to 
> issue a DNS query for the cluster name. If fs.defaultFS is set to 
> hdfs://mycluster, then the shell seems to issue a DNS query for 
> mycluster.FQDN or mycluster.
> Since mycluster is not a machine name, the DNS query always fails with 
> "DNS 85 Standard query response 0x2aeb No such name".
> Repro Steps:
> # Set up an HA cluster 
> # Log on to any node
> # Run wireshark monitoring port 53 - "sudo tshark 'port 53'"
> # Run "sudo -u hdfs hdfs dfs -ls /" 
> # You should be able to see DNS queries to mycluster.FQDN in wireshark



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HDFS-8307:
---
Attachment: HDFS-8307-branch-2.7.patch

The patch was tested using wireshark, and there were no DNS requests with 
NXDomain responses.

> Spurious DNS Queries from hdfs shell
> 
>
> Key: HDFS-8307
> URL: https://issues.apache.org/jira/browse/HDFS-8307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Andres Perez
>Priority: Trivial
> Fix For: 2.7.4
>
> Attachments: HDFS-8307-branch-2.7.patch
>
>
> With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to 
> issue a DNS query for the cluster name. If fs.defaultFS is set to 
> hdfs://mycluster, then the shell seems to issue a DNS query for 
> mycluster.FQDN or mycluster.
> Since mycluster is not a machine name, the DNS query always fails with 
> "DNS 85 Standard query response 0x2aeb No such name".
> Repro Steps:
> # Set up an HA cluster 
> # Log on to any node
> # Run wireshark monitoring port 53 - "sudo tshark 'port 53'"
> # Run "sudo -u hdfs hdfs dfs -ls /" 
> # You should be able to see DNS queries to mycluster.FQDN in wireshark



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8307 started by Andres Perez.
--
> Spurious DNS Queries from hdfs shell
> 
>
> Key: HDFS-8307
> URL: https://issues.apache.org/jira/browse/HDFS-8307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Andres Perez
>Priority: Trivial
>
> With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to 
> issue a DNS query for the cluster name. If fs.defaultFS is set to 
> hdfs://mycluster, then the shell seems to issue a DNS query for 
> mycluster.FQDN or mycluster.
> Since mycluster is not a machine name, the DNS query always fails with 
> "DNS 85 Standard query response 0x2aeb No such name".
> Repro Steps:
> # Set up an HA cluster 
> # Log on to any node
> # Run wireshark monitoring port 53 - "sudo tshark 'port 53'"
> # Run "sudo -u hdfs hdfs dfs -ls /" 
> # You should be able to see DNS queries to mycluster.FQDN in wireshark



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649691#comment-15649691
 ] 

Brahma Reddy Battula commented on HDFS-8307:


FYI: only contributors can attach patches, so I added you to the contributors 
list and assigned this issue to you. Thanks for your interest.

> Spurious DNS Queries from hdfs shell
> 
>
> Key: HDFS-8307
> URL: https://issues.apache.org/jira/browse/HDFS-8307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Andres Perez
>Priority: Trivial
>
> With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to 
> issue a DNS query for the cluster name. If fs.defaultFS is set to 
> hdfs://mycluster, then the shell seems to issue a DNS query for 
> mycluster.FQDN or mycluster.
> Since mycluster is not a machine name, the DNS query always fails with 
> "DNS 85 Standard query response 0x2aeb No such name".
> Repro Steps:
> # Set up an HA cluster 
> # Log on to any node
> # Run wireshark monitoring port 53 - "sudo tshark 'port 53'"
> # Run "sudo -u hdfs hdfs dfs -ls /" 
> # You should be able to see DNS queries to mycluster.FQDN in wireshark



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8307:
---
Assignee: Andres Perez

> Spurious DNS Queries from hdfs shell
> 
>
> Key: HDFS-8307
> URL: https://issues.apache.org/jira/browse/HDFS-8307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Andres Perez
>Priority: Trivial
>
> With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to 
> issue a DNS query for the cluster name. If fs.defaultFS is set to 
> hdfs://mycluster, then the shell seems to issue a DNS query for 
> mycluster.FQDN or mycluster.
> Since mycluster is not a machine name, the DNS query always fails with 
> "DNS 85 Standard query response 0x2aeb No such name".
> Repro Steps:
> # Set up an HA cluster 
> # Log on to any node
> # Run wireshark monitoring port 53 - "sudo tshark 'port 53'"
> # Run "sudo -u hdfs hdfs dfs -ls /" 
> # You should be able to see DNS queries to mycluster.FQDN in wireshark



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649689#comment-15649689
 ] 

Hadoop QA commented on HDFS-10885:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-10285 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
983 unchanged - 2 fixed = 984 total (was 985) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838106/HDFS-10885-HDFS-10285.07.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 18c527fb9c35 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 3adef4f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (HDFS-8307) Spurious DNS Queries from hdfs shell

2016-11-08 Thread Andres Perez (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649680#comment-15649680
 ] 

Andres Perez commented on HDFS-8307:


I have a proposed patch for this bug but I'm unable to submit it. Should I 
create a new sub-task/clone this one?

> Spurious DNS Queries from hdfs shell
> 
>
> Key: HDFS-8307
> URL: https://issues.apache.org/jira/browse/HDFS-8307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Priority: Trivial
>
> With HA configured, the hdfs shell (org.apache.hadoop.fs.FsShell) seems to 
> issue a DNS query for the cluster name. If fs.defaultFS is set to 
> hdfs://mycluster, then the shell seems to issue a DNS query for 
> mycluster.FQDN or mycluster.
> Since mycluster is not a machine name, the DNS query always fails with 
> "DNS 85 Standard query response 0x2aeb No such name".
> Repro Steps:
> # Set up an HA cluster 
> # Log on to any node
> # Run wireshark monitoring port 53 - "sudo tshark 'port 53'"
> # Run "sudo -u hdfs hdfs dfs -ls /" 
> # You should be able to see DNS queries to mycluster.FQDN in wireshark



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2016-11-08 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11121:

Summary: Add assertions to BlockInfo#addStorage to protect from breaking 
reportedBlock-blockGroup mapping  (was: Add assertions to 
{{BlockInfo.addStorage}} to protect from breaking reportedBlock-blockGroup 
mapping)

> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
>
> There are no assertions in {{BlockInfo.addStorage}}. This may allow
> {{BlockInfo}} instances to accept unexpected block reports, leading to serious
> bugs like HDFS-10858.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649667#comment-15649667
 ] 

Hadoop QA commented on HDFS-11116:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
10s{color} | {color:green} root generated 0 new + 691 unchanged - 3 fixed = 691 
total (was 694) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 45s{color} | {color:orange} root: The patch generated 2 new + 65 unchanged - 
1 fixed = 67 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 37s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11116 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838101/HDFS-11116.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2a5490fcbab9 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 62d8c17 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17479/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Created] (HDFS-11121) Add assertions to {{BlockInfo.addStorage}} to protect from breaking reportedBlock-blockGroup mapping

2016-11-08 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-11121:
---

 Summary: Add assertions to {{BlockInfo.addStorage}} to protect 
from breaking reportedBlock-blockGroup mapping
 Key: HDFS-11121
 URL: https://issues.apache.org/jira/browse/HDFS-11121
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
Priority: Critical
 Fix For: 3.0.0-alpha2


There are not any assertions in {{BlockInfo.addStorage}}. This may allow 
{{BlockInfo}} instances to accept strange block reports and result in serious 
bugs, like HDFS-10858.
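
For illustration only, a minimal sketch of the kind of guard the summary describes. This is not the attached patch, and the helper names ({{BlockIdManager#convertToStripedID}}) are assumptions rather than a confirmed API:

{code}
// Hypothetical sketch for BlockInfoStriped#addStorage: verify that the
// reported block really belongs to this block group before accepting the
// reportedBlock-blockGroup mapping.
Preconditions.checkArgument(
    BlockIdManager.convertToStripedID(reportedBlock.getBlockId()) == getBlockId(),
    "Reported block %s does not belong to block group %s", reportedBlock, this);
{code}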



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11120) TestEncryptionZones should waitActive

2016-11-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11120:
--
Status: Patch Available  (was: Open)

> TestEncryptionZones should waitActive
> -
>
> Key: HDFS-11120
> URL: https://issues.apache.org/jira/browse/HDFS-11120
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11120.001.patch
>
>
> Happened to notice this.
> {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. 
> There's also a test case that does a unnecessary waitActive:
> {code}
> cluster.restartNameNode(true);
> cluster.waitActive();
> {code}
> We should fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11120) TestEncryptionZones should waitActive

2016-11-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11120:
--
Attachment: HDFS-11120.001.patch

Patch 001:
* Call waitActive in setup
* Set all timeouts to 120s

> TestEncryptionZones should waitActive
> -
>
> Key: HDFS-11120
> URL: https://issues.apache.org/jira/browse/HDFS-11120
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-11120.001.patch
>
>
> Happened to notice this.
> {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. 
> There's also a test case that does a unnecessary waitActive:
> {code}
> cluster.restartNameNode(true);
> cluster.waitActive();
> {code}
> We should fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649631#comment-15649631
 ] 

Yiqun Lin commented on HDFS-11116:
--

Thanks [~manojg] for the comments. Here the {{FileNotFoundException}} thrown 
from the line {{fsState.resolve(getUriPath(f), true)}} actually means that the 
input path isn't in a mountpoint. If that {{FileNotFoundException}} is thrown, 
we then throw the {{NotInMountpointException}}. This logic is already used in 
{{getDefaultBlockSize(Path)}} and {{getDefaultReplication(Path)}}, so I also 
apply it to the method {{getServerDefaults(Path)}}.
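
For reference, a minimal sketch of that pattern as it would look in {{ViewFileSystem}}, mirroring the existing {{getDefaultBlockSize(Path)}} / {{getDefaultReplication(Path)}} handling (a sketch only, not the exact patch):

{code}
@Override
public FsServerDefaults getServerDefaults(Path f) throws IOException {
  try {
    // Resolve the ViewFs path against the mount table.
    InodeTree.ResolveResult<FileSystem> res =
        fsState.resolve(getUriPath(f), true);
    return res.targetFileSystem.getServerDefaults(res.remainingPath);
  } catch (FileNotFoundException e) {
    // resolve() throws FileNotFoundException when the path is not under
    // any mount point, so surface it as NotInMountpointException instead.
    throw new NotInMountpointException(f, "getServerDefaults");
  }
}
{code}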

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} in place of the 
> deprecated API {{getDefaultBlockSize}}. The same goes for 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649630#comment-15649630
 ] 

Hadoop QA commented on HDFS-11112:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838099/HDFS-11112.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6863068bda1c 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e1c6ef2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17478/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17478/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17478/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: 

[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649608#comment-15649608
 ] 

Hudson commented on HDFS-9482:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10796 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10796/])
HDFS-9482. Replace DatanodeInfo constructors with a builder pattern. (arp: rev 
ed0bebabaaf27cd730f7f8eb002d92c9c7db327d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockWriter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReportBadBlockAction.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientSocketSize.java


> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per  [~arpitagarwal] comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],Replace
>  DatanodeInfo constructors with a builder pattern, 
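
As a rough illustration of the shape of such a builder-based construction (the setter names here are hypothetical placeholders; see the committed {{DatanodeInfo.DatanodeInfoBuilder}} for the actual API):

{code}
// Before: a long positional constructor that is easy to misuse.
// After (schematic): named setters with a final build() call.
DatanodeInfo dn = new DatanodeInfo.DatanodeInfoBuilder()
    .setNodeID(dnId)           // identity: host name, ports, datanode UUID
    .setCapacity(capacity)
    .setDfsUsed(dfsUsed)
    .setRemaining(remaining)
    .setLastUpdate(Time.now())
    .build();
{code}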



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649597#comment-15649597
 ] 

Manoj Govindassamy commented on HDFS-11116:
---

[~linyiqun],

{code}
   @Override
   public FsServerDefaults getServerDefaults(Path f) throws IOException {

+} catch (FileNotFoundException e) {
+  throw new NotInMountpointException(f, "getServerDefaults");
+}
{code}

Semantically, it might not be right to replace FileNotFoundException with 
NotInMountpointException, especially after resolving the file using the 
target file system. Your thoughts, please?

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} in place of the 
> deprecated API {{getDefaultBlockSize}}. The same goes for 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649592#comment-15649592
 ] 

Arpit Agarwal commented on HDFS-9482:
-

If the only difference in the branch-2 patch is the EC changes, then I am +1 for 
that too. Feel free to commit the branch-2 patch, [~brahmareddy].

I posted a comment on HDFS-9371. If we can merge that to branch-2.8 then we 
don't need a separate branch-2.8 patch here.

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per  [~arpitagarwal] comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],Replace
>  DatanodeInfo constructors with a builder pattern, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9371) Code cleanup for DatanodeManager

2016-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649590#comment-15649590
 ] 

Arpit Agarwal commented on HDFS-9371:
-

It looks like this can be committed to branch-2.8 with trivially resolved 
conflicts. [~jingzhao], do you recall if there is any potential issue merging 
this to branch-2.8 (like implicit dependency)? Thanks!

{code}
<<< HEAD
if (shouldCountVersion(dn)) {
  Integer num = versionCount.get(dn.getSoftwareVersion());
||| parent of e0abb0a... HDFS-9371. Code cleanup for DatanodeManager. 
Contributed by Jing Zhao.
// Check isAlive too because right after removeDatanode(),
// isDatanodeDead() is still true
if(shouldCountVersion(dn)) {
  Integer num = versionCount.get(dn.getSoftwareVersion());
===
// Check isAlive too because right after removeDatanode(),
// isDatanodeDead() is still true
if (shouldCountVersion(dn)) {
  Integer num = datanodesSoftwareVersions.get(dn.getSoftwareVersion());
>>> e0abb0a... HDFS-9371. Code cleanup for DatanodeManager. Contributed by 
>>> Jing Zhao.
{code}

> Code cleanup for DatanodeManager
> 
>
> Key: HDFS-9371
> URL: https://issues.apache.org/jira/browse/HDFS-9371
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-9371.000.patch, HDFS-9371.001.patch, 
> HDFS-9371.002.patch, HDFS-9371.003.patch, HDFS-9371.004.patch
>
>
> Some code cleanup for DatanodeManager. The main changes include:
> # make the synchronization of {{datanodeMap}} and 
> {{datanodesSoftwareVersions}} consistent
> # remove unnecessary lock in {{handleHeartbeat}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9482:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk. This was not an easy patch to review and I am sure not very 
fun to write either. :)

Thanks for contributing this refactoring change Brahma. I will take a look at 
the branch-2 patch later this week.

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per  [~arpitagarwal] comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761],Replace
>  DatanodeInfo constructors with a builder pattern, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5692) viewfs shows resolved path in FileNotFoundException

2016-11-08 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy resolved HDFS-5692.
--
Resolution: Won't Fix

> viewfs shows resolved path in FileNotFoundException
> ---
>
> Key: HDFS-5692
> URL: https://issues.apache.org/jira/browse/HDFS-5692
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.2.0
>Reporter: Keith Turner
>Assignee: Manoj Govindassamy
>
> With the following config, if I call fs.listStatus("/nn1/a/b") when 
> {{/nn1/a/b}} does not exist then ...
> {noformat}
> 
>   
> fs.default.name
> viewfs:///
>   
>   
> fs.viewfs.mounttable.default.link./nn1
> hdfs://host1:9000
>   
>   
> fs.viewfs.mounttable.default.link./nn2
> hdfs://host2:9000
>   
> 
> {noformat}
> I will see an error message like the following.  
> {noformat}
> java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}
> I think it would be useful for ViewFS to wrap the FileNotFoundException from 
> the inner filesystem, giving an error message like the following.  The 
> following error message has the resolved and unresolved paths which is very 
> useful for debugging.
> {noformat}
> java.io.FileNotFoundException: File /nn1/a/b does not exist.
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> Caused by: java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-5692) viewfs shows resolved path in FileNotFoundException

2016-11-08 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649552#comment-15649552
 ] 

Manoj Govindassamy edited comment on HDFS-5692 at 11/9/16 2:17 AM:
---

[~kturner],

This sounds useful. Apps invoking vfs.listStatus() will get to see the VFS 
paths in the exception and not the internal paths. It is also very important to 
throw FileNotFoundException() whenever paths being accessed are not found. 

But, FileNotFoundException unfortunately doesn't accept a throwable. So, 
throwing a nested FileNotFoundException as you suggested is not possible. If 
you know of a way of throwing this nested FileNotFoundException, please let me know.

We can throw an IOException carrying the VFS path details, with a nested 
FileNotFoundException carrying the internal path details. But that would violate 
the contract of the listStatus() API. Unlike the APIs, shell commands report 
errors with VFS paths and not internal paths.

{noformat}
java.io.IOException: File /nn1/a/b does not exist.
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
Caused by: java.io.FileNotFoundException: File /a/b does not exist.
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
{noformat}

In order to retain the right contract of throwing FileNotFoundException for the 
listStatus() API, I am inclined to close this bug as won't fix. Please let me 
know if you think otherwise. I would love to hear if there are ways to nest 
FileNotFoundException.
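
For illustration, a minimal sketch of the constraint and of the alternative discussed above (plain JDK types; {{myFs}}, {{res}} and {{f}} are placeholder names, not code from any patch):

{code}
// java.io.FileNotFoundException only has FileNotFoundException() and
// FileNotFoundException(String), so a cause cannot be passed at construction.

// The alternative above: wrap the inner exception in an IOException that
// carries the ViewFs path, at the cost of no longer throwing
// FileNotFoundException from listStatus().
try {
  return myFs.listStatus(res.remainingPath);   // inner file system call
} catch (FileNotFoundException e) {
  throw new IOException("File " + f + " does not exist.", e);
}
{code}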


was (Author: manojg):
[~kturner],

This sounds useful. Apps invoking vfs.listStatus() will get to see the VFS 
paths in the exception and not the internal paths. It is also very important to 
throw FileNotFoundException() whenever paths being accessed are not. But, 
FileNotFoundException unfortunately doesn't accept a throwable. So, throwing a 
nested FileNotFoundExceptions is not possible. We can throw an IOException 
carrying VFS path details and nested with FileNotFoundException carrying 
internal path details. But that will violate the contract of listStatus() API. 

> viewfs shows resolved path in FileNotFoundException
> ---
>
> Key: HDFS-5692
> URL: https://issues.apache.org/jira/browse/HDFS-5692
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.2.0
>Reporter: Keith Turner
>Assignee: Manoj Govindassamy
>
> With the following config, if I call fs.listStatus("/nn1/a/b") when 
> {{/nn1/a/b}} does not exist then ...
> {noformat}
> 
>   
> fs.default.name
> viewfs:///
>   
>   
> fs.viewfs.mounttable.default.link./nn1
> hdfs://host1:9000
>   
>   
> fs.viewfs.mounttable.default.link./nn2
> hdfs://host2:9000
>   
> 
> {noformat}
> I will see an error message like the following.  
> {noformat}
> java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}
> I think it would be useful for ViewFS to wrap the FileNotFoundException from 
> the inner filesystem, giving an error message like the following.  The 
> following error message has the resolved and unresolved paths which is very 
> useful for debugging.
> {noformat}
> java.io.FileNotFoundException: File /nn1/a/b does not exist.
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> Caused by: java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> 

[jira] [Commented] (HDFS-5692) viewfs shows resolved path in FileNotFoundException

2016-11-08 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649552#comment-15649552
 ] 

Manoj Govindassamy commented on HDFS-5692:
--

[~kturner],

This sounds useful. Apps invoking vfs.listStatus() will get to see the VFS 
paths in the exception and not the internal paths. It is also very important to 
throw FileNotFoundException() whenever the paths being accessed are not found. But, 
FileNotFoundException unfortunately doesn't accept a throwable. So, throwing a 
nested FileNotFoundException is not possible. We can throw an IOException 
carrying the VFS path details, with a nested FileNotFoundException carrying the 
internal path details. But that would violate the contract of the listStatus() API. 

> viewfs shows resolved path in FileNotFoundException
> ---
>
> Key: HDFS-5692
> URL: https://issues.apache.org/jira/browse/HDFS-5692
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.2.0
>Reporter: Keith Turner
>Assignee: Manoj Govindassamy
>
> With the following config, if I call fs.listStatus("/nn1/a/b") when 
> {{/nn1/a/b}} does not exist then ...
> {noformat}
> 
>   
> fs.default.name
> viewfs:///
>   
>   
> fs.viewfs.mounttable.default.link./nn1
> hdfs://host1:9000
>   
>   
> fs.viewfs.mounttable.default.link./nn2
> hdfs://host2:9000
>   
> 
> {noformat}
> I will see an error message like the following.  
> {noformat}
> java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}
> I think it would be useful for ViewFS to wrap the FileNotFoundException from 
> the inner filesystem, giving an error message like the following.  The 
> following error message has the resolved and unresolved paths which is very 
> useful for debugging.
> {noformat}
> java.io.FileNotFoundException: File /nn1/a/b does not exist.
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> Caused by: java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-11-08 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-10885:

Attachment: HDFS-10885-HDFS-10285.07.patch

Fixed the issues reported by Jenkins. The following unit test failures may not be 
caused by this patch and cannot be reproduced in a local environment. Thanks!
{quote}
hadoop.hdfs.server.datanode.TestDataNodeLifeline
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
{quote}

> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier 
> is on
> --
>
> Key: HDFS-10885
> URL: https://issues.apache.org/jira/browse/HDFS-10885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-10800-HDFS-10885-00.patch, 
> HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, 
> HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch, 
> HDFS-10885-HDFS-10285.05.patch, HDFS-10885-HDFS-10285.06.patch, 
> HDFS-10885-HDFS-10285.07.patch
>
>
> These two should not run at the same time, in order to avoid conflicts and 
> fighting with each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-5692) viewfs shows resolved path in FileNotFoundException

2016-11-08 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy reassigned HDFS-5692:


Assignee: Manoj Govindassamy

> viewfs shows resolved path in FileNotFoundException
> ---
>
> Key: HDFS-5692
> URL: https://issues.apache.org/jira/browse/HDFS-5692
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.2.0
>Reporter: Keith Turner
>Assignee: Manoj Govindassamy
>
> With the following config, if I call fs.listStatus("/nn1/a/b") when 
> {{/nn1/a/b}} does not exist then ...
> {noformat}
> 
>   
> fs.default.name
> viewfs:///
>   
>   
> fs.viewfs.mounttable.default.link./nn1
> hdfs://host1:9000
>   
>   
> fs.viewfs.mounttable.default.link./nn2
> hdfs://host2:9000
>   
> 
> {noformat}
> I will see an error message like the following.  
> {noformat}
> java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}
> I think it would be useful for ViewFS to wrap the FileNotFoundException from 
> the inner filesystem, giving an error message like the following.  The 
> following error message has the resolved and unresolved paths which is very 
> useful for debugging.
> {noformat}
> java.io.FileNotFoundException: File /nn1/a/b does not exist.
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> Caused by: java.io.FileNotFoundException: File /a/b does not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:644)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:698)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:222)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.listStatus(ChRootedFileSystem.java:228)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.listStatus(ViewFileSystem.java:366)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649453#comment-15649453
 ] 

Hadoop QA commented on HDFS-11114:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
20s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-11114 |
| GITHUB PR | https://github.com/apache/hadoop/pull/154 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1542552283ba 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 797c7bf |
| Default Java | 1.7.0_111 |
| Multi-JDK versions 

[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649447#comment-15649447
 ] 

Hudson commented on HDFS-11083:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10795 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10795/])
HDFS-11083. Add unit test for DFSAdmin -report command. Contributed by 
(liuml07: rev 62d8c17dfda75a6a6de06aedad2f22699a1cbad6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java


> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch, HDFS-11083.003.patch, HDFS-11083.004.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add a unit test for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11089) Accelerate TestDiskBalancerCommand using static shared MiniDFSCluster

2016-11-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-11089.
--
Resolution: Not A Problem

Thanks [~arpitagarwal] for your suggestion. Closing this JIRA as "Not A 
Problem".

> Accelerate TestDiskBalancerCommand using static shared MiniDFSCluster
> -
>
> Key: HDFS-11089
> URL: https://issues.apache.org/jira/browse/HDFS-11089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> It takes 50+ seconds to run the test suite. Similar to HDFS-11079, static 
> shared MiniDFSCluster will be used to accelerate the run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-11089) Accelerate TestDiskBalancerCommand using static shared MiniDFSCluster

2016-11-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu closed HDFS-11089.


> Accelerate TestDiskBalancerCommand using static shared MiniDFSCluster
> -
>
> Key: HDFS-11089
> URL: https://issues.apache.org/jira/browse/HDFS-11089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> It takes 50+ seconds to run the test suite. Similar to HDFS-11079, static 
> shared MiniDFSCluster will be used to accelerate the run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11083:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} through {{branch-2.8}} branches. I resolved minor 
conflicts when committing. Thanks for your contribution, [~xiaobingo].

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch, HDFS-11083.003.patch, HDFS-11083.004.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add a unit test for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...
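
A rough sketch of how such a test can drive the command (not the committed test; the exact report wording asserted on is an assumption):

{code}
@Test
public void testReportShowsLiveDatanodes() throws Exception {
  Configuration conf = new HdfsConfiguration();
  MiniDFSCluster cluster =
      new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
  ByteArrayOutputStream out = new ByteArrayOutputStream();
  PrintStream oldOut = System.out;
  try {
    cluster.waitActive();
    // Capture the report printed by `hdfs dfsadmin -report`.
    System.setOut(new PrintStream(out));
    new DFSAdmin(conf).run(new String[] {"-report"});
  } finally {
    System.setOut(oldOut);
    cluster.shutdown();
  }
  // Assumed report wording; adjust to the actual DFSAdmin output.
  assertTrue(out.toString().contains("Live datanodes (2)"));
}
{code}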



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Attachment: HDFS-11116.002.patch

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} in place of the 
> deprecated API {{getDefaultBlockSize}}. The same goes for 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Attachment: (was: HDFS-11116.002.patch)

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} in place of the 
> deprecated API {{getDefaultBlockSize}}. The same goes for 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup

2016-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11119:
-
Status: Patch Available  (was: Open)

> Support for parallel checking of StorageLocations on DataNode startup
> -
>
> Key: HDFS-11119
> URL: https://issues.apache.org/jira/browse/HDFS-11119
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
> parallelize checking {{StorageLocation}}s on Datanode startup.
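
A generic sketch of the idea using plain {{java.util.concurrent}} (the real {{AsyncChecker}} API from HDFS-11114 is richer; {{dataDirs}}, {{checkLocation}} and {{checkTimeoutMs}} are placeholder names, not the actual DataNode fields):

{code}
ExecutorService pool = Executors.newFixedThreadPool(dataDirs.size());
List<Future<StorageLocation>> results = new ArrayList<>();
for (final StorageLocation location : dataDirs) {
  results.add(pool.submit(new Callable<StorageLocation>() {
    @Override
    public StorageLocation call() throws Exception {
      checkLocation(location);   // placeholder per-location disk check
      return location;
    }
  }));
}
// Fail startup (or drop the volume) if a check does not finish in time.
for (Future<StorageLocation> result : results) {
  result.get(checkTimeoutMs, TimeUnit.MILLISECONDS);
}
pool.shutdown();
{code}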



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11112:
-
Attachment: HDFS-11112.002.patch

Thanks [~arpitagarwal] for the comments. Attached a new patch to address the 
comments.

> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: https://issues.apache.org/jira/browse/HDFS-11112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
> Attachments: HDFS-11112.001.patch, HDFS-11112.002.patch
>
>
> Journal Nodes should reject the {{format}} RPC request if a storage directory 
> is non-empty. The relevant code is in {{JNStorage#format}}.
> {code}
>   void format(NamespaceInfo nsInfo) throws IOException {
> setStorageInfo(nsInfo);
> ...
> unlockAll();
> sd.clearDirectory();
> writeProperties(sd);
> createPaxosDir();
> analyzeStorage();
> {code}
> This would make the behavior similar to {{namenode -format -nonInteractive}}.
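
A rough sketch of the kind of emptiness guard being proposed (hypothetical; the attached patches may implement the check differently):

{code}
void format(NamespaceInfo nsInfo) throws IOException {
  // Hypothetical guard: refuse to format a journal storage directory that
  // already contains files, mirroring `namenode -format -nonInteractive`.
  File currentDir = sd.getCurrentDir();
  String[] files = currentDir.list();
  if (files != null && files.length > 0) {
    throw new IOException("Can not format " + currentDir
        + ": the directory is not empty.");
  }
  setStorageInfo(nsInfo);
  ...
}
{code}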



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649359#comment-15649359
 ] 

Hadoop QA commented on HDFS-11083:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 
52s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838072/HDFS-11083.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 741218d5fced 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 29e3b34 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17477/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17477/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch, HDFS-11083.003.patch, HDFS-11083.004.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on 

[jira] [Commented] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup

2016-11-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649351#comment-15649351
 ] 

ASF GitHub Bot commented on HDFS-11119:
---

GitHub user arp7 opened a pull request:

https://github.com/apache/hadoop/pull/155

HDFS-11119. Support for parallel checking of StorageLocations on DataNode 
startup

Introduce a StorageLocationChecker class that can parallelize checking 
StorageLocations. It also detects stalled checks and flags such volumes as 
failed. The DataNode will use this class in the next Jira.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arp7/hadoop HDFS-11119

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/155.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #155


commit 3561232c37de5eddde6185cec60933e0c687bbed
Author: Arpit Agarwal 
Date:   2016-11-08T22:41:59Z

HDFS-11119. Support for parallel checking of StorageLocations on DataNode 
startup.




> Support for parallel checking of StorageLocations on DataNode startup
> -
>
> Key: HDFS-11119
> URL: https://issues.apache.org/jira/browse/HDFS-11119
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
> parallelize checking {{StorageLocation}} s on Datanode startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649325#comment-15649325
 ] 

Hadoop QA commented on HDFS-11056:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.datanode.TestFsDatasetCache |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-11056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838052/HDFS-11056.branch-2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2b8b5a8d049c 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / b77239b |
| Default 

[jira] [Assigned] (HDFS-11120) TestEncryptionZones should waitActive

2016-11-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HDFS-11120:
-

Assignee: John Zhuge

> TestEncryptionZones should waitActive
> -
>
> Key: HDFS-11120
> URL: https://issues.apache.org/jira/browse/HDFS-11120
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: John Zhuge
>Priority: Minor
>
> Happened to notice this.
> {{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. 
> There's also a test case that does a unnecessary waitActive:
> {code}
> cluster.restartNameNode(true);
> cluster.waitActive();
> {code}
> We should fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11120) TestEncryptionZones should waitActive

2016-11-08 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-11120:


 Summary: TestEncryptionZones should waitActive
 Key: HDFS-11120
 URL: https://issues.apache.org/jira/browse/HDFS-11120
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.8.0
Reporter: Xiao Chen
Priority: Minor


Happened to notice this.

{{TestEncryptionZones#setup}} didn't {{waitActive}} on the minicluster. There's 
also a test case that does a unnecessary waitActive:
{code}
cluster.restartNameNode(true);
cluster.waitActive();
{code}

We should fix this.
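A minimal sketch of the intended change, assuming the usual MiniDFSCluster setup pattern (variable names are illustrative):

{code}
// In setup(): wait for the minicluster before the tests start.
cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
cluster.waitActive();

// Later: restartNameNode(true) already waits for the cluster to become
// active, so the extra waitActive() call is redundant and can be dropped.
cluster.restartNameNode(true);
{code}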



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup

2016-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11119:
-
Description: The {{AsyncChecker}} support introduced by HDFS-11114 can be 
used to parallelize checking {{StorageLocation}} s on Datanode startup.  (was: 
The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
parallelize checking {{StorageLocation}}s on Datanode startup.)

> Support for parallel checking of StorageLocations on DataNode startup
> -
>
> Key: HDFS-11119
> URL: https://issues.apache.org/jira/browse/HDFS-11119
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
> parallelize checking {{StorageLocation}} s on Datanode startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649157#comment-15649157
 ] 

Arpit Agarwal edited comment on HDFS-11114 at 11/8/16 11:27 PM:


I'd be happy to reattach the patch files. IIUC GitHub integration is officially 
supported so it shouldn't be a new workflow. I removed the attachments as 
Jenkins wasn't doing the right thing when I had patch files and pull request 
both.


was (Author: arpitagarwal):
I'd be happy to reattach the patch files. IIUC Git integration is officially 
supported so it shouldn't be a new workflow. I removed the attachments as 
Jenkins wasn't doing the right thing when I had patch files and pull request 
both.

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649157#comment-15649157
 ] 

Arpit Agarwal commented on HDFS-11114:
--

I'd be happy to reattach the patch files. IIUC Git integration is officially 
supported so it shouldn't be a new workflow. I removed the attachments as 
Jenkins wasn't doing the right thing when I had patch files and pull request 
both.

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649123#comment-15649123
 ] 

Xiaobing Zhou commented on HDFS-11083:
--

Posted v004 to fix the checkstyle issue.
A {{short}} plus 1 is of type {{int}}, so it needs an explicit cast to avoid 
the warning; thanks though.
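For context, a tiny illustration of the numeric promotion involved ({{fs}} and {{file}} stand in for the test's filesystem and path):

{code}
short replFactor = 2;
// "replFactor + 1" is promoted to int, so it cannot be passed where a
// short is expected without an explicit cast:
// fs.setReplication(file, replFactor + 1);          // does not compile
fs.setReplication(file, (short) (replFactor + 1));   // compiles
{code}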

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch, HDFS-11083.003.patch, HDFS-11083.004.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit test for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11083:
-
Attachment: HDFS-11083.004.patch

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch, HDFS-11083.003.patch, HDFS-11083.004.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit test for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649105#comment-15649105
 ] 

Yongjun Zhang commented on HDFS-11114:
--

Thanks [~arpitagarwal]. This seems like a workflow change. It'd be nice to 
stick to the same workflow by attaching patch files, to be consistent. What do 
you think? Thanks.




> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11119) Support for parallel checking of StorageLocations on DataNode startup

2016-11-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-11119:


 Summary: Support for parallel checking of StorageLocations on 
DataNode startup
 Key: HDFS-11119
 URL: https://issues.apache.org/jira/browse/HDFS-11119
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The {{AsyncChecker}} support introduced by HDFS-11114 can be used to 
parallelize checking {{StorageLocation}}s on Datanode startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649004#comment-15649004
 ] 

Mingliang Liu commented on HDFS-11083:
--

{code}
fs.setReplication(file, (short) (replFactor + 1));
{code}
{{replFactor}} is already of type {{short}}, isn't it?

{quote}
It addressed the 5 points though I didn't quite get #4.
{quote}
That was badly formatted; I think you got the idea (using {{replFactor + 1}} 
instead of 2), as in the comment above.

+1

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch, HDFS-11083.003.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit test for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648943#comment-15648943
 ] 

Arpit Agarwal commented on HDFS-11114:
--

Hi [~yzhangal], the changes are in the pull requests. #153 has two patches for 
trunk. #154 has the branch-2 patch.

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648933#comment-15648933
 ] 

Yongjun Zhang commented on HDFS-11114:
--

HI [~arpiagariu],

Thanks for working on this issue. I don't see any patch files attached, and I 
guess they were removed. Was that intentional? It'd be nice to have the patch 
files in the jira. 

Thanks.

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648914#comment-15648914
 ] 

ASF GitHub Bot commented on HDFS-11114:
---

GitHub user arp7 opened a pull request:

https://github.com/apache/hadoop/pull/154

HDFS-11114. Support for running async disk checks in DataNode.

Patch for branch-2 to fix Java 7 compiler errors. The changes were minor - 
added final for variables referenced in anonymous inner classes and an explicit 
type parameter in the call to 
_Futures.immediateFailedFuture(result.exception)_.
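A small illustration of the two kinds of change described above (the types and variable names here are placeholders, not the actual patch):

{code}
// Java 7: variables referenced from an anonymous inner class must be final.
final StorageLocation location = next;                 // illustrative
// Java 7 cannot always infer the generic type argument, so it is supplied
// explicitly (CheckResult is a placeholder for the real result type):
return Futures.<CheckResult>immediateFailedFuture(result.exception);
{code}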

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arp7/hadoop branch-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/154.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #154


commit cec4608d302af06d8ee01d2dc7ef5e453536bbc3
Author: Arpit Agarwal 
Date:   2016-11-08T21:53:07Z

HDFS-11114. Support for running async disk checks in DataNode.

Change-Id: Ifb2850ca4b9f9a60e23d456be0afa3e5ebd04b1b




> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11114:
-
Status: Patch Available  (was: Reopened)

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-11114:
--

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11056:
---
Attachment: HDFS-11056.branch-2.patch

Attaching the branch-2 patch for the precommit check.

> Concurrent append and read operations lead to checksum error
> 
>
> Key: HDFS-11056
> URL: https://issues.apache.org/jira/browse/HDFS-11056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, httpfs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, 
> HDFS-11056.branch-2.patch, HDFS-11056.reproduce.patch
>
>
> If there are two clients, one of them open-append-close a file continuously, 
> while the other open-read-close the same file continuously, the reader 
> eventually gets a checksum error in the data read.
> On my local Mac, it takes a few minutes to produce the error. This happens to 
> httpfs clients, but there's no reason not to believe this happens to any append 
> client.
> I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log:
> {quote}
> 2016-10-25 15:34:45,153 INFO  audit - allowed=true   ugi=weichiu 
> (auth:SIMPLE)   ip=/127.0.0.1   cmd=open   src=/tmp/bar.txt
> dst=null   perm=null   proto=rpc
> 2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
> /127.0.0.1:51130 dest: /127.0.0.1:50131
> 2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
> blk_1073741825_1182, FINALIZED
>   getNumBytes() = 182
>   getBytesOnDisk()  = 182
>   getVisibleLength()= 182
>   getVolume()   = 
> /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
>   getBlockURI() = 
> file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
> 2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
> 2016-10-25 15:34:45,167 WARN  DataNode - 
> DatanodeRegistration(127.0.0.1:50131, 
> datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
> infoSecurePort=0, ipcPort=50134, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
> exception while serving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
> newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
> 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
> processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) success
> 2016-10-25 15:34:45,170 WARN  DFSClient - Found Checksum error for 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 from 
> 

[jira] [Commented] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2016-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648748#comment-15648748
 ] 

Arpit Agarwal commented on HDFS-11112:
--

Thank you for taking this up [~linyiqun]. Couple of nitpicks with the patch:
# This comment can be removed. Since we call unlockAll() and analyzeStorage(), 
it follows that we expect the storage can be non-empty and locked.
{code}
// Unlock the directory since the storage maybe not empty and
// locked before formatting sometimes.
{code}
# We should wrap the _sd.analyzeStorage(StartupOption.FORMAT, this, true);_ 
call in try-finally and release the lock in the finally block, so we don't 
return with the lock held if analyze failed.

This change may be incompatible as there is a good chance of breaking someone's 
test setup. I'd commit it to trunk only, just to be safe.
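A rough sketch of the second point, assuming the directory lock is released via {{StorageDirectory#unlock()}}; whether the release should be unconditional or only on the failure path is left to the actual patch:

{code}
boolean formatAllowed = false;
try {
  sd.analyzeStorage(StartupOption.FORMAT, this, true);
  // ... reject the format request here if the directory is non-empty ...
  formatAllowed = true;
} finally {
  if (!formatAllowed) {
    sd.unlock();   // don't return with the storage directory lock held
  }
}
{code}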

> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: https://issues.apache.org/jira/browse/HDFS-11112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
> Attachments: HDFS-11112.001.patch
>
>
> Journal Nodes should reject the {{format}} RPC request if a storage directory 
> is non-empty. The relevant code is in {{JNStorage#format}}.
> {code}
>   void format(NamespaceInfo nsInfo) throws IOException {
> setStorageInfo(nsInfo);
> ...
> unlockAll();
> sd.clearDirectory();
> writeProperties(sd);
> createPaxosDir();
> analyzeStorage();
> {code}
> This would make the behavior similar to {{namenode -format -nonInteractive}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11114:
-
Fix Version/s: (was: 2.9.0)
   3.0.0-alpha2

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10206) getBlockLocations might not sort datanodes properly by distance

2016-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10206:
-
Assignee: Nandakumar

> getBlockLocations might not sort datanodes properly by distance
> ---
>
> Key: HDFS-10206
> URL: https://issues.apache.org/jira/browse/HDFS-10206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Nandakumar
>
> If the DFSClient machine is not a datanode, but it shares its rack with some 
> datanodes of the HDFS block requested, {{DatanodeManager#sortLocatedBlocks}} 
> might not put the local-rack datanodes at the beginning of the sorted list. 
> That is because the function didn't call {{networktopology.add(client);}} to 
> properly set the node's parent node; something required by 
> {{networktopology.sortByDistance}} to compute distance between two nodes in 
> the same topology tree.
> Another issue with {{networktopology.sortByDistance}} is it only 
> distinguishes local rack from remote rack, but it doesn't support general 
> distance calculation to tell how remote the rack is.
> {noformat}
> NetworkTopology.java
>   protected int getWeight(Node reader, Node node) {
> // 0 is local, 1 is same rack, 2 is off rack
> // Start off by initializing to off rack
> int weight = 2;
> if (reader != null) {
>   if (reader.equals(node)) {
> weight = 0;
>   } else if (isOnSameRack(reader, node)) {
> weight = 1;
>   }
> }
> return weight;
>   }
> {noformat}
> HDFS-10203 has suggested moving the sorting from namenode to DFSClient to 
> address another issue. Regardless of where we do the sorting, we still need 
> to fix the issues outlined here.
> Note that BlockPlacementPolicyDefault shares the same NetworkTopology object 
> used by DatanodeManager and requires Nodes stored in the topology to be 
> {{DatanodeDescriptor}} for block placement. So we need to make sure we don't 
> pollute the  NetworkTopology if we plan to fix it on the server side.
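A rough sketch of the missing step (hedged: {{networktopology.add(client)}} and {{sortByDistance}} come from the description above, while {{clientHost}}, {{networkLocation}}, {{locations}} and {{activeLen}} are illustrative; as the last paragraph notes, the real fix must avoid leaving a non-DatanodeDescriptor node in the shared topology):

{code}
// If the reader is not itself a datanode, register it in the topology so
// sortByDistance can compute its distance to each replica.
Node client = new NodeBase(clientHost, networkLocation);
networktopology.add(client);
networktopology.sortByDistance(client, locations, activeLen);
// The real fix must also clean up / avoid polluting the shared topology.
{code}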



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648524#comment-15648524
 ] 

Arpit Agarwal commented on HDFS-11114:
--

Thanks for the heads up [~kihwal] and [~ste...@apache.org]. I have reverted it 
from branch-2 for now.

I usually compile on branch-2 before pushing but skipped it this time.

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648491#comment-15648491
 ] 

Hadoop QA commented on HDFS-10885:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
47s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-10285 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 9 new + 
983 unchanged - 2 fixed = 992 total (was 985) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.checkIfMoverRunning()
  At StoragePolicySatisfier.java:is not thrown in 
org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.checkIfMoverRunning()
  At StoragePolicySatisfier.java:[line 128] |
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestStoragePolicySatisfyWorker |
|   | hadoop.hdfs.server.namenode.TestStoragePolicySatisfier |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-11-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648478#comment-15648478
 ] 

Andrew Wang commented on HDFS-10899:


bq.  

Makes sense, though at this point I'd prefer to choose a data structure for 
clarity rather than performance. We're likely going to be bottlenecked on KMS 
RPCs rather than this map, so even simple synchronization will probably be okay.

It's also not so bad to just use a Map as a Set; the JDK HashSet, for instance, 
is actually backed by a map:

http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/HashSet.java
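For illustration, two easy ways to get set semantics from the JDK along those lines (the variable names are illustrative, not from the patch):

{code}
// A concurrent Set view backed by a ConcurrentHashMap:
Set<String> zonesBeingReencrypted =
    Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

// Or, since HashSet is itself backed by a HashMap, simple synchronization:
Set<String> zones = Collections.synchronizedSet(new HashSet<String>());
{code}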

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648469#comment-15648469
 ] 

Hadoop QA commented on HDFS-11083:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838011/HDFS-11083.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1cad489ee95e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17473/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17473/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17473/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17473/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add unit test for DFSAdmin -report command
> --
>
>  

[jira] [Updated] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-08 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11103:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~xyao] Thank you for the code reviews. I have committed this to the feature 
branch.

> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-11103-HDFS-7240.001.patch, 
> HDFS-11103-HDFS-7240.002.patch, HDFS-11103-HDFS-7240.003.patch, 
> HDFS-11103-HDFS-7240.004.patch
>
>
> Cleanup some unwanted dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-08 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648302#comment-15648302
 ] 

Jagadesh Kiran N commented on HDFS-9337:


The test failures and checkstyle error are not related to the patch. 
[~vinayrpet], please review.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch, 
> HDFS-9337_17.patch, HDFS-9337_18.patch, HDFS-9337_19.patch, 
> HDFS-9337_20.patch, HDFS-9337_21.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> Null pointer exception will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}
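A minimal sketch of the kind of up-front validation being proposed (the helper and parameter names are illustrative, not the actual WebHDFS code):

{code}
private static void checkRequiredParam(String value, String name) {
  if (value == null || value.isEmpty()) {
    throw new IllegalArgumentException(
        "Required parameter '" + name + "' is missing");
  }
}

// e.g. for op=RENAMESNAPSHOT, validate before touching the snapshot logic:
checkRequiredParam(oldSnapshotName, "oldsnapshotname");
checkRequiredParam(snapshotName, "snapshotname");
{code}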



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-11-08 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-10885:

Attachment: HDFS-10885-HDFS-10285.06.patch

Thanks [~rakeshr] and [~umamaheswararao] for the comments! The patch is updated 
according to the suggestions, but with some differences:
1. An RPC call {{isStoragePolicySatisfierActive()}} is added for the client to 
identify the status of SPS; it returns only a boolean value.
{quote}
After the startup, should do a double check ensuring that there is no lease 
exists. If lease exists, then stop SPS.
{quote}
2. Currently not implemented, as the startup stage is not a time-consuming 
procedure.
3. On the Mover side, the code for steps like connecting to the NN and creating 
the ID file is shared by many modules, so it is inconvenient to modify it to 
add checks specific to the Mover. I therefore only check the status of SPS just 
before the move operation (see the sketch below); I believe this should not 
introduce logic bugs. What's your opinion? Thanks!
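A rough sketch of the check mentioned in point 3, assuming the new {{isStoragePolicySatisfierActive()}} RPC is reachable from the Mover through the client (the exit handling is illustrative):

{code}
// Just before starting block moves, bail out if SPS is active.
if (dfs.getClient().isStoragePolicySatisfierActive()) {
  System.err.println("Mover is not allowed to run while the "
      + "Storage Policy Satisfier is active.");
  return ExitStatus.ILLEGAL_ARGUMENTS;   // illustrative exit status
}
{code}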

> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier 
> is on
> --
>
> Key: HDFS-10885
> URL: https://issues.apache.org/jira/browse/HDFS-10885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-10800-HDFS-10885-00.patch, 
> HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, 
> HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch, 
> HDFS-10885-HDFS-10285.05.patch, HDFS-10885-HDFS-10285.06.patch
>
>
> These two cannot run at the same time, so as to avoid conflicts and fighting 
> with each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648152#comment-15648152
 ] 

Xiaobing Zhou commented on HDFS-11083:
--

Thanks [~liuml07]. I posted patch v003. It addressed the 5 points though I 
didn't quite get #4.

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch, HDFS-11083.003.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit test for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-08 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11083:
-
Attachment: HDFS-11083.003.patch

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch, HDFS-11083.003.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command's 
> functionality. We should add a unit test for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648036#comment-15648036
 ] 

Hadoop QA commented on HDFS-11068:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
19s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 76 unchanged - 1 fixed = 77 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837987/HDFS-11068-HDFS-10285-01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ed4512d2a743 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 3adef4f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17472/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17472/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17472/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17472/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Provide unique trackID to track the block movement sends to 

[jira] [Commented] (HDFS-11115) Remove bytes2Array and string2Bytes

2016-11-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648021#comment-15648021
 ] 

Kihwal Lee commented on HDFS-11115:
---

You are undoing this optimization.
{code}
  // Using the charset canonical name for String/byte[] conversions is much
  // more efficient due to use of cached encoders/decoders.
  private static final String UTF8_CSN = StandardCharsets.UTF_8.name();
{code}
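
As a rough illustration of the point about cached encoders (not a rigorous 
benchmark, and numbers will vary by JDK and workload): the charset-name 
overloads of {{String#getBytes}} and {{new String(byte[], ...)}} can reuse the 
JDK's cached per-thread encoder/decoder, whereas the {{Charset}}-object 
overloads set up a fresh encoder/decoder on each call.
{code}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

public class CharsetConversionDemo {
  public static void main(String[] args) throws UnsupportedEncodingException {
    final String sample = "/user/example/some/reasonably/long/path";
    final int iterations = 5_000_000;

    long t0 = System.nanoTime();
    for (int i = 0; i < iterations; i++) {
      sample.getBytes(StandardCharsets.UTF_8.name());   // charset-name path
    }
    long byNameNs = System.nanoTime() - t0;

    long t1 = System.nanoTime();
    for (int i = 0; i < iterations; i++) {
      sample.getBytes(StandardCharsets.UTF_8);           // Charset-object path
    }
    long byCharsetNs = System.nanoTime() - t1;

    System.out.printf("by name: %d ms, by Charset: %d ms%n",
        byNameNs / 1_000_000, byCharsetNs / 1_000_000);
  }
}
{code}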

> Remove bytes2Array and string2Bytes
> ---
>
> Key: HDFS-11115
> URL: https://issues.apache.org/jira/browse/HDFS-11115
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Sahil Kang
>Priority: Minor
>
> In DFSUtilClient.java we have something like:
> {code: language=java}
> public static byte[] string2Bytes(String str) {
>   try {
> return str.getBytes("UTF-8");
>   } catch (UnsupportedEncodingException e) {
> throw new IllegalArgumentException("UTF8 decoding is not supported", e);
>   }
> }
> static String bytes2String(byte[] bytes, int offset, int length) {
>   try {
> return new String(bytes, offset, length, "UTF-8");
>   } catch (UnsupportedEncodingException e) {
> throw new IllegalArgumentException("UTF8 encoding is not supported", e);
>   }
> }
> {code}
> Using StandardCharsets, these methods become trivial:
> {code: language=java}
> public static byte[] string2Bytes(String str) {
>   return str.getBytes(StandardCharsets.UTF_8);
> }
> static String bytes2String(byte[] bytes, int offset, int length) {
>   return new String(bytes, offset, length, StandardCharsets.UTF_8);
> }
> {code}
> I think we should remove these methods and use StandardCharsets whenever we 
> need to convert between bytes and strings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs

2016-11-08 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647981#comment-15647981
 ] 

Rakesh R commented on HDFS-11029:
-

Thank you [~umamaheswararao] for the important work. I have a few comments on 
the patch, please take care of them.

# {{storage movement}} -> could you rephrase this as {{block storage movement}}?
# In the BlockStorageMovementAttemptedItems constructor, it would be good to 
add a log message with the configured {{checkTimeout}} and {{selfRetryTimeout}} 
values, for debugging purposes.
# Please make the class StorageMovementAttemptResultMonitor {{private}}.
# Can we move the following to the constructor rather than resolving it 
multiple times?
{code}
long period = Math.min(DEFAULT_RECHECK_INTERVAL, checkTimeout);
{code}
# Can we add a debug log message after the removal, to record the status of 
the block movement completion?
{code}
+  if (!exist) {
+blockStorageMovementNeeded.add(blockCollectionID);
+  }
+  iter.remove();
{code}
# Just a suggestion: add a few Java comments describing this data structure.
{code}
  private final Map storageMovementAttemptedItems;
{code}
# Please name the {{timerThread}} (a small sketch of this and the constructor 
logging is given below).
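
A hedged sketch of those two suggestions (constructor logging and a named 
timer thread); the class, field, and thread names here are made up for the 
example and need not match the patch.
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class AttemptedItemsMonitorSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(AttemptedItemsMonitorSketch.class);

  private final long checkTimeout;
  private final long selfRetryTimeout;
  private final Thread timerThread;

  AttemptedItemsMonitorSketch(long checkTimeout, long selfRetryTimeout,
      Runnable monitorTask) {
    this.checkTimeout = checkTimeout;
    this.selfRetryTimeout = selfRetryTimeout;
    // Log the configured timeouts once, so they are easy to find when debugging.
    LOG.info("Block storage movement monitor configured with checkTimeout={}ms,"
        + " selfRetryTimeout={}ms", checkTimeout, selfRetryTimeout);
    // A descriptive thread name makes the monitor easy to spot in jstack output.
    this.timerThread = new Thread(monitorTask,
        "BlockStorageMovementAttemptedItemsMonitor");
    this.timerThread.setDaemon(true);
  }

  void start() {
    timerThread.start();
  }
}
{code}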

> [SPS]:Provide retry mechanism for the blocks which were failed while moving 
> its storage at DNs
> --
>
> Key: HDFS-11029
> URL: https://issues.apache.org/jira/browse/HDFS-11029
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-11029-HDFS-10285-00.patch
>
>
> When the DN co-ordinator finds that some of the blocks associated with a 
> trackedID could not be moved to their target storages due to errors, a retry 
> may work in some cases; for example, if the target node has no space, 
> retrying with another target can work.
> So, based on the movement result flag (SUCCESS/FAILURE) from the DN 
> co-ordinator, the NN would retry by scanning the blocks again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647937#comment-15647937
 ] 

Kihwal Lee commented on HDFS-11114:
---

It broke branch-2. [~steve_l] also noticed it.

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-11114
> URL: https://issues.apache.org/jira/browse/HDFS-11114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647919#comment-15647919
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 205 unchanged - 0 fixed = 206 total (was 205) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837979/HDFS-9337_21.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 35002ad56ef8 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 026b39a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17470/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17470/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17470/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17470/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-9081) False-positive ACK slow log in DFSClient

2016-11-08 Thread static-max (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647839#comment-15647839
 ] 

static-max commented on HDFS-9081:
--

I also get this warning in my client logs, but not on any NameNode or DataNode, 
so I think this is a rather annoying bug. I get the warning from a client 
application (Apache Flink) with very low throughput (1 MB / minute), while all 
my other Hadoop applications work without any problem.

Is any work planned to fix this?
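
If this is biting in practice, one work-around (not a fix for the underlying 
false positive) is to raise the client-side threshold that drives this warning. 
The sketch below assumes the key name {{dfs.client.slow.io.warning.threshold.ms}} 
used by recent 2.x clients; please verify it against your Hadoop version.
{code}
import org.apache.hadoop.conf.Configuration;

public class SlowAckThresholdExample {
  /** Returns a client Configuration with a higher slow-ACK warning threshold. */
  public static Configuration quietClientConf() {
    Configuration conf = new Configuration();
    // Raise the threshold so long application-side pauses between writes
    // no longer trip the "Slow ReadProcessor" warning.
    conf.setLong("dfs.client.slow.io.warning.threshold.ms", 120000L);
    return conf;
  }
}
{code}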

> False-positive ACK slow log in DFSClient
> 
>
> Key: HDFS-9081
> URL: https://issues.apache.org/jira/browse/HDFS-9081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
>Priority: Minor
>
> This issue is related with code below:
> {noformat}
> if (duration > dfsclientSlowLogThresholdMs
> && ack.getSeqno() != Packet.HEART_BEAT_SEQNO) {
>   DFSClient.LOG
>   .warn("Slow ReadProcessor read fields took " + duration
>   + "ms (threshold=" + dfsclientSlowLogThresholdMs + "ms); ack: "
>   + ack + ", targets: " + Arrays.asList(targets));
> } else if (DFSClient.LOG.isDebugEnabled()) {
>   DFSClient.LOG.debug("DFSClient " + ack);
> }
> {noformat}
> DFSClient prints a slow log when an ack has been awaited for an unexpectedly 
> long time (usually 3 ms). This is a good indicator of a network or I/O 
> performance issue.
> However, there is a scenario where this slow log is a false positive, e.g. a 
> reducer: (StageA) it iterates over records with an identical key, which takes 
> an arbitrary amount of time but generates no output; (StageB) it then outputs 
> an arbitrary number of records when it meets a different key.
> If one StageA lasts more than 3 ms (as in the example above), one or more 
> slow logs will be generated that are not related to any HDFS performance 
> issue.
> In general, users should not be shown these, as they could be misled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647832#comment-15647832
 ] 

Hadoop QA commented on HDFS-11112:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 53m 
29s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837981/HDFS-11112.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a61d24f7e610 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 026b39a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17471/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17471/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: https://issues.apache.org/jira/browse/HDFS-11112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
> Attachments: HDFS-11112.001.patch
>
>
> Journal Nodes should reject the {{format}} RPC request if a storage directory 
> is non-empty. The relevant code is in {{JNStorage#format}}.
> {code}
>   void format(NamespaceInfo nsInfo) throws IOException {
> 

[jira] [Updated] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11068:

Attachment: HDFS-11068-HDFS-10285-01.patch

> [SPS]: Provide unique trackID to track the block movement sends to coordinator
> --
>
> Key: HDFS-11068
> URL: https://issues.apache.org/jira/browse/HDFS-11068
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11068-HDFS-10285-01.patch, 
> HDFS-11068-HDFS-10285.patch
>
>
> Presently DatanodeManager uses constant  value -1 as 
> [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607],
>  which is a temporary value. As per discussion with [~umamaheswararao], one 
> proposal is to use {{BlockCollectionId/InodeFileId}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11112:
-
Status: Patch Available  (was: Open)

Attached an initial patch. Please have a review. Thanks!
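
For context, an illustrative-only sketch of the kind of guard being discussed 
(the real change belongs in {{JNStorage#format}}; the helper name below is 
hypothetical): refuse to format a journal storage directory that already 
contains files.
{code}
import java.io.File;
import java.io.IOException;

public class JournalFormatGuard {

  /** Throws if the storage directory cannot be listed or is not empty. */
  static void checkFormattable(File storageDir) throws IOException {
    File[] contents = storageDir.listFiles();
    if (contents == null) {
      throw new IOException("Could not list contents of " + storageDir);
    }
    if (contents.length > 0) {
      throw new IOException("Storage directory " + storageDir
          + " is not empty; refusing to format. Clear it manually first.");
    }
  }
}
{code}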

> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: https://issues.apache.org/jira/browse/HDFS-11112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
> Attachments: HDFS-11112.001.patch
>
>
> Journal Nodes should reject the {{format}} RPC request if a storage directory 
> is non-empty. The relevant code is in {{JNStorage#format}}.
> {code}
>   void format(NamespaceInfo nsInfo) throws IOException {
> setStorageInfo(nsInfo);
> ...
> unlockAll();
> sd.clearDirectory();
> writeProperties(sd);
> createPaxosDir();
> analyzeStorage();
> {code}
> This would make the behavior similar to {{namenode -format -nonInteractive}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11112:
-
Attachment: HDFS-11112.001.patch

> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: https://issues.apache.org/jira/browse/HDFS-11112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
> Attachments: HDFS-11112.001.patch
>
>
> Journal Nodes should reject the {{format}} RPC request if a storage directory 
> is non-empty. The relevant code is in {{JNStorage#format}}.
> {code}
>   void format(NamespaceInfo nsInfo) throws IOException {
> setStorageInfo(nsInfo);
> ...
> unlockAll();
> sd.clearDirectory();
> writeProperties(sd);
> createPaxosDir();
> analyzeStorage();
> {code}
> This would make the behavior similar to {{namenode -format -nonInteractive}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-08 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_21.patch

Removed all delegation token tests, please review.
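
As an illustration of the kind of validation being added (the helper and call 
site below are hypothetical, not the actual WebHDFS code): fail fast with a 
clear message when a required query parameter is missing, instead of letting it 
surface later as a NullPointerException.
{code}
public final class RequiredParamCheck {

  private RequiredParamCheck() {
  }

  /** Throws IllegalArgumentException if a required parameter is absent. */
  public static String checkRequired(String value, String paramName, String op) {
    if (value == null || value.isEmpty()) {
      throw new IllegalArgumentException(
          "Required parameter '" + paramName + "' is missing for op=" + op);
    }
    return value;
  }

  // Example call site for the RENAMESNAPSHOT case from the description:
  // String oldName =
  //     checkRequired(oldSnapshotName, "oldsnapshotname", "RENAMESNAPSHOT");
}
{code}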

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch, 
> HDFS-9337_17.patch, HDFS-9337_18.patch, HDFS-9337_19.patch, 
> HDFS-9337_20.patch, HDFS-9337_21.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A null pointer exception will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Attachment: HDFS-11116.002.patch

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize()}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem, so as to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Attachment: (was: HDFS-11116.002.patch)

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize()}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem, so as to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11073) FileContext.makeQualified add default port for relavtive path with default fs as ha format

2016-11-08 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-11073:

Attachment: HDFS-11073.001.patch

> FileContext.makeQualified add default port  for relavtive path with default 
> fs as ha format
> ---
>
> Key: HDFS-11073
> URL: https://issues.apache.org/jira/browse/HDFS-11073
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: DENG FEI
> Attachments: HDFS-11073.001.patch
>
>
> JobHistoryUtils#getPreviousJobHistoryPath uses FileContext#makeQualified, 
> but the history staging dir is normally relative, so the path gets the 
> default port added if "fs.defaultFS" is in HA format. This conflicts with 
> FileSystem#checkPath.
> {code}
>   Configuration aConf = new Configuration();
>   aConf.set("fs.defaultFS", "hdfs://mycluster/");
>   System.out.println(FileContext.getFileContext(aConf).makeQualified(new 
> Path("/test")));
> {code}
> print hdfs://mycluster:8020/test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-08 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Attachment: HDFS-11116.002.patch

It seems the v001 patch still failed. Posting the v002 patch as a quick fix.
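
For reference, a minimal sketch of the substitution described in the quoted 
description below: call the Path-taking FileSystem overloads instead of the 
deprecated no-argument ones. The path used here is only an example; in the test 
it would be a path outside any ViewFs mount point so that 
{{NotInMountpointException}} is triggered.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DefaultValueLookupExample {
  public static void printDefaults(Configuration conf) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    Path probe = new Path("/probe");  // example path only

    long blockSize = fs.getDefaultBlockSize(probe);       // replaces getDefaultBlockSize()
    short replication = fs.getDefaultReplication(probe);  // replaces getDefaultReplication()
    System.out.println("blockSize=" + blockSize
        + ", replication=" + replication
        + ", serverDefaults=" + fs.getServerDefaults(probe));  // replaces getServerDefaults()
  }
}
{code}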

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch, HDFS-11116.002.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each 
> Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize()}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem, so as to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647460#comment-15647460
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 267 unchanged - 6 fixed = 270 total (was 273) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837952/HDFS-9337_20.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 813ba0cf722e 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 026b39a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17468/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17468/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17468/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17468/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: 

[jira] [Commented] (HDFS-11073) FileContext.makeQualified add default port for relavtive path with default fs as ha format

2016-11-08 Thread DENG FEI (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647463#comment-15647463
 ] 

DENG FEI commented on HDFS-11073:
-

[~brahmareddy], 
[HADOOP-12053|https://issues.apache.org/jira/browse/HADOOP-12053] met the same 
issue, but it does not work for this case.
I think the "default port" mechanism does not fit HA mode for HDFS; normally a 
cluster will choose HA.
I have uploaded a workaround patch.
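
To make the behaviour easy to reproduce, here is a hedged sketch of the check 
written as a small program (essentially the snippet from the description below, 
with the expected and actual results spelled out); it is illustrative only, not 
part of the attached patch.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class MakeQualifiedHaDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://mycluster/");

    Path qualified =
        FileContext.getFileContext(conf).makeQualified(new Path("/test"));
    // Expected: hdfs://mycluster/test (no port for an HA logical URI).
    // On the affected versions this prints hdfs://mycluster:8020/test instead.
    System.out.println("actual: " + qualified);
  }
}
{code}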

> FileContext.makeQualified add default port  for relavtive path with default 
> fs as ha format
> ---
>
> Key: HDFS-11073
> URL: https://issues.apache.org/jira/browse/HDFS-11073
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: DENG FEI
>
> JobHistoryUtils#getPreviousJobHistoryPath uses FileContext#makeQualified, 
> but the history staging dir is normally relative, so the path gets the 
> default port added if "fs.defaultFS" is in HA format. This conflicts with 
> FileSystem#checkPath.
> {code}
>   Configuration aConf = new Configuration();
>   aConf.set("fs.defaultFS", "hdfs://mycluster/");
>   System.out.println(FileContext.getFileContext(aConf).makeQualified(new 
> Path("/test")));
> {code}
> print hdfs://mycluster:8020/test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-08 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_20.patch

The checkstyle issues and test failures are fixed, please review.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch, 
> HDFS-9337_17.patch, HDFS-9337_18.patch, HDFS-9337_19.patch, HDFS-9337_20.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A null pointer exception will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org