[jira] [Created] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11116:


 Summary: Fix Jenkins warnings caused by deprecation APIs in 
TestViewFsDefaultValue
 Key: HDFS-11116
 URL: https://issues.apache.org/jira/browse/HDFS-11116
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha1
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Description: 
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
 [deprecation]
{code}

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Description: 
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
 [deprecation]
{code}

We should use the method {{getDefaultBlockSize(Path)}} to replace the deprecated API {{getDefaultBlockSize}}. The same applies to {{getDefaultReplication}} and {{getServerDefaults}}.
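
A minimal before/after sketch of the intended change; the filesystem handle and the path below are illustrative placeholders, not the test's actual fixtures:

{code}
// Before: deprecated no-arg variants on FileSystem
long blockSize = fileSystem.getDefaultBlockSize();
short replication = fileSystem.getDefaultReplication();
FsServerDefaults serverDefaults = fileSystem.getServerDefaults();

// After: Path-based variants, which a ViewFs file system can resolve
// against its mount table
Path testFilePath = new Path("/data/test-file");  // hypothetical mounted path
long blockSizeNew = fileSystem.getDefaultBlockSize(testFilePath);
short replicationNew = fileSystem.getDefaultReplication(testFilePath);
FsServerDefaults serverDefaultsNew = fileSystem.getServerDefaults(testFilePath);
{code}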

  was:
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
 [deprecation]
{code}


> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation]
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Description: 
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
 [deprecation]
{code}

We should use the method {{getDefaultBlockSize(Path)}} to replace the deprecated API {{getDefaultBlockSize}}. The same applies to {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a not-mounted path in the filesystem to trigger the {{NotInMountException}}.

  was:
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
 [deprecation]
{code}

We should use the method {{getDefaultBlockSize(Path)}} to replace the deprecated API {{getDefaultBlockSize}}. The same applies to {{getDefaultReplication}} and {{getServerDefaults}}.


> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation]
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-mounted path in the filesystem to trigger the {{NotInMountException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Description: 
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
 [deprecation]
{code}

We should use the method {{getDefaultBlockSize(Path)}} to replace the deprecated API {{getDefaultBlockSize}}. The same applies to {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a not-mounted path in the filesystem to trigger the {{NotInMountException}}.

  was:
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
 [deprecation]
{code}

We should use the method {{getDefaultBlockSize(Path)}} to replace the deprecated API {{getDefaultBlockSize}}. The same applies to {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a not-mounted path in the filesystem to trigger the {{NotInMountException}}.


> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
>  [deprecation]
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-mounted path in the filesystem to trigger the {{NotInMountException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643673#comment-15643673
 ] 

Hadoop QA commented on HDFS-9668:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 50s{color} | {color:orange} root: The patch generated 1 new + 1006 unchanged 
- 14 fixed = 1007 total (was 1020) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 76m 
19s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-9668 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837702/HDFS-9668-23.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 005a4ef1f40e 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ca33bdd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17447/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17447/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/

[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-11-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643727#comment-15643727
 ] 

Rakesh R commented on HDFS-10885:
-

Thanks [~drankye] and [~zhouwei] for the useful thoughts. Please let me know your feedback on the points below.

bq. It looks to me more natural that we have StoragePolicySatisfier as a first 
citizen class (not just under BM) under NameNode since it needs to access both 
BM and the name system. I'm thinking that in future we may extend SPS to do 
more NN wide things.
Yes, presently the layering is causing difficulties in handling simultaneous actions between the Mover tool and SPS. The other day [~umamaheswararao] and I had an internal discussion about the same topic along similar lines. Probably we could move SPS to the NN and add the FSNamesystem reference to access its APIs; that will reduce a lot of complexity.

bq. I'm not sure it's a good idea to create the MOVER_ID file here in SPS, 
because it may be the duty of mover to create the flag file. And also, not sure 
how easy it is to do the creation in NN without inviting race or tricky 
condition.
A few ideas came up in the discussion to handle the race condition. Below is the draft idea that we ([~umamaheswararao] and I) discussed. Uma, please feel free to update if I missed anything. Thanks!

*SPS:*
We should introduce a mechanism to get the running status of SPS so that Mover can use it to understand the status. It could be a new RPC call.

During SPS startup, it should do the following checks:
1) Check the lease on MOVER_ID (INodesInPath); if it does not exist, then continue with SPS startup.

2) After startup, do a double check ensuring that no lease exists. If a lease exists, then stop SPS.

*Mover:*
During Mover startup, it should do the following checks:
1) Ensure SPS is not running by using the new RPC call or function. If SPS is not running, then continue with the Mover run by creating the MOVER_ID.

2) After MOVER_ID creation, do a double check ensuring that SPS is not running. If SPS is running, then make Mover exit by deleting the MOVER_ID.
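
A rough Java-like sketch of the double-check handshake described above; all method, field, and RPC names here are illustrative placeholders, not actual HDFS-10285 branch APIs:

{code}
// SPS side (hypothetical sketch)
boolean startSps() throws IOException {
  if (namesystem.leaseExistsOn(MOVER_ID_PATH)) {
    return false;                       // Mover already holds the flag file
  }
  startSatisfierThreads();
  if (namesystem.leaseExistsOn(MOVER_ID_PATH)) {
    stopSatisfierThreads();             // double check: Mover won the race
    return false;
  }
  return true;
}

// Mover side (hypothetical sketch)
boolean startMover() throws IOException {
  if (namenodeRpc.isSpsRunning()) {     // the new RPC call proposed above
    return false;
  }
  createMoverIdFile(MOVER_ID_PATH);
  if (namenodeRpc.isSpsRunning()) {
    deleteMoverIdFile(MOVER_ID_PATH);   // double check: SPS won the race
    return false;
  }
  return true;
}
{code}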

> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier 
> is on
> --
>
> Key: HDFS-10885
> URL: https://issues.apache.org/jira/browse/HDFS-10885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-10800-HDFS-10885-00.patch, 
> HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, 
> HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch, 
> HDFS-10885-HDFS-10285.05.patch
>
>
> These two cannot run at the same time, to avoid conflicts and fighting with 
> each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643888#comment-15643888
 ] 

Hadoop QA commented on HDFS-8693:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-8693 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-8693 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796035/HDFS-8693.1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17450/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> refreshNamenodes does not support adding a new standby to a running DN
> --
>
> Key: HDFS-8693
> URL: https://issues.apache.org/jira/browse/HDFS-8693
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ha
>Affects Versions: 2.6.0
>Reporter: Jian Fang
>Assignee: Ajith S
>Priority: Critical
> Attachments: HDFS-8693.1.patch
>
>
> I tried to run the following command on a Hadoop 2.6.0 cluster with HA 
> support:
> $ hdfs dfsadmin -refreshNamenodes datanode-host:port
> to refresh name nodes on data nodes after I replaced one name node with a new 
> one, so that I don't need to restart the data nodes. However, I got the 
> following error:
> refreshNamenodes: HA does not currently support adding a new standby to a 
> running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.
> I checked the 2.6.0 code and the error was thrown by the following code 
> snippet, which led me to this JIRA.
> void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
>   Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
>   for (BPServiceActor actor : bpServices) {
>     oldAddrs.add(actor.getNNSocketAddress());
>   }
>   Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
>   if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
>     // Keep things simple for now -- we can implement this at a later date.
>     throw new IOException(
>         "HA does not currently support adding a new standby to a running DN. "
>         + "Please do a rolling restart of DNs to reconfigure the list of NNs.");
>   }
> }
> Looks like the refreshNamenodes command is an incomplete feature. 
> Unfortunately, the new name node on a replacement is critical for 
> auto-provisioning a Hadoop cluster with HDFS HA support. Without this support, 
> the HA feature cannot really be used. I also observed that the new standby 
> name node on the replacement instance could get stuck in safe mode because no 
> data nodes check in with it. Even with a rolling restart, it may take quite 
> some time to restart all data nodes if we have a big cluster, for example, 
> with 4000 data nodes, not to mention that restarting DNs is way too intrusive 
> and not a preferable operation in production. It also increases the chance of 
> a double failure because the standby name node is not really ready for a 
> failover in case the current active name node fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2016-11-07 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-8693:
--
Status: Open  (was: Patch Available)

> refreshNamenodes does not support adding a new standby to a running DN
> --
>
> Key: HDFS-8693
> URL: https://issues.apache.org/jira/browse/HDFS-8693
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ha
>Affects Versions: 2.6.0
>Reporter: Jian Fang
>Assignee: Ajith S
>Priority: Critical
> Attachments: HDFS-8693.1.patch
>
>
> I tried to run the following command on a Hadoop 2.6.0 cluster with HA 
> support:
> $ hdfs dfsadmin -refreshNamenodes datanode-host:port
> to refresh name nodes on data nodes after I replaced one name node with a new 
> one, so that I don't need to restart the data nodes. However, I got the 
> following error:
> refreshNamenodes: HA does not currently support adding a new standby to a 
> running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.
> I checked the 2.6.0 code and the error was thrown by the following code 
> snippet, which led me to this JIRA.
> void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
>   Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
>   for (BPServiceActor actor : bpServices) {
>     oldAddrs.add(actor.getNNSocketAddress());
>   }
>   Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
>   if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
>     // Keep things simple for now -- we can implement this at a later date.
>     throw new IOException(
>         "HA does not currently support adding a new standby to a running DN. "
>         + "Please do a rolling restart of DNs to reconfigure the list of NNs.");
>   }
> }
> Looks like the refreshNamenodes command is an incomplete feature. 
> Unfortunately, the new name node on a replacement is critical for 
> auto-provisioning a Hadoop cluster with HDFS HA support. Without this support, 
> the HA feature cannot really be used. I also observed that the new standby 
> name node on the replacement instance could get stuck in safe mode because no 
> data nodes check in with it. Even with a rolling restart, it may take quite 
> some time to restart all data nodes if we have a big cluster, for example, 
> with 4000 data nodes, not to mention that restarting DNs is way too intrusive 
> and not a preferable operation in production. It also increases the chance of 
> a double failure because the standby name node is not really ready for a 
> failover in case the current active name node fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2016-11-07 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-8693:
--
Attachment: HDFS-8693.02.patch

> refreshNamenodes does not support adding a new standby to a running DN
> --
>
> Key: HDFS-8693
> URL: https://issues.apache.org/jira/browse/HDFS-8693
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ha
>Affects Versions: 2.6.0
>Reporter: Jian Fang
>Assignee: Ajith S
>Priority: Critical
> Attachments: HDFS-8693.02.patch, HDFS-8693.1.patch
>
>
> I tried to run the following command on a Hadoop 2.6.0 cluster with HA 
> support:
> $ hdfs dfsadmin -refreshNamenodes datanode-host:port
> to refresh name nodes on data nodes after I replaced one name node with a new 
> one, so that I don't need to restart the data nodes. However, I got the 
> following error:
> refreshNamenodes: HA does not currently support adding a new standby to a 
> running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.
> I checked the 2.6.0 code and the error was thrown by the following code 
> snippet, which led me to this JIRA.
> void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
>   Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
>   for (BPServiceActor actor : bpServices) {
>     oldAddrs.add(actor.getNNSocketAddress());
>   }
>   Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
>   if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
>     // Keep things simple for now -- we can implement this at a later date.
>     throw new IOException(
>         "HA does not currently support adding a new standby to a running DN. "
>         + "Please do a rolling restart of DNs to reconfigure the list of NNs.");
>   }
> }
> Looks like the refreshNamenodes command is an incomplete feature. 
> Unfortunately, the new name node on a replacement is critical for 
> auto-provisioning a Hadoop cluster with HDFS HA support. Without this support, 
> the HA feature cannot really be used. I also observed that the new standby 
> name node on the replacement instance could get stuck in safe mode because no 
> data nodes check in with it. Even with a rolling restart, it may take quite 
> some time to restart all data nodes if we have a big cluster, for example, 
> with 4000 data nodes, not to mention that restarting DNs is way too intrusive 
> and not a preferable operation in production. It also increases the chance of 
> a double failure because the standby name node is not really ready for a 
> failover in case the current active name node fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2016-11-07 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-8693:
--
Status: Patch Available  (was: Open)

> refreshNamenodes does not support adding a new standby to a running DN
> --
>
> Key: HDFS-8693
> URL: https://issues.apache.org/jira/browse/HDFS-8693
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ha
>Affects Versions: 2.6.0
>Reporter: Jian Fang
>Assignee: Ajith S
>Priority: Critical
> Attachments: HDFS-8693.02.patch, HDFS-8693.1.patch
>
>
> I tried to run the following command on a Hadoop 2.6.0 cluster with HA 
> support:
> $ hdfs dfsadmin -refreshNamenodes datanode-host:port
> to refresh name nodes on data nodes after I replaced one name node with a new 
> one, so that I don't need to restart the data nodes. However, I got the 
> following error:
> refreshNamenodes: HA does not currently support adding a new standby to a 
> running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.
> I checked the 2.6.0 code and the error was thrown by the following code 
> snippet, which led me to this JIRA.
> void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
>   Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
>   for (BPServiceActor actor : bpServices) {
>     oldAddrs.add(actor.getNNSocketAddress());
>   }
>   Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
>   if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
>     // Keep things simple for now -- we can implement this at a later date.
>     throw new IOException(
>         "HA does not currently support adding a new standby to a running DN. "
>         + "Please do a rolling restart of DNs to reconfigure the list of NNs.");
>   }
> }
> Looks like the refreshNamenodes command is an incomplete feature. 
> Unfortunately, the new name node on a replacement is critical for 
> auto-provisioning a Hadoop cluster with HDFS HA support. Without this support, 
> the HA feature cannot really be used. I also observed that the new standby 
> name node on the replacement instance could get stuck in safe mode because no 
> data nodes check in with it. Even with a rolling restart, it may take quite 
> some time to restart all data nodes if we have a big cluster, for example, 
> with 4000 data nodes, not to mention that restarting DNs is way too intrusive 
> and not a preferable operation in production. It also increases the chance of 
> a double failure because the standby name node is not really ready for a 
> failover in case the current active name node fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2016-11-07 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15643993#comment-15643993
 ] 

Ajith S commented on HDFS-8693:
---

Attaching rebased patch. Please review.

> refreshNamenodes does not support adding a new standby to a running DN
> --
>
> Key: HDFS-8693
> URL: https://issues.apache.org/jira/browse/HDFS-8693
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, ha
>Affects Versions: 2.6.0
>Reporter: Jian Fang
>Assignee: Ajith S
>Priority: Critical
> Attachments: HDFS-8693.02.patch, HDFS-8693.1.patch
>
>
> I tried to run the following command on a Hadoop 2.6.0 cluster with HA 
> support:
> $ hdfs dfsadmin -refreshNamenodes datanode-host:port
> to refresh name nodes on data nodes after I replaced one name node with a new 
> one, so that I don't need to restart the data nodes. However, I got the 
> following error:
> refreshNamenodes: HA does not currently support adding a new standby to a 
> running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.
> I checked the 2.6.0 code and the error was thrown by the following code 
> snippet, which led me to this JIRA.
> void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
>   Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
>   for (BPServiceActor actor : bpServices) {
>     oldAddrs.add(actor.getNNSocketAddress());
>   }
>   Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
>   if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
>     // Keep things simple for now -- we can implement this at a later date.
>     throw new IOException(
>         "HA does not currently support adding a new standby to a running DN. "
>         + "Please do a rolling restart of DNs to reconfigure the list of NNs.");
>   }
> }
> Looks like the refreshNamenodes command is an incomplete feature. 
> Unfortunately, the new name node on a replacement is critical for 
> auto-provisioning a Hadoop cluster with HDFS HA support. Without this support, 
> the HA feature cannot really be used. I also observed that the new standby 
> name node on the replacement instance could get stuck in safe mode because no 
> data nodes check in with it. Even with a rolling restart, it may take quite 
> some time to restart all data nodes if we have a big cluster, for example, 
> with 4000 data nodes, not to mention that restarting DNs is way too intrusive 
> and not a preferable operation in production. It also increases the chance of 
> a double failure because the standby name node is not really ready for a 
> failover in case the current active name node fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10996) Ability to specify per-file EC policy at create time

2016-11-07 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-10996:
-
Status: Patch Available  (was: Open)

> Ability to specify per-file EC policy at create time
> 
>
> Key: HDFS-10996
> URL: https://issues.apache.org/jira/browse/HDFS-10996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
> Attachments: HDFS-10996-v1.patch
>
>
> Based on discussion in HDFS-10971, it would be useful to specify the EC 
> policy when the file is created. This is useful for situations where app 
> requirements do not map nicely to the current directory-level policies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10996) Ability to specify per-file EC policy at create time

2016-11-07 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-10996:
-
Attachment: HDFS-10996-v1.patch

Initial patch

> Ability to specify per-file EC policy at create time
> 
>
> Key: HDFS-10996
> URL: https://issues.apache.org/jira/browse/HDFS-10996
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
> Attachments: HDFS-10996-v1.patch
>
>
> Based on discussion in HDFS-10971, it would be useful to specify the EC 
> policy when the file is created. This is useful for situations where app 
> requirements do not map nicely to the current directory-level policies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11117) Refactor striped file unit test case structure

2016-11-07 Thread SammiChen (JIRA)
SammiChen created HDFS-11117:


 Summary: Refactor striped file unit test case structure
 Key: HDFS-11117
 URL: https://issues.apache.org/jira/browse/HDFS-11117
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: SammiChen
Assignee: SammiChen


This task is going to refactor the current striped file test case structure, especially the {{StripedFileTestUtil}} file, which is used in many striped file test cases. All current striped file test cases support only one erasure coding policy: the default RS-DEFAULT-6-3-64k policy. The goal of the refactoring is to make the structure more convenient for supporting other erasure coding policies, such as the XOR policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Description: 
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
{code}

We should use the method {{getDefaultBlockSize(Path)}} to replace the deprecated API {{getDefaultBlockSize}}. The same applies to {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a not-in-mountpoint path in the filesystem to trigger the {{NotInMountpointException}} in the test.
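
A rough sketch of how the test could assert the exception case, assuming a ViewFs {{FileSystem}} handle {{vfs}}; the path below is an illustrative placeholder:

{code}
// A path that resolves to no entry in the mount table should make the
// Path-based variants throw NotInMountpointException
Path notInMountpoint = new Path("/no-such-mountpoint/file");  // hypothetical
try {
  vfs.getDefaultBlockSize(notInMountpoint);
  fail("Expected NotInMountpointException");
} catch (NotInMountpointException e) {
  // expected: the path is not covered by any mount point
}
{code}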

  was:
There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.

{code}
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
 [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
 [deprecation] getDefaultReplication() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
 [deprecation] getServerDefaults() in FileSystem has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBrVariations.java:[175,12]
 [deprecation]
{code}

We should use the method {{getDefaultBlockSize(Path)}} to replace the deprecated API {{getDefaultBlockSize}}. The same applies to {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a not-mounted path in the filesystem to trigger the {{NotInMountException}}.


> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Status: Patch Available  (was: Open)

Attached an initial patch to make the fix. Thanks for the review!

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Attachment: HDFS-11105.001.patch

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Attachment: (was: HDFS-11105.001.patch)

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11116:
-
Attachment: HDFS-11116.001.patch

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-11116
> URL: https://issues.apache.org/jira/browse/HDFS-11116
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11116.001.patch
>
>
> There were some Jenkins warnings related to TestViewFsDefaultValue in each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} to replace the 
> deprecated API {{getDefaultBlockSize}}. The same applies to 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> not-in-mountpoint path in the filesystem to trigger the 
> {{NotInMountpointException}} in the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644220#comment-15644220
 ] 

Hadoop QA commented on HDFS-8693:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 49 unchanged - 1 fixed = 50 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-8693 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837756/HDFS-8693.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 537154b6144a 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b970446 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17452/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17452/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17452/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17452/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> refreshNamenodes does not support adding a new standby to a running DN
> ---

[jira] [Updated] (HDFS-10721) HDFS NFS Gateway - Exporting multiple Directories

2016-11-07 Thread Senthilkumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthilkumar updated HDFS-10721:

Labels: patch  (was: )
Status: Patch Available  (was: Open)

> HDFS NFS Gateway - Exporting multiple Directories 
> --
>
> Key: HDFS-10721
> URL: https://issues.apache.org/jira/browse/HDFS-10721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Senthilkumar
>Assignee: Senthilkumar
>Priority: Minor
>  Labels: patch
>
> The current HDFS NFS gateway supports exporting only one directory.
> Example:
> {code}
> <property>
>   <name>nfs.export.point</name>
>   <value>/user</value>
> </property>
> {code}
> This property lets us export one particular directory.
> Code block:
> {code}
> public RpcProgramMountd(NfsConfiguration config,
>     DatagramSocket registrationSocket, boolean allowInsecurePorts)
>     throws IOException {
>   // Note that RPC cache is not enabled
>   super("mountd", "localhost", config.getInt(
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
>       VERSION_3, registrationSocket, allowInsecurePorts);
>   exports = new ArrayList<String>();
>   exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>       NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>   this.hostsMatcher = NfsExports.getInstance(config);
>   this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>   UserGroupInformation.setConfiguration(config);
>   SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>       NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>   this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
> }
> {code}
> Export list:
> {code}
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>     NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
> {code}
> The current code supports exposing only one directory; based on our 
> example, only /user can be exported.
> Most production environments expect more directories to be exported, so 
> that the same gateway can be mounted by different clients.
> Example:
> {code}
> <property>
>   <name>nfs.export.point</name>
>   <value>/user,/data/web_crawler,/app-logs</value>
> </property>
> {code}
> Here I have three directories to be exposed:
> 1) /user
> 2) /data/web_crawler
> 3) /app-logs
> This would help us mount directories for a particular client (say client A 
> wants to write data in /app-logs; the Hadoop admin can mount it and hand it 
> over to that client).
> Please advise here. Sorry if this feature is already implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10721) HDFS NFS Gateway - Exporting multiple Directories

2016-11-07 Thread Senthilkumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthilkumar updated HDFS-10721:

Attachment: HDFS-10721.001.patch

> HDFS NFS Gateway - Exporting multiple Directories 
> --
>
> Key: HDFS-10721
> URL: https://issues.apache.org/jira/browse/HDFS-10721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Senthilkumar
>Assignee: Senthilkumar
>Priority: Minor
>  Labels: patch
> Attachments: HDFS-10721.001.patch
>
>
> The current HDFS NFS gateway supports exporting only one directory.
> Example:
> {code}
> <property>
>   <name>nfs.export.point</name>
>   <value>/user</value>
> </property>
> {code}
> This property lets us export one particular directory.
> Code block:
> {code}
> public RpcProgramMountd(NfsConfiguration config,
>     DatagramSocket registrationSocket, boolean allowInsecurePorts)
>     throws IOException {
>   // Note that RPC cache is not enabled
>   super("mountd", "localhost", config.getInt(
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
>       VERSION_3, registrationSocket, allowInsecurePorts);
>   exports = new ArrayList<String>();
>   exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>       NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>   this.hostsMatcher = NfsExports.getInstance(config);
>   this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>   UserGroupInformation.setConfiguration(config);
>   SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>       NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>   this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
> }
> {code}
> Export list:
> {code}
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>     NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
> {code}
> The current code supports exposing only one directory; based on our 
> example, only /user can be exported.
> Most production environments expect more directories to be exported, so 
> that the same gateway can be mounted by different clients.
> Example:
> {code}
> <property>
>   <name>nfs.export.point</name>
>   <value>/user,/data/web_crawler,/app-logs</value>
> </property>
> {code}
> Here I have three directories to be exposed:
> 1) /user
> 2) /data/web_crawler
> 3) /app-logs
> This would help us mount directories for a particular client (say client A 
> wants to write data in /app-logs; the Hadoop admin can mount it and hand it 
> over to that client).
> Please advise here. Sorry if this feature is already implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10721) HDFS NFS Gateway - Exporting multiple Directories

2016-11-07 Thread Senthilkumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644260#comment-15644260
 ] 

Senthilkumar edited comment on HDFS-10721 at 11/7/16 2:09 PM:
--

Hi [~benoyantony] / [~jzhuge], can you please review the attached patch and 
let me know if you want to improve this?

Added a new method exportPointToList() in RpcProgramMountd to parse the 
comma-separated string into a list.
Added a new test case testMultipleExportPoint in TestExportsTable.


was (Author: senthilec566):
Hi [~benoyantony] / [~jzhuge] , Can you Pls review the attached Patch and let 
me know your if you want to improve this .. 

Added new method  exportPointToList() in RpcProgramMountd  to parse the comma 
separated string 2 list.
Added new Test Case testMultipleExportPoint in  TestExportsTable.

> HDFS NFS Gateway - Exporting multiple Directories 
> --
>
> Key: HDFS-10721
> URL: https://issues.apache.org/jira/browse/HDFS-10721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Senthilkumar
>Assignee: Senthilkumar
>Priority: Minor
>  Labels: patch
> Attachments: HDFS-10721.001.patch
>
>
> The current HDFS NFS gateway supports exporting only one directory.
> Example:
> {code}
> <property>
>   <name>nfs.export.point</name>
>   <value>/user</value>
> </property>
> {code}
> This property lets us export one particular directory.
> Code block:
> {code}
> public RpcProgramMountd(NfsConfiguration config,
>     DatagramSocket registrationSocket, boolean allowInsecurePorts)
>     throws IOException {
>   // Note that RPC cache is not enabled
>   super("mountd", "localhost", config.getInt(
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
>       VERSION_3, registrationSocket, allowInsecurePorts);
>   exports = new ArrayList<String>();
>   exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>       NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>   this.hostsMatcher = NfsExports.getInstance(config);
>   this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>   UserGroupInformation.setConfiguration(config);
>   SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>       NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>   this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
> }
> {code}
> Export list:
> {code}
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>     NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
> {code}
> The current code supports exposing only one directory; based on our 
> example, only /user can be exported.
> Most production environments expect more directories to be exported, so 
> that the same gateway can be mounted by different clients.
> Example:
> {code}
> <property>
>   <name>nfs.export.point</name>
>   <value>/user,/data/web_crawler,/app-logs</value>
> </property>
> {code}
> Here I have three directories to be exposed:
> 1) /user
> 2) /data/web_crawler
> 3) /app-logs
> This would help us mount directories for a particular client (say client A 
> wants to write data in /app-logs; the Hadoop admin can mount it and hand it 
> over to that client).
> Please advise here. Sorry if this feature is already implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10721) HDFS NFS Gateway - Exporting multiple Directories

2016-11-07 Thread Senthilkumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644260#comment-15644260
 ] 

Senthilkumar commented on HDFS-10721:
-

Hi [~benoyantony] / [~jzhuge], can you please review the attached patch and 
let me know if you want to improve this?

Added a new method exportPointToList() in RpcProgramMountd to parse the 
comma-separated string into a list.
Added a new test case testMultipleExportPoint in TestExportsTable.
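
For illustration, a minimal sketch of what such a parser could look like (the helper shape is an assumption, not necessarily what the attached patch does):

{code}
import java.util.ArrayList;
import java.util.List;

public class ExportPointParser {
  // Hypothetical helper: split a comma-separated nfs.export.point value,
  // e.g. "/user,/data/web_crawler,/app-logs", into individual exports.
  static List<String> exportPointToList(String exportPoints) {
    List<String> exports = new ArrayList<String>();
    for (String export : exportPoints.split(",")) {
      String trimmed = export.trim();
      if (!trimmed.isEmpty()) {
        exports.add(trimmed);
      }
    }
    return exports;
  }

  public static void main(String[] args) {
    // Prints: [/user, /data/web_crawler, /app-logs]
    System.out.println(exportPointToList("/user,/data/web_crawler,/app-logs"));
  }
}
{code}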

> HDFS NFS Gateway - Exporting multiple Directories 
> --
>
> Key: HDFS-10721
> URL: https://issues.apache.org/jira/browse/HDFS-10721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Senthilkumar
>Assignee: Senthilkumar
>Priority: Minor
>  Labels: patch
> Attachments: HDFS-10721.001.patch
>
>
> The current HDFS NFS gateway supports exporting only one directory.
> Example:
> {code}
> <property>
>   <name>nfs.export.point</name>
>   <value>/user</value>
> </property>
> {code}
> This property lets us export one particular directory.
> Code block:
> {code}
> public RpcProgramMountd(NfsConfiguration config,
>     DatagramSocket registrationSocket, boolean allowInsecurePorts)
>     throws IOException {
>   // Note that RPC cache is not enabled
>   super("mountd", "localhost", config.getInt(
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
>       VERSION_3, registrationSocket, allowInsecurePorts);
>   exports = new ArrayList<String>();
>   exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>       NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>   this.hostsMatcher = NfsExports.getInstance(config);
>   this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>   UserGroupInformation.setConfiguration(config);
>   SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>       NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>   this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
> }
> {code}
> Export list:
> {code}
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>     NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
> {code}
> The current code supports exposing only one directory; based on our 
> example, only /user can be exported.
> Most production environments expect more directories to be exported, so 
> that the same gateway can be mounted by different clients.
> Example:
> {code}
> <property>
>   <name>nfs.export.point</name>
>   <value>/user,/data/web_crawler,/app-logs</value>
> </property>
> {code}
> Here I have three directories to be exposed:
> 1) /user
> 2) /data/web_crawler
> 3) /app-logs
> This would help us mount directories for a particular client (say client A 
> wants to write data in /app-logs; the Hadoop admin can mount it and hand it 
> over to that client).
> Please advise here. Sorry if this feature is already implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644292#comment-15644292
 ] 

Hadoop QA commented on HDFS-8693:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 49 unchanged - 1 fixed = 50 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-8693 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837756/HDFS-8693.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5a127b773016 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f768955 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17454/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17454/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17454/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17454/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> refreshNamenodes does not support adding a new standby to a running DN
> -

[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644312#comment-15644312
 ] 

Hadoop QA commented on HDFS-6:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
45s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 26 
unchanged - 3 fixed = 26 total (was 29) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFsDefaultValue |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-6 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837767/HDFS-6.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 60763821b92f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f768955 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17455/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17455/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17455/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-6
> URL: https://issues.apache.org/jira/browse/HDFS-6
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affect

[jira] [Commented] (HDFS-10721) HDFS NFS Gateway - Exporting multiple Directories

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644320#comment-15644320
 ] 

Hadoop QA commented on HDFS-10721:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-nfs: The patch 
generated 4 new + 9 unchanged - 0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
12s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-10721 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837775/HDFS-10721.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4c4bbc485ac6 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f768955 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17456/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-nfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17456/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-nfs U: 
hadoop-hdfs-project/hadoop-hdfs-nfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17456/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HDFS NFS Gateway - Exporting multiple Directories 
> --
>
> Key: HDFS-10721
> URL: https://issues.apache.org/jira/browse/HDFS-10721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Senthilkumar
>Assignee: S

[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644338#comment-15644338
 ] 

Hadoop QA commented on HDFS-10996:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 17 new 
+ 998 unchanged - 15 fixed = 1015 total (was 1013) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.fs.TestFcHdfsSetUMask |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.fs.TestFcHdfsPermission |
|   | hadoop.fs.viewfs.TestViewFsHdfs |
|   | hadoop.fs.TestResolveHdfsSymlink |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.fs.TestFcHdfsCreateMkdir |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestListFilesInFileContext |
|   | hadoop.fs.loadGenerator.TestLoadGenerator |
|   | hadoop.fs.TestHDFSFileContextMainOperations |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
|   | hadoop.hdfs.TestLease |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-10996 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837764/HDFS-10996-v1.patch |
| O

[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-6:
-
Attachment: (was: HDFS-6.001.patch)

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-6
> URL: https://issues.apache.org/jira/browse/HDFS-6
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> There have been some Jenkins warnings related to TestViewFsDefaultValue in 
> each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} in place of the 
> deprecated API {{getDefaultBlockSize()}}, and likewise for 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> path that is not in any mount point of the filesystem, so the test can 
> still trigger the {{NotInMountpointException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-6:
-
Attachment: HDFS-6.001.patch

The failed test is related; I have re-uploaded the v001 patch to fix it.

> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -
>
> Key: HDFS-6
> URL: https://issues.apache.org/jira/browse/HDFS-6
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-6.001.patch
>
>
> There have been some Jenkins warnings related to TestViewFsDefaultValue in 
> each Jenkins build.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[108,9]
>  [deprecation] getDefaultBlockSize() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[123,9]
>  [deprecation] getDefaultReplication() in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsDefaultValue.java:[138,43]
>  [deprecation] getServerDefaults() in FileSystem has been deprecated
> {code}
> We should use the method {{getDefaultBlockSize(Path)}} in place of the 
> deprecated API {{getDefaultBlockSize()}}, and likewise for 
> {{getDefaultReplication}} and {{getServerDefaults}}. The {{Path}} can be a 
> path that is not in any mount point of the filesystem, so the test can 
> still trigger the {{NotInMountpointException}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10759) Change fsimage bool isStriped from boolean to an enum

2016-11-07 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644513#comment-15644513
 ] 

Ewan Higgs commented on HDFS-10759:
---

[~jingzhao]
{quote}
But we need to guarantee the compatibility: the old fsimage should still be 
supported and new enum types should be easily added (which means we may need to 
add UNKNOWN_TYPE in the enum according to the link).
{quote}

I looked into this, but since this is an optional field it really should 
default to the existing behaviour (i.e. the default is contiguous, not 
"unknown"). If omitting the enum meant the block type was "unknown", new code 
wouldn't be able to handle legacy blocks (since none of them have this field 
set). That would make the enum, in effect, required for all blocks, since we 
would need to specify explicitly that each block is not unknown.

Instead, I've made the default contiguous so existing blocks can be handled.
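
For what it's worth, a minimal Java sketch of that default behaviour with protobuf-java (the generated accessor names assume the patch adds the optional {{blockType}} field to {{INodeFile}}, so they are illustrative):

{code}
// Sketch under assumptions: fsimage.proto (proto2) compiled with protoc,
// with the patch adding "optional BlockTypeProto blockType = 11;".
FsImageProto.INodeSection.INodeFile legacy =
    FsImageProto.INodeSection.INodeFile.getDefaultInstance();

// Legacy images never set blockType; in proto2, an unset optional enum
// field reads back as the first declared value, so old blocks are seen
// as CONTIGUOUS without the field ever appearing on the wire.
System.out.println(legacy.hasBlockType());  // false
System.out.println(legacy.getBlockType());  // CONTIGUOUS
{code}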

> Change fsimage bool isStriped from boolean to an enum
> -
>
> Key: HDFS-10759
> URL: https://issues.apache.org/jira/browse/HDFS-10759
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1, 3.0.0-beta1, 3.0.0-alpha2
>Reporter: Ewan Higgs
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10759.0001.patch
>
>
> The new erasure coding project has updated the protocol for fsimage such that 
> the {{INodeFile}} has a boolean '{{isStriped}}'. I think this is better as an 
> enum or integer since a boolean precludes any future block types. 
> For example:
> {code}
> enum BlockType {
>   CONTIGUOUS = 0,
>   STRIPED = 1,
> }
> {code}
> We can also make this more robust to future changes where there are different 
> block types supported in a staged rollout.  Here, we would use 
> {{UNKNOWN_BLOCK_TYPE}} as the first value since this is the default value. 
> See 
> [here|http://androiddevblog.com/protocol-buffers-pitfall-adding-enum-values/] 
> for more discussion.
> {code}
> enum BlockType {
>   UNKNOWN_BLOCK_TYPE = 0,
>   CONTIGUOUS = 1,
>   STRIPED = 2,
> }
> {code}
> But I'm not convinced this is necessary since there are other enums that 
> don't use this approach.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10759) Change fsimage bool isStriped from boolean to an enum

2016-11-07 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-10759:
--
Attachment: HDFS-10759.0002.patch

Attached is HDFS-10759.0002.patch, which makes this change backwards 
compatible with the existing {{boolean isStriped}} in the {{fsimage}}. This is 
done by making the {{BlockType}} enum use {{CONTIGUOUS=0, STRIPED=1}}, which 
mirrors the boolean encoding.

As both the boolean and the enum are optional, they are usually left out of 
the wire format entirely. When they are explicitly set, they now carry the 
same semantic values on the wire.

Given the following messages:
{code}
/**
 * Types of recognized blocks.
 */
enum BlockTypeProto {
  CONTIGUOUS = 0;
  STRIPED = 1;
}

message INodeFile {
  optional uint32 replication = 1;
  optional uint64 modificationTime = 2;
  optional uint64 accessTime = 3;
  optional uint64 preferredBlockSize = 4;
  optional fixed64 permission = 5;
  repeated BlockProto blocks = 6;
  optional FileUnderConstructionFeature fileUC = 7;
  optional AclFeatureProto acl = 8;
  optional XAttrFeatureProto xAttrs = 9;
  optional uint32 storagePolicyID = 10;
  optional BlockTypeProto blockType = 11;
}

/* ehiggs - old style using bool isStriped */
message INodeFileOld {
  optional uint32 replication = 1;
  optional uint64 modificationTime = 2;
  optional uint64 accessTime = 3;
  optional uint64 preferredBlockSize = 4;
  optional fixed64 permission = 5;
  repeated BlockProto blocks = 6;
  optional FileUnderConstructionFeature fileUC = 7;
  optional AclFeatureProto acl = 8;
  optional XAttrFeatureProto xAttrs = 9;
  optional uint32 storagePolicyID = 10;
  optional bool isStriped = 11;
}
{code}

We can then see that these serialise as the same values:

{code}
In [1]: import fsimage_pb2

In [2]: f_enum = fsimage_pb2.INodeSection.INodeFile()

In [3]: f_bool = fsimage_pb2.INodeSection.INodeFileOld()

In [4]: f_enum.SerializeToString()  # Wire format of an entirely optional message. Empty!
Out[4]: ''

In [5]: f_bool.SerializeToString()  # With a bool, everything is still empty. No surprises.
Out[5]: ''

In [6]: f_enum.blockType = 0

In [7]: f_bool.isStriped = False

In [8]: f_enum.SerializeToString()  # Wire format of explicit BlockType.CONTIGUOUS
Out[8]: 'X\x00'

In [9]: f_bool.SerializeToString()  # Wire format of explicit False
Out[9]: 'X\x00'

In [10]: f_enum.blockType = 1  # Set explicitly to STRIPED

In [11]: f_bool.isStriped = True  # Turn isStriped to True.

In [12]: f_enum.SerializeToString()
Out[12]: 'X\x01'

In [13]: f_bool.SerializeToString()
Out[13]: 'X\x01'
{code}

> Change fsimage bool isStriped from boolean to an enum
> -
>
> Key: HDFS-10759
> URL: https://issues.apache.org/jira/browse/HDFS-10759
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1, 3.0.0-beta1, 3.0.0-alpha2
>Reporter: Ewan Higgs
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10759.0001.patch, HDFS-10759.0002.patch
>
>
> The new erasure coding project has updated the protocol for fsimage such that 
> the {{INodeFile}} has a boolean '{{isStriped}}'. I think this is better as an 
> enum or integer since 

[jira] [Commented] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-07 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644519#comment-15644519
 ] 

James Clampffer commented on HDFS-11099:


Committed to HDFS-8707, thanks for the patch [~xiaowei.zhu]!

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch, 
> HDFS-11099.HDFS-8707.001.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11099) Expose rack id in hdfsDNInfo

2016-11-07 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11099:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Expose rack id in hdfsDNInfo
> 
>
> Key: HDFS-11099
> URL: https://issues.apache.org/jira/browse/HDFS-11099
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaowei Zhu
>Assignee: Xiaowei Zhu
> Attachments: HDFS-11099.HDFS-8707.000.patch, 
> HDFS-11099.HDFS-8707.001.patch
>
>
> hdfsDNInfo is missing rack information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-07 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644538#comment-15644538
 ] 

Vinayakumar B commented on HDFS-9337:
-

Looks like the validation done for GETDELEGATIONTOKEN is not required; in 
fact, all the params for GETDELEGATIONTOKEN are optional.

We may need to update the doc for this as well.

[~jagadesh.kiran], thanks for the multiple updates on the patch.
Please remove the validation for GETDELEGATIONTOKEN and all the related 
changes in the test files, and mark all params as optional in the doc for 
GETDELEGATIONTOKEN. Hopefully this is the last update on the patch.

The remaining validations look good.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch, 
> HDFS-9337_17.patch, HDFS-9337_18.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME";
> {code}
> Null point exception will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}
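
For illustration, a hedged sketch of the kind of fail-fast check that avoids the NPE (the helper name and wiring are hypothetical, not the actual patch):

{code}
public class ParamCheck {
  // Hypothetical helper: reject a missing required parameter with a clear
  // message instead of letting it surface later as a NullPointerException.
  static String requireParam(String name, String value) {
    if (value == null || value.isEmpty()) {
      throw new IllegalArgumentException(
          "Required parameter '" + name + "' is missing");
    }
    return value;
  }

  public static void main(String[] args) {
    // e.g. RENAMESNAPSHOT needs both the old and the new snapshot name.
    requireParam("snapshotname", "SNAPSHOTNAME");  // passes
    requireParam("oldsnapshotname", null);         // throws with a clear message
  }
}
{code}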



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11116) Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644652#comment-15644652
 ] 

Hadoop QA commented on HDFS-6:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 26 
unchanged - 3 fixed = 26 total (was 29) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFsDefaultValue |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-6 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837782/HDFS-6.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19c89d5c9070 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f768955 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17457/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17457/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17457/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17457/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix Jenkins warnings caused by deprecation APIs in TestViewFsDefaultValue
> -

[jira] [Commented] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-07 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644699#comment-15644699
 ] 

Xiaoyu Yao commented on HDFS-11103:
---

Thanks [~anu] for the update. LGTM; I have just two more comments:

1. Nit: in ContainerLocationManagerImpl.java, can we rename volumePaths to 
dataLocations (or dataPaths) and locations to metadataLocations (or 
metadataPaths)?

2. The two unit test failures seem to be related. Can you confirm?
{code}
2016-11-06 22:26:53,538 [Thread-98] ERROR  - Unable to find the chunk file. 
chunk info : ChunkInfo{chunkName='11641f90-bcb4-4ddf-9d33-52e479067a27.data.0, 
offset=0, len=1024}
2016-11-06 22:26:54,230 [Thread-100] ERROR  - Rejecting write chunk 
request. Chunk overwrite without explicit request. 
ChunkInfo{chunkName='0b75ef0b-48c2-4c84-89c0-e90eac83b01e.data.0, offset=0, 
len=1024}
2016-11-06 22:33:29,608 [Thread-106] ERROR  - creation of container failed. 
Name: b7ef16fd-e272-41cb-b9a7-1d2a71ab4638 
{code} 


> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-11103-HDFS-7240.001.patch, 
> HDFS-11103-HDFS-7240.002.patch, HDFS-11103-HDFS-7240.003.patch
>
>
> Cleanup some unwanted dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-11-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644747#comment-15644747
 ] 

Wei-Chiu Chuang commented on HDFS-11056:


If no one objects, I will commit the latest patch by the end of Tuesday, and I 
will file a follow-up JIRA to study whether it's necessary to optimize the 
checksum calculation by adding the last chunk checksum to the 
finalized/temporary replica classes.

> Concurrent append and read operations lead to checksum error
> 
>
> Key: HDFS-11056
> URL: https://issues.apache.org/jira/browse/HDFS-11056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, httpfs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11056.001.patch, HDFS-11056.002.patch, 
> HDFS-11056.reproduce.patch
>
>
> If there are two clients, one of which open-append-closes a file 
> continuously while the other open-read-closes the same file continuously, 
> the reader eventually gets a checksum error in the data it reads.
> On my local Mac, it takes a few minutes to produce the error. This happens 
> to httpfs clients, but there's no reason not to believe it happens to any 
> append clients.
> I have a unit test that demonstrates the checksum error. Will attach later.
> Relevant log:
> {quote}
> 2016-10-25 15:34:45,153 INFO  audit - allowed=trueugi=weichiu 
> (auth:SIMPLE)   ip=/127.0.0.1   cmd=opensrc=/tmp/bar.txt
> dst=nullperm=null   proto=rpc
> 2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
> /127.0.0.1:51130 dest: /127.0.0.1:50131
> 2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
> blk_1073741825_1182, FINALIZED
>   getNumBytes() = 182
>   getBytesOnDisk()  = 182
>   getVisibleLength()= 182
>   getVolume()   = 
> /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
>   getBlockURI() = 
> file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
> 2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
> 2016-10-25 15:34:45,167 WARN  DataNode - 
> DatanodeRegistration(127.0.0.1:50131, 
> datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
> infoSecurePort=0, ipcPort=50134, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
> exception while serving 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
> newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
> 2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
> processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
> java.io.IOException: No data exists for block 
> BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
>   at java.lang.Thread.run(Thread.java:745)
> 2016-10-25 15:34:45,168 INFO  FSNamesystem - 
> updatePipeline(blk_1073741825_1182 => blk_1073741825_1183) 
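
To make the race described above concrete, a reproduction along these lines (a 
hedged sketch, not the attached HDFS-11056.reproduce.patch) keeps one client 
appending while another reads:

{code}
// Sketch only: two clients race on the same file until the reader hits a
// ChecksumException. Assumes a running (Mini)DFS cluster behind `fs`.
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.fs.*;

public class AppendReadRace {
  public static void race(final FileSystem fs, final Path file,
      final byte[] data) throws InterruptedException {
    final AtomicBoolean running = new AtomicBoolean(true);
    Thread appender = new Thread(new Runnable() {
      public void run() {
        while (running.get()) {
          try (FSDataOutputStream out = fs.append(file)) {
            out.write(data);              // open-append-close, repeatedly
          } catch (Exception ignored) {
          }
        }
      }
    });
    Thread reader = new Thread(new Runnable() {
      public void run() {
        byte[] buf = new byte[4096];
        while (running.get()) {
          try (FSDataInputStream in = fs.open(file)) {
            while (in.read(buf) >= 0) {   // open-read-close, repeatedly
            }
          } catch (Exception e) {
            e.printStackTrace();          // the ChecksumException surfaces here
          }
        }
      }
    });
    appender.start(); reader.start();
    Thread.sleep(5 * 60 * 1000L);         // "a few minutes" per the report
    running.set(false);
    appender.join(); reader.join();
  }
}
{code}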

[jira] [Commented] (HDFS-11111) Delete something in .Trash using "rm" should be forbidden without safety option

2016-11-07 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645040#comment-15645040
 ] 

Ravi Prakash commented on HDFS-1:
-

Even though this would again be backward incompatible, I'm more amenable to 
this solution. Thanks Lantao! :-)
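
A sketch of what that guard could look like in the shell's delete path (the 
method and option names are hypothetical; no patch has been posted yet):

{code}
// Hypothetical guard: refuse to delete inside the trash unless an explicit
// -trash/-T option was given. Assumes it is called from FsShell's rm handler.
private void checkCanDeleteFromTrash(FileSystem fs, Path target,
    boolean trashOptionGiven) throws IOException {
  Path p = Path.getPathWithoutSchemeAndAuthority(target);
  Path trashRoot = Path.getPathWithoutSchemeAndAuthority(fs.getTrashRoot(target));
  if (p.toString().startsWith(trashRoot.toString()) && !trashOptionGiven) {
    throw new IOException("Can not delete something in trash directly! "
        + "Please add \"-trash\" or \"-T\" to the \"rm\" command to do that.");
  }
}
{code}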

> Delete something in  .Trash using "rm" should be forbidden without safety 
> option 
> -
>
> Key: HDFS-1
> URL: https://issues.apache.org/jira/browse/HDFS-1
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As we discussed in HDFS-11102, double confirmation seems not a graceful 
> solution for users. But deleting trash files unexpectedly is still an incident 
> issue. The behaviour of users I worried about is rm-ing something in trash, not 
> rm-ing something out of trash with the "skipTrash" option (that's a very 
> purposeful action).
> So it is not the same case as HADOOP-12358. The solution is throwing an 
> exception and reminding the user to add a "-trash" option to delete dirs in 
> trash safely:
> {code}
> Can not delete something in trash directly! Please add "-trash" or "-T" in 
> "rm" command to do that.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11111) Delete something in .Trash using "rm" should be forbidden without safety option

2016-11-07 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-1:

Description: 
As we discussed in HDFS-11102, double confirmation does not seem to be a 
graceful solution for users. Deleting files in .Trash accidentally is still an 
issue though. The behaviour of users I'm worried about is {{rm}}ing something 
in .Trash (without explicitly understanding that those files will not be 
recoverable). This is in contrast to {{rm}}ing something with the "-skipTrash" 
option (that's a very purposeful action).

So it is not the same case as HADOOP-12358. The solution is to throw an 
exception reminding the user to add a "-trash" option to safely delete 
directories in trash:
{code}
Can not delete something in trash directly! Please add "-trash" or "-T" to the 
"rm" command to do that.
{code}

  was:
As we discussed in HDFS-11102, double confirmation seems not a graceful 
solution for users. But deleting trash files unexpectedly is still an incident 
issue. The behaviour of users I worried about is rm-ing something in trash, not 
rm-ing something out of trash with the "skipTrash" option (that's a very 
purposeful action).

So it is not the same case as HADOOP-12358. The solution is throwing an 
exception and reminding the user to add a "-trash" option to delete dirs in 
trash safely:
{code}
Can not delete something in trash directly! Please add "-trash" or "-T" in "rm" 
command to do that.
{code}


> Delete something in  .Trash using "rm" should be forbidden without safety 
> option 
> -
>
> Key: HDFS-1
> URL: https://issues.apache.org/jira/browse/HDFS-1
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As we discussed in HDFS-11102, double confirmation does not seem to be a 
> graceful solution for users. Deleting files in .Trash accidentally is still 
> an issue though. The behaviour of users I'm worried about is {{rm}}ing 
> something in .Trash (without explicitly understanding that those files will 
> not be recoverable). This is in contrast to {{rm}}ing something with the 
> "-skipTrash" option (that's a very purposeful action).
> So it is not the same case as HADOOP-12358. The solution is to throw an 
> exception reminding the user to add a "-trash" option to safely delete 
> directories in trash:
> {code}
> Can not delete something in trash directly! Please add "-trash" or "-T" to 
> the "rm" command to do that.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11111) Delete something in .Trash using "rm" should be forbidden without safety option

2016-11-07 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-1:

Description: 
As we discussed in HDFS-11102, double confirmation does not seem to be a 
graceful solution for users. Deleting files in .Trash accidentally is still an 
issue though. The behaviour of users I'm worried about is {{rm}} ing something 
in .Trash (without explicitly understanding that those files will not be 
recoverable). This is in contrast to {{rm}} ing something with the "-skipTrash" 
option (that's a very purposeful action).

So it is not the same case as HADOOP-12358. The solution is to throw an 
exception reminding the user to add a "-trash" option to safely delete 
directories in trash:
{code}
Can not delete something in trash directly! Please add "-trash" or "-T" to the 
"rm" command to do that.
{code}

  was:
As we discussed in HDFS-11102, double confirmation does not seem to be a 
graceful solution for users. Deleting files in .Trash accidentally is still an 
issue though. The behaviour of users I'm worried about is {{rm}}ing something 
in .Trash (without explicitly understanding that those files will not be 
recoverable). This is in contrast to {{rm}}ing something with the "-skipTrash" 
option (that's a very purposeful action).

So it is not the same case as HADOOP-12358. The solution is to throw an 
exception reminding the user to add a "-trash" option to safely delete 
directories in trash:
{code}
Can not delete something in trash directly! Please add "-trash" or "-T" to the 
"rm" command to do that.
{code}


> Delete something in  .Trash using "rm" should be forbidden without safety 
> option 
> -
>
> Key: HDFS-1
> URL: https://issues.apache.org/jira/browse/HDFS-1
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Lantao Jin
>
> As we discussed in HDFS-11102, double confirmation does not seem to be a 
> graceful solution for users. Deleting files in .Trash accidentally is still 
> an issue though. The behaviour of users I'm worried about is {{rm}} ing 
> something in .Trash (without explicitly understanding that those files will 
> not be recoverable). This is in contrast to {{rm}} ing something with the 
> "-skipTrash" option (that's a very purposeful action).
> So it is not the same case as HADOOP-12358. The solution is to throw an 
> exception reminding the user to add a "-trash" option to safely delete 
> directories in trash:
> {code}
> Can not delete something in trash directly! Please add "-trash" or "-T" to 
> the "rm" command to do that.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645163#comment-15645163
 ] 

Allen Wittenauer commented on HDFS-11048:
-

What happens if the filename has a backslash in it? 

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645259#comment-15645259
 ] 

ASF GitHub Bot commented on HDFS-4:
---

GitHub user arp7 opened a pull request:

https://github.com/apache/hadoop/pull/153

HDFS-4. Support for running async disk checks in DataNode.

Interface for running async checks on a resource.

The implementation ThrottledAsyncChecker supports throttling and 
result-caching. DataNode changes to use it will be done in another Jira.
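
As a usage sketch of the interface in this PR (the callback style and the 
{{CheckResult}} type are illustrative; only {{schedule()}} and its 
{{ListenableFuture}} return come from the patch):

{code}
// Illustrative caller; `checker`, `volume`, and `context` are assumed:
//   AsyncChecker<K, V> checker; Checkable<K, V> volume; K context;
ListenableFuture<CheckResult> future =
    checker.schedule(volume, context);  // returns immediately, check runs async
Futures.addCallback(future, new FutureCallback<CheckResult>() {
  @Override
  public void onSuccess(CheckResult result) {
    // act on the resource's health verdict
  }
  @Override
  public void onFailure(Throwable t) {
    // the check threw or was cancelled
  }
});
{code}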

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arp7/hadoop trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/153.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #153


commit 729705d8d6b839d1e835cc1d60f7a15e7052fac1
Author: Arpit Agarwal 
Date:   2016-11-07T20:00:22Z

HDFS-4. Support for running async disk checks in DataNode.

Change-Id: Ib21cd21fe9b67ca35b38f8462c138e90b55f33df




> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-4.01.patch, HDFS-4.02.patch
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645304#comment-15645304
 ] 

ASF GitHub Bot commented on HDFS-4:
---

Github user anuengineer commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/153#discussion_r86860005
  
--- Diff: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AsyncChecker.java
 ---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode.checker;
+
+import com.google.common.util.concurrent.ListenableFuture;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * A class that can be used to schedule an asynchronous check on a given
+ * {@link Checkable}. If the check is successfully scheduled then a
+ * {@link ListenableFuture} is returned.
+ *
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public interface AsyncChecker<K, V> {
+
+  /**
+   * Schedule an asynchronous check for the given object.
+   *
+   * @param target object to be checked.
+   *
+   * @param context the interpretation of the context depends on the
+   *                target.
+   *
+   * @return returns a {@link ListenableFuture} that can be used to
+   * retrieve the result of the asynchronous check.
+   */
+  ListenableFuture<V> schedule(Checkable<K, V> target, K context);
+
+  /**
+   * Cancel all executing checks and wait for them to complete.
+   * First attempts a graceful cancellation, then cancels forcefully.
+   * Waits for the supplied timeout after both attempts.
+   *
+   * See {@link ExecutorService#awaitTermination} for a description of
+   * the parameters.
+   *
+   * @throws InterruptedException
+   */
+  void join(long timeout, TimeUnit timeUnit) throws InterruptedException;
--- End diff --

Just trying to understand this a little better. From the signature and 
implementation, this function looks more like awaitTermination in 
ExecutorService: it waits for a while and cancels any tasks if the timeout 
occurs. In that case, would you consider calling this await or 
awaitTermination? Java's "join" seems to imply a wait without a timeout. Just 
making sure that the intent was indeed a shutdown/await pattern.


> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-4.01.patch, HDFS-4.02.patch
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645338#comment-15645338
 ] 

ASF GitHub Bot commented on HDFS-4:
---

Github user arp7 commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/153#discussion_r86861747
  
--- Diff: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AsyncChecker.java
 ---
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode.checker;
+
+import com.google.common.util.concurrent.ListenableFuture;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * A class that can be used to schedule an asynchronous check on a given
+ * {@link Checkable}. If the check is successfully scheduled then a
+ * {@link ListenableFuture} is returned.
+ *
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public interface AsyncChecker<K, V> {
+
+  /**
+   * Schedule an asynchronous check for the given object.
+   *
+   * @param target object to be checked.
+   *
+   * @param context the interpretation of the context depends on the
+   *                target.
+   *
+   * @return returns a {@link ListenableFuture} that can be used to
+   * retrieve the result of the asynchronous check.
+   */
+  ListenableFuture<V> schedule(Checkable<K, V> target, K context);
+
+  /**
+   * Cancel all executing checks and wait for them to complete.
+   * First attempts a graceful cancellation, then cancels forcefully.
+   * Waits for the supplied timeout after both attempts.
+   *
+   * See {@link ExecutorService#awaitTermination} for a description of
+   * the parameters.
+   *
+   * @throws InterruptedException
+   */
+  void join(long timeout, TimeUnit timeUnit) throws InterruptedException;
--- End diff --

Thanks for taking a look @anuengineer. The method covers both shutdown and 
awaitTermination semantics. I could call it shutdownAndAwaitTermination() to 
make it clearer.
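
For reference, the semantics being named here are the standard 
graceful-then-forceful {{ExecutorService}} idiom, roughly:

{code}
// Standard shutdown-then-await idiom (see the ExecutorService javadoc).
void shutdownAndWait(ExecutorService executor, long timeout, TimeUnit unit)
    throws InterruptedException {
  executor.shutdown();                       // stop accepting new checks
  if (!executor.awaitTermination(timeout, unit)) {
    executor.shutdownNow();                  // interrupt in-flight checks
    executor.awaitTermination(timeout, unit);  // wait once more after forcing
  }
}
{code}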


> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-4.01.patch, HDFS-4.02.patch
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645364#comment-15645364
 ] 

ASF GitHub Bot commented on HDFS-4:
---

Github user arp7 commented on the issue:

https://github.com/apache/hadoop/pull/153
  
Renamed it to shutdownAndWait.


> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-4.01.patch, HDFS-4.02.patch
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645397#comment-15645397
 ] 

Anu Engineer commented on HDFS-4:
-

+1, LGTM.  [~arpitagarwal] Thanks for providing this patch.

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4:
-
Status: Open  (was: Patch Available)

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4:
-
Attachment: (was: HDFS-4.01.patch)

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4:
-
Attachment: (was: HDFS-4.02.patch)

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4:
-
Status: Patch Available  (was: Open)

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645403#comment-15645403
 ] 

Eric Badger commented on HDFS-11048:


bq. What happens if the filename has a backslash in it?
The backslash will be escaped and printed as a single backslash. 

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645429#comment-15645429
 ] 

Allen Wittenauer commented on HDFS-11048:
-

So in the log it will be "\\" or "\"?

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645452#comment-15645452
 ] 

Eric Badger commented on HDFS-11048:


All backslashes in the input will be printed in the audit log as actual 
backslashes, because they will be escaped by StringEscapeUtils and replaced 
with double backslashes. So when they are actually printed, the double 
backslash will be escaped and you will see a single backslash. All control 
characters such as "\r" and "\n" will also be escaped and printed in their 
escaped form.

You can walk through the {{TestAuditLogs#testAuditCharacterEscape}} test in a 
debugger to see how the backslashes are escaped using 
{{StringEscapeUtils.escapeJavaStyleString()}}.
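
For anyone without a debugger handy, a quick sketch of that behavior using the 
public {{escapeJava}} entry point from commons-lang (which wraps 
{{escapeJavaStyleString}}). Note that raw {{escapeJava}} doubles backslashes; 
per the comments above, the logged output ends up with a single one, which is 
exactly what the ambiguity discussion below is about:

{code}
import org.apache.commons.lang.StringEscapeUtils;

public class EscapeDemo {
  public static void main(String[] args) {
    String withControls = "/tmp/a\tb\nc";  // contains a real tab and newline
    System.out.println(StringEscapeUtils.escapeJava(withControls));
    // Prints: /tmp/a\tb\nc  -- control characters rendered as \t and \n

    String withBackslash = "\\thisfile";   // literal backslash + "thisfile"
    System.out.println(StringEscapeUtils.escapeJava(withBackslash));
    // Prints: \\thisfile    -- escapeJava itself doubles the backslash
  }
}
{code}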


> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645455#comment-15645455
 ] 

Eric Badger commented on HDFS-11048:


Oops, never actually answered your question. An input of "\" would be printed 
as "\" in the audit log.

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645480#comment-15645480
 ] 

Allen Wittenauer commented on HDFS-11048:
-

OK, that's what I thought.  We probably need to print that as a double 
backslash to avoid the ambiguity.  e.g., does '\thisfile' begin with a tab or 
does it begin with a backslash?

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645543#comment-15645543
 ] 

Eric Badger commented on HDFS-11048:


bq. e.g., does '\thisfile' begin with a tab or does it begin with a backslash?
'\thisfile' would begin with a backslash. 

I'm not sure I understand what you mean about the ambiguity. I can think of one 
pretty contrived case where I think this might cause less than ideal behavior. 
If you had a file that started with a tab followed by "hisfile", it would be 
printed as "\thisfile" in the audit log. However, if you had a file called 
"\thisfile" (where the \t are 2 separate ascii chars), it would also be printed 
in the audit log as "\thisfile". 
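
In code form, the collision being described (assuming, per the earlier 
comments, that a literal backslash comes out of the log un-doubled):

{code}
// Two different file names that render identically in the audit log:
String tabName   = "\thisfile";   // tab + "hisfile"        -> logged "\thisfile"
String slashName = "\\thisfile";  // backslash + "thisfile" -> logged "\thisfile"
// With un-doubled backslashes a log reader cannot tell these apart.
{code}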

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645558#comment-15645558
 ] 

Allen Wittenauer commented on HDFS-11048:
-

bq.  I can think of one pretty contrived case where I think this might cause 
less than ideal behavior.

That's my point.  I'm looking at this from the point of view of what is in the 
log.   "\thisfile" is ambiguous.  That's super bad.

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645579#comment-15645579
 ] 

Eric Badger commented on HDFS-11048:


What do you propose to fix that? Changing the single backslash to a double 
backslash just moves the problem instead of fixing it. Instead of 'tab + 
"hisfile"' being the same as '\thisfile', 'tab + "hisfile"' would be the same 
as '\\thisfile'.

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11083:
-
Attachment: HDFS-11083.000.patch

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit tests for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11083:
-
Status: Patch Available  (was: Open)

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit tests for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645592#comment-15645592
 ] 

Xiaobing Zhou commented on HDFS-11083:
--

Posted initial patch for reviews.
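
For context, the core of such a test might look like this (a hedged sketch 
around {{DFSAdmin}}, not the attached patch; the "Dead datanodes" wording is 
version-dependent):

{code}
// Sketch: capture stdout, run `dfsadmin -report`, assert on the output.
// Assumes a JUnit test method that declares `throws Exception`.
ByteArrayOutputStream captured = new ByteArrayOutputStream();
PrintStream originalOut = System.out;
System.setOut(new PrintStream(captured));
try {
  assertEquals(0, new DFSAdmin(conf).run(new String[] {"-report"}));
} finally {
  System.setOut(originalOut);
}
assertTrue("report should list the dead datanode",
    captured.toString().contains("Dead datanodes"));
{code}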

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit tests for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645607#comment-15645607
 ] 

Allen Wittenauer commented on HDFS-11048:
-

It's a pretty standard practice to escape the escape character.  But I can't 
help but think that, instead of using backslash escaping, this patch would 
have been better off using URI escaping to match what happens elsewhere in 
Apache Hadoop.

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11048) Audit Log should escape control characters

2016-11-07 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645693#comment-15645693
 ] 

Eric Badger commented on HDFS-11048:


Using URI escaping wouldn't be great because it would make more paths look 
weird, while only giving benefit to this small use-case. I think the best 
solution would be to replace control characters with their escaped equivalents 
(e.g. tab becomes '\t', newline becomes '\n', etc.) and escape backslashes with 
a double backslash (e.g. '\' becomes '\\'). However, this would require 
creating a new library to do the escaping since we can't touch 
StringEscapeUtils. 
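
A sketch of the escaping rule proposed above (hand-rolled and illustrative, 
since {{StringEscapeUtils}} itself can't be changed):

{code}
// Escape control characters and double backslashes so log lines stay unambiguous.
static String escapeForAuditLog(String s) {
  StringBuilder sb = new StringBuilder(s.length());
  for (int i = 0; i < s.length(); i++) {
    char c = s.charAt(i);
    switch (c) {
      case '\\': sb.append("\\\\"); break;  // '\'  -> "\\"
      case '\t': sb.append("\\t");  break;  // tab  -> "\t"
      case '\n': sb.append("\\n");  break;  // LF   -> "\n"
      case '\r': sb.append("\\r");  break;  // CR   -> "\r"
      default:
        if (c < 0x20) {                     // any other control character
          sb.append(String.format("\\u%04x", (int) c));
        } else {
          sb.append(c);
        }
    }
  }
  return sb.toString();
}
{code}

With this rule a tab always renders as {{\t}} and a literal backslash as 
{{\\}}, so "\thisfile" can only mean one thing.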

> Audit Log should escape control characters
> --
>
> Key: HDFS-11048
> URL: https://issues.apache.org/jira/browse/HDFS-11048
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11048.001.patch, HDFS-11048.002.patch
>
>
> Allowing control characters without escaping them allows for spoofing audit 
> log entries at worst and accidentally breaking log parsing at best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645726#comment-15645726
 ] 

Hadoop QA commented on HDFS-4:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-4 |
| GITHUB PR | https://github.com/apache/hadoop/pull/153 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 74ca2b1ed68f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / de3b4aa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17459/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17459/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17459/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17459/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
>

[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645811#comment-15645811
 ] 

Hadoop QA commented on HDFS-11083:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 59 unchanged - 0 fixed = 65 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837846/HDFS-11083.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8a5f19bfeafc 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / de3b4aa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17460/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17460/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17460/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17460/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add unit test for DFSAdmin -report command
> --
>
> Key: 

[jira] [Updated] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11083:
-
Attachment: HDFS-11083.001.patch

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit tests for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645830#comment-15645830
 ] 

Xiaobing Zhou commented on HDFS-11083:
--

v001 fixed some checkstyle issues.

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command 
> functionality. We should add unit tests for it. Specifically,
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11118) Block Storage for HDFS

2016-11-07 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-8:
---

 Summary: Block Storage for HDFS
 Key: HDFS-8
 URL: https://issues.apache.org/jira/browse/HDFS-8
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs
Reporter: Anu Engineer
Assignee: Anu Engineer


This JIRA proposes extending HDFS to provide replicated block storage 
capabilities using Storage Containers. This would allow users to run 
unmodified programs that assume they are running on a POSIX file system.

With this extension, HDFS can be used like a block store. For example, YARN 
jobs could mount and use a volume at will. This is made possible by leveraging 
Storage Containers and will share the storage layer with Ozone and HDFS in 
the future.

Please see the attached design document for more details on this proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11118) Block Storage for HDFS

2016-11-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8:

Attachment: cblock-proposal.pdf

> Block Storage for HDFS
> --
>
> Key: HDFS-8
> URL: https://issues.apache.org/jira/browse/HDFS-8
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: cblock-proposal.pdf
>
>
> This JIRA proposes extending HDFS to provide replicated block storage 
> capabilities using Storage Containers. This would allow users to run 
> unmodified programs that assume they are running on a POSIX file system.
> With this extension, HDFS can be used like a block store. For example, YARN 
> jobs could mount and use a volume at will. This is made possible by 
> leveraging Storage Containers and will share the storage layer with Ozone and 
> HDFS in the future.
> Please see the attached design document for more details on this proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645911#comment-15645911
 ] 

Brahma Reddy Battula commented on HDFS-9482:


[~arpitagarwal] can you please take a look once? I think branch-2.8 might not 
require it.
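
For readers following along, the builder pattern in question replaces 
telescoping constructors with chained setters, roughly (setter names are 
illustrative):

{code}
// Illustrative builder-style construction instead of a long constructor:
DatanodeInfo dn = new DatanodeInfo.DatanodeInfoBuilder()
    .setIpAddr("127.0.0.1")
    .setHostName("localhost")
    .setDatanodeUuid(UUID.randomUUID().toString())
    .setXferPort(50010)
    .build();
{code}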

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per [~arpitagarwal]'s comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761], replace 
> DatanodeInfo constructors with a builder pattern.
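For context, a minimal sketch of what trading telescoping constructors for a 
builder looks like; the field names below are illustrative, not the actual 
{{DatanodeInfo}} fields:

{code}
// Illustrative builder sketch; not the actual DatanodeInfo API.
public class NodeInfo {
  private final String hostName;
  private final long capacity;

  private NodeInfo(Builder b) {
    this.hostName = b.hostName;
    this.capacity = b.capacity;
  }

  public static class Builder {
    private String hostName;
    private long capacity;

    public Builder setHostName(String hostName) {
      this.hostName = hostName;
      return this;
    }

    public Builder setCapacity(long capacity) {
      this.capacity = capacity;
      return this;
    }

    public NodeInfo build() {
      return new NodeInfo(this);
    }
  }
}
{code}

Callers then write {{new NodeInfo.Builder().setHostName("dn1").setCapacity(1024L).build()}}, 
which stays readable as optional fields accumulate.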



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11083:
-
Attachment: HDFS-11083.002.patch

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command's 
> functionality. We should add a unit test for it. Specifically:
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...
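A minimal sketch of how the dead-datanode case might be tested with 
{{MiniDFSCluster}} (imports elided; the exact assertions and helpers in the 
real patch may differ):

{code}
// Sketch: stop a datanode, wait until the NameNode reports it dead, then
// check that the -report output reflects it.
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
cluster.waitActive();
final DFSClient client = cluster.getFileSystem().getClient();

cluster.stopDataNode(0);
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    try {
      return client.datanodeReport(DatanodeReportType.DEAD).length == 1;
    } catch (IOException e) {
      return false;
    }
  }
}, 100, 60000);

// Capture stdout and run the admin command.
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));
new DFSAdmin(conf).report(new String[] {"-report"}, 0);
assertTrue(out.toString().contains("Dead datanodes"));
{code}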



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645922#comment-15645922
 ] 

Xiaobing Zhou commented on HDFS-11083:
--

Posted patch v002; I just noticed the previous patches contained some 
unnecessary hacked-in code.

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command's 
> functionality. We should add a unit test for it. Specifically:
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645938#comment-15645938
 ] 

Mingliang Liu edited comment on HDFS-11083 at 11/8/16 12:26 AM:


The patch looks good to me overall. Thanks,

# Is {{cluster.setDataNodeDead}} helpful here?
{code}
467   /* wait until DN report is updated */
468   GenericTestUtils.waitFor(new Supplier<Boolean>() {
469 @Override
470 public Boolean get() {
471   DatanodeInfo[] nodeInfo = null;
472   try {
473 nodeInfo = client.datanodeReport(DatanodeReportType.DEAD);
474   } catch (IOException e) {
475 return false;
476   }
477   return nodeInfo != null && nodeInfo.length == 1;
478 }
479   }, 100, 6);
{code}
# I prefer not to change the existing order of the imports, as this may cause 
backporting conflicts, which are painful for tools.
{code}
25  import static org.hamcrest.CoreMatchers.allOf;
26  import static org.hamcrest.CoreMatchers.anyOf;
27  import static org.hamcrest.CoreMatchers.containsString;
28  import static org.hamcrest.CoreMatchers.is;
29  import static org.hamcrest.CoreMatchers.not;
30  import static org.junit.Assert.assertEquals;
31  import static org.junit.Assert.assertThat;
32  import static org.junit.Assert.assertTrue;
33  import static org.mockito.Matchers.any;
34  import static org.mockito.Mockito.mock;
35  import static org.mockito.Mockito.when;
{code}
# Are you suggesting {{"Fail to corrupt all replicas for block " + block}} as the 
assertion message?
{code}
496   assertEquals("No all replicas corrupted", repl_factor,
497   blockFilesCorrupted);
{code}
# Better: {{fs.setReplication(file, (short) (repl_factor + 1));}}
{code}
499   /*
500* Increase replication factor, this should invoke transfer 
request.
501* Receiving datanode fails on checksum and reports it to namenode
502*/
503   fs.setReplication(file, (short)2);
{code}
# Is {{printout()}} ever used?


was (Author: liuml07):
The patch looks good to me overall. Thanks,

# Is {{cluster.setDataNodeDead}} helpful here?
{code}
467   /* wait until DN report is updated */
468   GenericTestUtils.waitFor(new Supplier<Boolean>() {
469 @Override
470 public Boolean get() {
471   DatanodeInfo[] nodeInfo = null;
472   try {
473 nodeInfo = client.datanodeReport(DatanodeReportType.DEAD);
474   } catch (IOException e) {
475 return false;
476   }
477   return nodeInfo != null && nodeInfo.length == 1;
478 }
479   }, 100, 6);
{code}
# I prefer not to change the existing order of the imports, as this may cause 
backporting conflicts, which are painful for tools.
{code}
25  import static org.hamcrest.CoreMatchers.allOf;
26  import static org.hamcrest.CoreMatchers.anyOf;
27  import static org.hamcrest.CoreMatchers.containsString;
28  import static org.hamcrest.CoreMatchers.is;
29  import static org.hamcrest.CoreMatchers.not;
30  import static org.junit.Assert.assertEquals;
31  import static org.junit.Assert.assertThat;
32  import static org.junit.Assert.assertTrue;
33  import static org.mockito.Matchers.any;
34  import static org.mockito.Mockito.mock;
35  import static org.mockito.Mockito.when;
{code}
# Are you suggesting {{"Fail to corrupt all replicas for block" + block}} as the 
assertion message?
{code}
496   assertEquals("No all replicas corrupted", repl_factor,
497   blockFilesCorrupted);
{code}
# Better: {{fs.setReplication(file, (short) (repl_factor + 1));}}
{code}
499   /*
500* Increase replication factor, this should invoke transfer 
request.
501* Receiving datanode fails on checksum and reports it to namenode
502*/
503   fs.setReplication(file, (short)2);
{code}
# Is {{printout()}} ever used?

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command's 
> functionality. We should add a unit test for it. Specifically:
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...

[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645938#comment-15645938
 ] 

Mingliang Liu commented on HDFS-11083:
--

The patch looks good to me overall. Thanks,

# Is {{cluster.setDataNodeDead}} helpful here? (see the sketch after this list)
{code}
467   /* wait until DN report is updated */
468   GenericTestUtils.waitFor(new Supplier<Boolean>() {
469 @Override
470 public Boolean get() {
471   DatanodeInfo[] nodeInfo = null;
472   try {
473 nodeInfo = client.datanodeReport(DatanodeReportType.DEAD);
474   } catch (IOException e) {
475 return false;
476   }
477   return nodeInfo != null && nodeInfo.length == 1;
478 }
479   }, 100, 6);
{code}
# I prefer not to change the existing order of the imports, as this may cause 
backporting conflicts, which are painful for tools.
{code}
25  import static org.hamcrest.CoreMatchers.allOf;
26  import static org.hamcrest.CoreMatchers.anyOf;
27  import static org.hamcrest.CoreMatchers.containsString;
28  import static org.hamcrest.CoreMatchers.is;
29  import static org.hamcrest.CoreMatchers.not;
30  import static org.junit.Assert.assertEquals;
31  import static org.junit.Assert.assertThat;
32  import static org.junit.Assert.assertTrue;
33  import static org.mockito.Matchers.any;
34  import static org.mockito.Mockito.mock;
35  import static org.mockito.Mockito.when;
{code}
# Are you suggesting {{"Fail to corrupt all replicas for block" + block}} as the 
assertion message?
{code}
496   assertEquals("No all replicas corrupted", repl_factor,
497   blockFilesCorrupted);
{code}
# Better: {{fs.setReplication(file, (short) (repl_factor + 1));}}
{code}
499   /*
500* Increase replication factor, this should invoke transfer 
request.
501* Receiving datanode fails on checksum and reports it to namenode
502*/
503   fs.setReplication(file, (short)2);
{code}
# Is {{printout()}} ever used?
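Regarding point 1, a sketch of what using {{MiniDFSCluster#setDataNodeDead}} might 
look like; it marks the node dead on the NameNode immediately instead of waiting 
for the heartbeat to expire, which could replace most of the wait loop above 
(assuming the {{cluster}} handle from the test):

{code}
// Sketch: force the first datanode to be considered dead right away,
// instead of polling datanodeReport() until the heartbeat expires.
DataNode dn = cluster.getDataNodes().get(0);
cluster.setDataNodeDead(dn.getDatanodeId());
{code}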

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command's 
> functionality. We should add a unit test for it. Specifically:
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645938#comment-15645938
 ] 

Mingliang Liu edited comment on HDFS-11083 at 11/8/16 12:26 AM:


The patch looks good to me overall. Thanks,

# Is {{cluster.setDataNodeDead}} helpful here?
{code}
467   /* wait until DN report is updated */
468   GenericTestUtils.waitFor(new Supplier<Boolean>() {
469 @Override
470 public Boolean get() {
471   DatanodeInfo[] nodeInfo = null;
472   try {
473 nodeInfo = client.datanodeReport(DatanodeReportType.DEAD);
474   } catch (IOException e) {
475 return false;
476   }
477   return nodeInfo != null && nodeInfo.length == 1;
478 }
479   }, 100, 6);
{code}
# I prefer not to change the existing order of the imports, as this may cause 
backporting conflicts, which are painful for tools.
{code}
25  import static org.hamcrest.CoreMatchers.allOf;
26  import static org.hamcrest.CoreMatchers.anyOf;
27  import static org.hamcrest.CoreMatchers.containsString;
28  import static org.hamcrest.CoreMatchers.is;
29  import static org.hamcrest.CoreMatchers.not;
30  import static org.junit.Assert.assertEquals;
31  import static org.junit.Assert.assertThat;
32  import static org.junit.Assert.assertTrue;
33  import static org.mockito.Matchers.any;
34  import static org.mockito.Mockito.mock;
35  import static org.mockito.Mockito.when;
{code}
# Are you suggesting {{"Fail to corrupt all replicas for block " + block}} as the 
assertion message?
{code}
496   assertEquals("No all replicas corrupted", repl_factor,
497   blockFilesCorrupted);
{code}
# Better: {{fs.setReplication(file, (short) (repl_factor + 1));}}
{code}
499   /*
500* Increase replication factor, this should invoke transfer 
request.
501* Receiving datanode fails on checksum and reports it to namenode
502*/
503   fs.setReplication(file, (short)2);
{code}
# Is {{printout()}} ever used?


was (Author: liuml07):
The patch looks good to me overall. Thanks,

# Is {{cluster.setDataNodeDead}} helpful here?
{code}
467   /* wait until DN report is updated */
468   GenericTestUtils.waitFor(new Supplier<Boolean>() {
469 @Override
470 public Boolean get() {
471   DatanodeInfo[] nodeInfo = null;
472   try {
473 nodeInfo = client.datanodeReport(DatanodeReportType.DEAD);
474   } catch (IOException e) {
475 return false;
476   }
477   return nodeInfo != null && nodeInfo.length == 1;
478 }
479   }, 100, 6);
{code}
# I prefer not to change the existing order of the imports, as this may cause 
backporting conflicts, which are painful for tools.
{code}
25  import static org.hamcrest.CoreMatchers.allOf;
26  import static org.hamcrest.CoreMatchers.anyOf;
27  import static org.hamcrest.CoreMatchers.containsString;
28  import static org.hamcrest.CoreMatchers.is;
29  import static org.hamcrest.CoreMatchers.not;
30  import static org.junit.Assert.assertEquals;
31  import static org.junit.Assert.assertThat;
32  import static org.junit.Assert.assertTrue;
33  import static org.mockito.Matchers.any;
34  import static org.mockito.Mockito.mock;
35  import static org.mockito.Mockito.when;
{code}
# Are you suggesting {{"Fail to corrupt all replicas for block " + block}} as the 
assertion message?
{code}
496   assertEquals("No all replicas corrupted", repl_factor,
497   blockFilesCorrupted);
{code}
# Better: {{fs.setReplication(file, (short) (repl_factor + 1));}}
{code}
499   /*
500* Increase replication factor, this should invoke transfer 
request.
501* Receiving datanode fails on checksum and reports it to namenode
502*/
503   fs.setReplication(file, (short)2);
{code}
# Is {{printout()}} ever used?

> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: shell, test
>Reporter: Mingliang Liu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-11083.000.patch, HDFS-11083.001.patch, 
> HDFS-11083.002.patch
>
>
> {{hdfs dfsadmin -report}} has very useful information about the cluster. 
> There are some existing customized tools that depend on this command's 
> functionality. We should add a unit test for it. Specifically:
> # If one datanode is dead, the report should indicate this
> # If one block is corrupt, the "Missing blocks:" field should report this
> # TBD...

[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-11-07 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646064#comment-15646064
 ] 

Arpit Agarwal commented on HDFS-9482:
-

Hi [~brahmareddy], this is on my to-do list. I'll try to review it this week.

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9482-002.patch, HDFS-9482-003.patch, 
> HDFS-9482-branch-2.8.patch, HDFS-9482-branch-2.patch, HDFS-9482.patch
>
>
> As per [~arpitagarwal]'s comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761], replace 
> DatanodeInfo constructors with a builder pattern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646114#comment-15646114
 ] 

Hadoop QA commented on HDFS-11083:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 59 unchanged - 0 fixed = 61 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837861/HDFS-11083.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 330fbad64222 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / de3b4aa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17461/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17461/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17461/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17461/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-11083) Add unit test for DFSAdmin -report command

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646159#comment-15646159
 ] 

Hadoop QA commented on HDFS-11083:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-11083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837871/HDFS-11083.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1072b825a0a8 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3dbad5d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17462/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17462/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17462/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17462/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add unit test for DFSAdmin -report command
> --
>
> Key: HDFS-11083
> URL: https://issues.apache.org/jira/browse/HDFS-11083

[jira] [Updated] (HDFS-9868) add reading source cluster with HA access mode feature for DistCp

2016-11-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9868:

Attachment: HDFS-9868.05.patch

I'm attaching patch 5 to help move this forward; [~iceberg565], I hope you don't 
mind. Thanks again for the work so far. Feel free to let me know if you want to 
continue the work on this.

Here's what's in patch 5:
- rebased to the latest trunk, mainly due to HDFS-9640 as [~jojochuang] pointed out
- addressed the comments above
- various minor modifications based on my review

A more general comment I'm still trying to address is that 'source' here seems 
vague. It really depends on where the {{distcp}} command is run; in the doc 
example, it actually looks more like a 'destination' config. So I'm thinking of 
generalizing it as a 'remote' configuration. Additionally, it seems we should 
accept a directory so that both {{hdfs-site.xml}} and {{core-site.xml}} can be 
read. There may also be some MR/YARN-level changes; I'll test and see.

> add reading source cluster with HA access mode feature for DistCp
> -
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.05.patch, HDFS-9868.1.patch, 
> HDFS-9868.2.patch, HDFS-9868.3.patch, HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. Copying huge amounts of data with 
> DistCp can take a long time, and if the source cluster changes its active 
> namenode mid-copy, the DistCp job will fail. This patch lets DistCp read 
> source cluster files in HA access mode. A source cluster configuration file 
> needs to be specified (via the -sourceClusterConf option).
>   The following is an example of the contents of a source cluster 
> configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}
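A hedged sketch of what consuming such a file amounts to on the client side, 
using the stock {{Configuration#addResource}} API (the option name 
{{-sourceClusterConf}} is from the description above; how the patch actually 
wires the file into the job configuration may differ):

{code}
// Sketch: overlay the source cluster's HA settings onto a Configuration so
// the DFS client can resolve the logical 'mycluster' nameservice.
Configuration conf = new Configuration();
conf.addResource(new Path("sourceCluster.xml"));
FileSystem sourceFs = new Path("hdfs://mycluster/foo/bar").getFileSystem(conf);
{code}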



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9868) Add ability to read remote cluster configuration for DistCp

2016-11-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9868:

Summary: Add ability to read remote cluster configuration for DistCp  (was: 
add reading source cluster with HA access mode feature for DistCp)

> Add ability to read remote cluster configuration for DistCp
> ---
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING
> Attachments: HDFS-9868.05.patch, HDFS-9868.1.patch, 
> HDFS-9868.2.patch, HDFS-9868.3.patch, HDFS-9868.4.patch
>
>
> Normally the HDFS cluster is HA enabled. Copying huge amounts of data with 
> DistCp can take a long time, and if the source cluster changes its active 
> namenode mid-copy, the DistCp job will fail. This patch lets DistCp read 
> source cluster files in HA access mode. A source cluster configuration file 
> needs to be specified (via the -sourceClusterConf option).
>   The following is an example of the contents of a source cluster 
> configuration file:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>host1:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>host2:9000</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn1</name>
>     <value>host1:50070</value>
>   </property>
>   <property>
>     <name>dfs.namenode.http-address.mycluster.nn2</name>
>     <value>host2:50070</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
> {code}
>   The invocation of DistCp is as below:
> {code}
> bash$ hadoop distcp -sourceClusterConf sourceCluster.xml /foo/bar 
> hdfs://nn2:8020/bar/foo
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10368) Erasure Coding: Deprecate replication-related config keys

2016-11-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646222#comment-15646222
 ] 

Rakesh R commented on HDFS-10368:
-

Hi [~andrew.wang], I've taken a stab at deprecating the keys. Could you 
please review when you get a chance? Thanks!

> Erasure Coding: Deprecate replication-related config keys
> -
>
> Key: HDFS-10368
> URL: https://issues.apache.org/jira/browse/HDFS-10368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10368-00.patch, HDFS-10368-01.patch
>
>
> This jira is to revisit the replication-based config keys and deprecate them 
> (if necessary) in order to make them more meaningful.
> Please refer to the [discussion 
> thread|https://issues.apache.org/jira/browse/HDFS-9869?focusedCommentId=15249363&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15249363]
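Hadoop's {{Configuration}} already has machinery for this; a minimal sketch of 
deprecating one key in favor of a new name (the key names below are 
placeholders, not the actual keys under discussion):

{code}
// Sketch: register the mapping once (typically in a static initializer);
// reads and writes of the old key are then redirected to the new key, with
// a deprecation warning logged.
Configuration.addDeprecation("dfs.old.key", "dfs.new.key");

Configuration conf = new Configuration();
conf.set("dfs.old.key", "3");
assert "3".equals(conf.get("dfs.new.key"));
{code}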



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646257#comment-15646257
 ] 

ASF GitHub Bot commented on HDFS-4:
---

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/153


> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Introduce support for running async checks.
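As a rough illustration of the idea only (the committed code is in the pull 
request above), an async check typically wraps a blocking disk probe in an 
executor and hands the caller a future:

{code}
// Sketch -- not the committed implementation. 'checkDir' is a placeholder
// for whatever blocking probe the volume check performs.
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Boolean> healthy = executor.submit(new Callable<Boolean>() {
  @Override
  public Boolean call() throws Exception {
    return checkDir(new File("/data/dn1"));
  }
});
// The datanode keeps serving requests and inspects 'healthy' later.
{code}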



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11114) Support for running async disk checks in DataNode

2016-11-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

The checkstyle issue is a false positive and the unit test failures are 
unrelated.

I've committed this. Thanks for the code review [~anu].

> Support for running async disk checks in DataNode
> -
>
> Key: HDFS-4
> URL: https://issues.apache.org/jira/browse/HDFS-4
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0
>
>
> Introduce support for running async checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9868) Add ability to read remote cluster configuration for DistCp

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646274#comment-15646274
 ] 

Hadoop QA commented on HDFS-9868:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
2 new + 172 unchanged - 0 fixed = 174 total (was 172) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
48s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-9868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837892/HDFS-9868.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 81a7ef0a8459 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3dbad5d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17463/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17463/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17463/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add ability to read remote cluster configuration for DistCp
> ---
>
> Key: HDFS-9868
> URL: https://issues.apache.org/jira/browse/HDFS-9868
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.7.1
>Reporter: NING DING
>Assignee: NING DING

[jira] [Updated] (HDFS-10285) Storage Policy Satisfier in Namenode

2016-11-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10285:
---
Attachment: HDFS-11029-HDFS-10285-00.patch

Attached the initial patch for this work. Please review.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: 2.7.2
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. These 
> policies can be set on a directory/file to specify the user's preference for 
> where the physical blocks should be stored. When the user sets the storage 
> policy before writing data, the blocks can take advantage of the storage 
> policy preferences and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, 
> then the blocks will already have been written with the default storage policy 
> (nothing but DISK). The user has to run the ‘Mover tool’ explicitly, specifying 
> all such file names as a list. In some distributed system scenarios (ex: HBase) 
> it would be difficult to collect all the files and run the tool, as different 
> nodes can write files separately and files can have different paths.
> Another scenario is when the user renames a file from a directory with an 
> effective storage policy (inherited from the parent directory) to a directory 
> with another storage policy: the inherited storage policy is not copied from 
> the source, so the storage policy of the destination file/dir's parent takes 
> effect. This rename operation is just a metadata change in the Namenode; the 
> physical blocks still remain with the source storage policy.
> So, tracking all such business-logic-based file names from distributed 
> nodes (ex: region servers) and running the Mover tool could be difficult for 
> admins. 
> Here the proposal is to provide an API in the Namenode itself to trigger 
> storage policy satisfaction. A daemon thread inside the Namenode would track 
> such calls and send them to the DNs as movement commands. 
> Will post the detailed design thoughts document soon. 
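In API terms, the proposal boils down to something like the sketch below; the 
method name and flow are illustrative until the design document is posted:

{code}
// Illustrative only. Client/admin asks the Namenode to satisfy the storage
// policy for a path:
void satisfyStoragePolicy(String src) throws IOException;

// A daemon thread in the Namenode then scans the file's blocks and, for each
// block whose current storage type does not match the policy, queues a block
// movement command that a datanode picks up via its heartbeat response.
{code}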



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10285) Storage Policy Satisfier in Namenode

2016-11-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10285:
---
Attachment: (was: HDFS-11029-HDFS-10285-00.patch)

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: 2.7.2
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. These 
> policies can be set on a directory/file to specify the user's preference for 
> where the physical blocks should be stored. When the user sets the storage 
> policy before writing data, the blocks can take advantage of the storage 
> policy preferences and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, 
> then the blocks will already have been written with the default storage policy 
> (nothing but DISK). The user has to run the ‘Mover tool’ explicitly, specifying 
> all such file names as a list. In some distributed system scenarios (ex: HBase) 
> it would be difficult to collect all the files and run the tool, as different 
> nodes can write files separately and files can have different paths.
> Another scenario is when the user renames a file from a directory with an 
> effective storage policy (inherited from the parent directory) to a directory 
> with another storage policy: the inherited storage policy is not copied from 
> the source, so the storage policy of the destination file/dir's parent takes 
> effect. This rename operation is just a metadata change in the Namenode; the 
> physical blocks still remain with the source storage policy.
> So, tracking all such business-logic-based file names from distributed 
> nodes (ex: region servers) and running the Mover tool could be difficult for 
> admins. 
> Here the proposal is to provide an API in the Namenode itself to trigger 
> storage policy satisfaction. A daemon thread inside the Namenode would track 
> such calls and send them to the DNs as movement commands. 
> Will post the detailed design thoughts document soon. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs

2016-11-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11029:
---
Attachment: HDFS-11029-HDFS-10285-00.patch

Attached initial patch for this work. Please review.

> [SPS]:Provide retry mechanism for the blocks which were failed while moving 
> its storage at DNs
> --
>
> Key: HDFS-11029
> URL: https://issues.apache.org/jira/browse/HDFS-11029
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-11029-HDFS-10285-00.patch
>
>
> When the DN co-ordinator finds that some of the blocks associated with a 
> trackedID could not be moved to their target storages due to errors, a retry 
> may work in some cases; for example, if the target node has no space, then 
> retrying with a different target can work. 
> So, based on the movement result flag (SUCCESS/FAILURE) from the DN 
> co-ordinator, the NN would retry by scanning the blocks again.
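A hedged sketch of the NN-side retry described above; the type and method names 
are placeholders for whatever the patch introduces:

{code}
// Sketch only: act on the per-trackID movement result from the co-ordinator.
void handleMovementResult(long trackId, MovementStatus status) {
  if (status == MovementStatus.FAILURE) {
    // Re-scan the blocks for this trackID; a later attempt may choose a
    // different target (e.g. the previous target ran out of space).
    storagePolicySatisfier.requeue(trackId);
  }
}
{code}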



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-10285) Storage Policy Satisfier in Namenode

2016-11-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10285:
---
Comment: was deleted

(was: Attached the initial patch for this work. Please review.)

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: 2.7.2
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. These 
> policies can be set on a directory/file to specify the user's preference for 
> where the physical blocks should be stored. When the user sets the storage 
> policy before writing data, the blocks can take advantage of the storage 
> policy preferences and the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, 
> then the blocks will already have been written with the default storage policy 
> (nothing but DISK). The user has to run the ‘Mover tool’ explicitly, specifying 
> all such file names as a list. In some distributed system scenarios (ex: HBase) 
> it would be difficult to collect all the files and run the tool, as different 
> nodes can write files separately and files can have different paths.
> Another scenario is when the user renames a file from a directory with an 
> effective storage policy (inherited from the parent directory) to a directory 
> with another storage policy: the inherited storage policy is not copied from 
> the source, so the storage policy of the destination file/dir's parent takes 
> effect. This rename operation is just a metadata change in the Namenode; the 
> physical blocks still remain with the source storage policy.
> So, tracking all such business-logic-based file names from distributed 
> nodes (ex: region servers) and running the Mover tool could be difficult for 
> admins. 
> Here the proposal is to provide an API in the Namenode itself to trigger 
> storage policy satisfaction. A daemon thread inside the Namenode would track 
> such calls and send them to the DNs as movement commands. 
> Will post the detailed design thoughts document soon. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs

2016-11-07 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11029:
---
Status: Patch Available  (was: Open)

> [SPS]:Provide retry mechanism for the blocks which were failed while moving 
> its storage at DNs
> --
>
> Key: HDFS-11029
> URL: https://issues.apache.org/jira/browse/HDFS-11029
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-11029-HDFS-10285-00.patch
>
>
> When the DN co-ordinator finds that some of the blocks associated with a 
> trackedID could not be moved to their target storages due to errors, a retry 
> may work in some cases; for example, if the target node has no space, then 
> retrying with a different target can work. 
> So, based on the movement result flag (SUCCESS/FAILURE) from the DN 
> co-ordinator, the NN would retry by scanning the blocks again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-07 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_19.patch

Thanks [~vinayrpet] for the review. Updated the patch; please take another look.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch, HDFS-9337_04.patch, 
> HDFS-9337_05.patch, HDFS-9337_06.patch, HDFS-9337_07.patch, 
> HDFS-9337_08.patch, HDFS-9337_09.patch, HDFS-9337_10.patch, 
> HDFS-9337_11.patch, HDFS-9337_12.patch, HDFS-9337_13.patch, 
> HDFS-9337_14.patch, HDFS-9337_15.patch, HDFS-9337_16.patch, 
> HDFS-9337_17.patch, HDFS-9337_18.patch, HDFS-9337_19.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME";
> {code}
> A NullPointerException will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}
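The fix direction is to validate required parameters up front and return a 
clear client error instead of letting a null flow into the snapshot code. A 
sketch (the validation shape is illustrative; RENAMESNAPSHOT also requires an 
{{oldsnapshotname}} parameter, which the curl call above omits):

{code}
// Sketch: reject a missing/empty required parameter before using it.
if (oldSnapshotName == null || oldSnapshotName.isEmpty()) {
  throw new IllegalArgumentException(
      "Required param 'oldsnapshotname' is missing or empty");
}
{code}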



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11029) [SPS]:Provide retry mechanism for the blocks which were failed while moving its storage at DNs

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646560#comment-15646560
 ] 

Hadoop QA commented on HDFS-11029:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 4 unchanged - 0 fixed = 8 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11029 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837906/HDFS-11029-HDFS-10285-00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c043237c40d9 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 3adef4f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17464/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommi

[jira] [Updated] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11103:

Attachment: HDFS-11103-HDFS-7240.004.patch

[~xyao] Thanks for the code review comments. I have updated the patch. I am not 
able to repro the test failure locally with this patch applied.


> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-11103-HDFS-7240.001.patch, 
> HDFS-11103-HDFS-7240.002.patch, HDFS-11103-HDFS-7240.003.patch, 
> HDFS-11103-HDFS-7240.004.patch
>
>
> Clean up some unwanted dependencies.






[jira] [Updated] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-07 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11068:

Attachment: HDFS-11068-HDFS-10285.patch

> [SPS]: Provide unique trackID to track the block movement sends to coordinator
> --
>
> Key: HDFS-11068
> URL: https://issues.apache.org/jira/browse/HDFS-11068
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11068-HDFS-10285.patch
>
>
> Presently, DatanodeManager uses the constant value -1 as the 
> [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607],
> which is a temporary value. As per discussion with [~umamaheswararao], one 
> proposal is to use {{BlockCollectionId/InodeFileId}} instead.
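
To make the proposal concrete, here is a minimal sketch of keying block-movement 
work by the file's inode id instead of the hard-coded -1. The names below 
({{BlockMovementTask}}, {{newTask}}) are illustrative assumptions, not the actual 
HDFS-10285 classes:

{code}
// Hedged sketch only: BlockMovementTask and newTask are hypothetical names,
// not the real HDFS-10285 classes.
import java.util.Arrays;
import java.util.List;

public class TrackIdSketch {

  /** One unit of block-movement work handed to the coordinator datanode. */
  static final class BlockMovementTask {
    final long trackId;        // stable per file, instead of the constant -1
    final List<Long> blockIds; // blocks of that file to move

    BlockMovementTask(long trackId, List<Long> blockIds) {
      this.trackId = trackId;
      this.blockIds = blockIds;
    }
  }

  /**
   * Derive the trackID from the file's inode id (the
   * BlockCollectionId/InodeFileId of the proposal), giving every tracked
   * file a unique, reply-matchable handle.
   */
  static BlockMovementTask newTask(long inodeFileId, List<Long> blockIds) {
    return new BlockMovementTask(inodeFileId, blockIds);
  }

  public static void main(String[] args) {
    BlockMovementTask task = newTask(16386L, Arrays.asList(1073741825L));
    System.out.println("trackId=" + task.trackId); // prints trackId=16386
  }
}
{code}

Since inode ids are unique within a namespace, the coordinator's status reports 
can be matched back to the originating file without extra bookkeeping.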






[jira] [Updated] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-07 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11068:

Target Version/s: HDFS-10285
  Status: Patch Available  (was: Open)

> [SPS]: Provide unique trackID to track the block movement sends to coordinator
> --
>
> Key: HDFS-11068
> URL: https://issues.apache.org/jira/browse/HDFS-11068
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11068-HDFS-10285.patch
>
>
> Presently, DatanodeManager uses the constant value -1 as the 
> [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607],
> which is a temporary value. As per discussion with [~umamaheswararao], one 
> proposal is to use {{BlockCollectionId/InodeFileId}} instead.






[jira] [Commented] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646628#comment-15646628
 ] 

Rakesh R commented on HDFS-11068:
-

Attached an initial patch with the suggested changes.

> [SPS]: Provide unique trackID to track the block movement sends to coordinator
> --
>
> Key: HDFS-11068
> URL: https://issues.apache.org/jira/browse/HDFS-11068
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11068-HDFS-10285.patch
>
>
> Presently, DatanodeManager uses the constant value -1 as the 
> [trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607],
> which is a temporary value. As per discussion with [~umamaheswararao], one 
> proposal is to use {{BlockCollectionId/InodeFileId}} instead.






[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646764#comment-15646764
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 15 new + 267 unchanged - 6 fixed = 282 total (was 273) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HDFS-9337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837914/HDFS-9337_19.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c979af2776ed 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3fff158 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17466/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17466/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17466/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17466/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https
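
As background on the class of fix the HDFS-9337 title describes, a minimal 
fail-fast sketch follows; the {{validateRequired}} helper and the parameter 
names are hypothetical assumptions, not the contents of HDFS-9337_19.patch:

{code}
// Illustrative assumption only: validateRequired and the parameter names are
// hypothetical, not the actual HDFS-9337 change.
import java.util.HashMap;
import java.util.Map;

public class RequiredParamCheck {

  /**
   * Fail fast with a descriptive error instead of letting a missing query
   * parameter surface later as a NullPointerException.
   */
  static String validateRequired(Map<String, String> query, String name) {
    String value = query.get(name);
    if (value == null || value.isEmpty()) {
      throw new IllegalArgumentException(
          "Required parameter '" + name + "' is missing");
    }
    return value;
  }

  public static void main(String[] args) {
    Map<String, String> query = new HashMap<>();
    query.put("op", "RENAME"); // "destination" deliberately left absent

    System.out.println(validateRequired(query, "op")); // prints RENAME
    validateRequired(query, "destination"); // clear IAE instead of a later NPE
  }
}
{code}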

[jira] [Commented] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646771#comment-15646771
 ] 

Hadoop QA commented on HDFS-11068:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
36s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 77 unchanged - 0 fixed = 81 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837916/HDFS-11068-HDFS-10285.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cbb9a747d2c3 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 3adef4f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17467/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17467/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17467/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17467/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Provide unique track

[jira] [Commented] (HDFS-11103) Ozone: Cleanup some dependencies

2016-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646783#comment-15646783
 ] 

Hadoop QA commented on HDFS-11103:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m  
3s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11103 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837915/HDFS-11103-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0cd11d4b33e4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / eb8f2b2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17465/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17465/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Cleanup some dependencies
> 
>
> Key: HDFS-11103
> URL: https://issues.apache.org/jira/browse/HDFS-11103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-11103-HDFS-7240.001.pa
