[jira] [Updated] (HDFS-13100) Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-03-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13100:
-
Description: 
HDFS-12386 added a getserverdefaults call to webhdfs (this method is used by 
HDFS-12396), and expects clusters that don't support it to throw 
UnsupportedOperationException. However, we are seeing:

{code}
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb 
-update -skipcrccheck webhdfs://:/fileX 
hdfs://:8020/scale1/fileY

...
18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered 
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
Invalid value for webhdfs parameter "op": No enum constant 
org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
at 
org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
at 
org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at 
org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
at 
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
at 
org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
{code}

We either need to make the server throw UnsupportedOperationException, or make 
the client handle IllegalArgumentException. For backward compatibility and 
easier operation in the field, the latter is preferred.

But we'd better understand why IllegalArgumentException is thrown instead of 
UnsupportedOperationException.

The correct way to do this is: check whether the operation is supported, and 
throw UnsupportedOperationException if not; then check whether the parameter is 
legal, and throw IllegalArgumentException if it is not. We can do that fix as a 
follow-up of this jira.
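
To illustrate the preferred option, a client-side fallback could look roughly 
like the sketch below. This is only a sketch: getServerDefaultsOrNull() and 
fetchServerDefaults() are hypothetical names, not the actual WebHdfsFileSystem 
internals.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.ipc.RemoteException;

// Sketch only: fetchServerDefaults() stands in for the webhdfs call that
// issues op=GETSERVERDEFAULTS against the remote cluster.
private FsServerDefaults getServerDefaultsOrNull() throws IOException {
  try {
    return fetchServerDefaults();
  } catch (RemoteException re) {
    String cls = re.getClassName();
    // Old clusters reject the unknown op with IllegalArgumentException
    // rather than UnsupportedOperationException; treat both as "unsupported"
    // and let the caller fall back (e.g. skip the key provider lookup).
    if (UnsupportedOperationException.class.getName().equals(cls)
        || IllegalArgumentException.class.getName().equals(cls)) {
      return null;
    }
    throw re;
  }
}
{code}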



[jira] [Commented] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401429#comment-16401429
 ] 

genericqa commented on HDFS-13217:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13217 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914810/HDFS-13217.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f761260f6b80 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4bf6220 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23509/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23509/testReport/ |
| Max. process+thread count | 3835 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics

2018-03-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401412#comment-16401412
 ] 

Lei (Eddy) Xu commented on HDFS-12883:
--

Thank you so much [~linyiqun] for helping to revert the change!

I think the general rule is that if this incompatible change is not in 2.9, we 
should not put it in 2.9.1. The same applies to the 3.0.x line.

This level of incompatible change should be OK between point versions, e.g. 
between 2.8 and 2.9.

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12883) RBF: Document Router and State Store metrics

2018-03-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12883:
-
Target Version/s: 3.1.0  (was: 2.9.1, 3.0.1)

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12895) RBF: Add ACL support for mount table

2018-03-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12895:
-
Target Version/s: 3.1.0  (was: 2.9.0, 3.0.1)

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HDFS-12895-branch-2.001.patch, HDFS-12895.001.patch, 
> HDFS-12895.002.patch, HDFS-12895.003.patch, HDFS-12895.004.patch, 
> HDFS-12895.005.patch, HDFS-12895.006.patch, HDFS-12895.007.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755.*
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates the existing entry once it finds the given mount table exists.
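
As a rough illustration of the access check described above (the method and 
parameter names here are assumptions for illustration, not the actual patch):

{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: decide whether the caller may apply 'access' (READ for viewing,
// WRITE for add/remove/update) to a mount table entry, given the entry's
// stored owner, group and mode.
static boolean checkMountTableAccess(UserGroupInformation caller,
    String owner, String group, FsPermission mode, FsAction access) {
  if (caller.getShortUserName().equals(owner)) {
    return mode.getUserAction().implies(access);
  }
  for (String g : caller.getGroupNames()) {
    if (g.equals(group)) {
      return mode.getGroupAction().implies(access);
    }
  }
  return mode.getOtherAction().implies(access);
}
{code}

A corresponding admin invocation, using the flags from the description, might 
look like: {{hdfs dfsrouteradmin -add /data ns1 /data -owner hdfs -group 
hadoop -mode 0755}}.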



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12883) RBF: Document Router and State Store metrics

2018-03-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401404#comment-16401404
 ] 

Yiqun Lin edited comment on HDFS-12883 at 3/16/18 2:52 AM:
---

Hi [~eddyxu], I have reverted this and HDFS-12895 in branch-3.0.1. One more 
question: do we also need to revert these incompatible changes in branch-3.0, 
branch-2.9, and branch-2.9.1?


was (Author: linyiqun):
Hi [~eddyxu], I have reverted this and HDFS-12895 in branch-3.0.1. One more 
question: Does we also need to revert these incompatible change in branch-3.0, 
branch-2.9, branch-2.9.1?

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics

2018-03-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401404#comment-16401404
 ] 

Yiqun Lin commented on HDFS-12883:
--

Hi [~eddyxu], I have reverted this and HDFS-12895 in branch-3.0.1. One more 
question: do we also need to revert these incompatible changes in branch-3.0, 
branch-2.9, and branch-2.9.1?

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12895) RBF: Add ACL support for mount table

2018-03-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12895:
-
Fix Version/s: (was: 3.0.1)

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HDFS-12895-branch-2.001.patch, HDFS-12895.001.patch, 
> HDFS-12895.002.patch, HDFS-12895.003.patch, HDFS-12895.004.patch, 
> HDFS-12895.005.patch, HDFS-12895.006.patch, HDFS-12895.007.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755.*
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates the existing entry once it finds the given mount table exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12883) RBF: Document Router and State Store metrics

2018-03-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12883:
-
Fix Version/s: (was: 3.0.1)

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12977) Add stateId to RPC headers.

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401389#comment-16401389
 ] 

genericqa commented on HDFS-12977:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 14s{color} | {color:orange} root: The patch generated 5 new + 670 unchanged 
- 0 fixed = 675 total (was 670) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 11 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}213m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-12977 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914792/HDFS_12977.trunk.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (HDFS-13296) GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir generate paths with drive letter in Windows, and fail webhdfs related test cases

2018-03-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401372#comment-16401372
 ] 

Íñigo Goiri commented on HDFS-13296:


The tests for commons run without issues, but we need to check YARN and 
especially HDFS.
Is there a way to trigger this?

> GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir 
> generate paths with drive letter in Windows, and fail webhdfs related test 
> cases
> ---
>
> Key: HDFS-13296
> URL: https://issues.apache.org/jira/browse/HDFS-13296
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13296.000.patch
>
>
> In GenericTestUtils#getRandomizedTestDir, getAbsoluteFile is called and adds 
> a drive letter to the path on Windows. Some test cases use the generated path 
> to send a webhdfs request, which fails due to the drive letter in the URI, 
> e.g. "webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa/test".
> GenericTestUtils#getTempPath has a similar issue on Windows.
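
A minimal reproduction of the failure mode, as hypothetical test code (not the 
attached patch):

{code:java}
import java.io.File;

// Inside a test: on Windows, getAbsoluteFile() yields a path with a drive
// letter, e.g. "D:\target\test\data\vUqZkOrBZa". Splicing that path into a
// webhdfs URI produces the invalid URI shown in the description.
File dir = new File("target/test/data/vUqZkOrBZa").getAbsoluteFile();
String uri = "webhdfs://127.0.0.1:18334/"
    + dir.getPath().replace('\\', '/');
// On Windows: webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa
// The drive letter leaks into the URI path and the request fails.
{code}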



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-03-15 Thread liaoyuxiangqin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401359#comment-16401359
 ] 

liaoyuxiangqin commented on HDFS-13217:
---

Thanks for your review and suggestions on this, [~eddyxu]. An updated patch 
has been submitted.

> Log audit event only used last EC policy name when add multiple policies from 
> file 
> ---
>
> Key: HDFS-13217
> URL: https://issues.apache.org/jira/browse/HDFS-13217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13217.001.patch, HDFS-13217.002.patch, 
> HDFS-13217.003.patch
>
>
> When I read addErasureCodingPolicies() of the FSNamesystem class in the 
> namenode, I found the following code only uses the last EC policy name for 
> logAuditEvent; this audit log can't track all of the policies when adding 
> multiple erasure coding policies to the ErasureCodingPolicyManager. Thanks.
> {code:java|title=FSNamesystem.java|borderStyle=solid}
> try {
>   checkOperation(OperationCategory.WRITE);
>   checkNameNodeSafeMode("Cannot add erasure coding policy");
>   for (ErasureCodingPolicy policy : policies) {
> try {
>   ErasureCodingPolicy newPolicy =
>   FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
>   logRetryCache);
>   addECPolicyName = newPolicy.getName();
>   responses.add(new AddErasureCodingPolicyResponse(newPolicy));
> } catch (HadoopIllegalArgumentException e) {
>   responses.add(new AddErasureCodingPolicyResponse(policy, e));
> }
>   }
>   success = true;
>   return responses.toArray(new AddErasureCodingPolicyResponse[0]);
> } finally {
>   writeUnlock(operationName);
>   if (success) {
> getEditLog().logSync();
>   }
>   logAuditEvent(success, operationName,addECPolicyName, null, null);
> }
> {code}
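
For reference, one way the fix could look (a sketch only, not necessarily the 
attached patch): collect every successfully added policy name in the loop 
instead of overwriting a single variable, and log the joined list.

{code:java}
// Sketch, in the same FSNamesystem context as the block above: accumulate
// all added policy names rather than keeping only the last one.
List<String> addECPolicyNames = new ArrayList<>();
for (ErasureCodingPolicy policy : policies) {
  try {
    ErasureCodingPolicy newPolicy =
        FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
            logRetryCache);
    addECPolicyNames.add(newPolicy.getName());
    responses.add(new AddErasureCodingPolicyResponse(newPolicy));
  } catch (HadoopIllegalArgumentException e) {
    responses.add(new AddErasureCodingPolicyResponse(policy, e));
  }
}

// In the finally block, log every name that was actually added:
logAuditEvent(success, operationName,
    String.join(",", addECPolicyNames), null, null);
{code}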



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-03-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-13217:
--
Attachment: HDFS-13217.003.patch

> Log audit event only used last EC policy name when add multiple policies from 
> file 
> ---
>
> Key: HDFS-13217
> URL: https://issues.apache.org/jira/browse/HDFS-13217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13217.001.patch, HDFS-13217.002.patch, 
> HDFS-13217.003.patch
>
>
> When I read addErasureCodingPolicies() of the FSNamesystem class in the 
> namenode, I found the following code only uses the last EC policy name for 
> logAuditEvent; this audit log can't track all of the policies when adding 
> multiple erasure coding policies to the ErasureCodingPolicyManager. Thanks.
> {code:java|title=FSNamesystem.java|borderStyle=solid}
> try {
>   checkOperation(OperationCategory.WRITE);
>   checkNameNodeSafeMode("Cannot add erasure coding policy");
>   for (ErasureCodingPolicy policy : policies) {
> try {
>   ErasureCodingPolicy newPolicy =
>   FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
>   logRetryCache);
>   addECPolicyName = newPolicy.getName();
>   responses.add(new AddErasureCodingPolicyResponse(newPolicy));
> } catch (HadoopIllegalArgumentException e) {
>   responses.add(new AddErasureCodingPolicyResponse(policy, e));
> }
>   }
>   success = true;
>   return responses.toArray(new AddErasureCodingPolicyResponse[0]);
> } finally {
>   writeUnlock(operationName);
>   if (success) {
> getEditLog().logSync();
>   }
>   logAuditEvent(success, operationName,addECPolicyName, null, null);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-03-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-13217:
--
Status: Patch Available  (was: Open)

> Log audit event only used last EC policy name when add multiple policies from 
> file 
> ---
>
> Key: HDFS-13217
> URL: https://issues.apache.org/jira/browse/HDFS-13217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13217.001.patch, HDFS-13217.002.patch, 
> HDFS-13217.003.patch
>
>
> When I read addErasureCodingPolicies() of the FSNamesystem class in the 
> namenode, I found the following code only uses the last EC policy name for 
> logAuditEvent; this audit log can't track all of the policies when adding 
> multiple erasure coding policies to the ErasureCodingPolicyManager. Thanks.
> {code:java|title=FSNamesystem.java|borderStyle=solid}
> try {
>   checkOperation(OperationCategory.WRITE);
>   checkNameNodeSafeMode("Cannot add erasure coding policy");
>   for (ErasureCodingPolicy policy : policies) {
> try {
>   ErasureCodingPolicy newPolicy =
>   FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
>   logRetryCache);
>   addECPolicyName = newPolicy.getName();
>   responses.add(new AddErasureCodingPolicyResponse(newPolicy));
> } catch (HadoopIllegalArgumentException e) {
>   responses.add(new AddErasureCodingPolicyResponse(policy, e));
> }
>   }
>   success = true;
>   return responses.toArray(new AddErasureCodingPolicyResponse[0]);
> } finally {
>   writeUnlock(operationName);
>   if (success) {
> getEditLog().logSync();
>   }
>   logAuditEvent(success, operationName,addECPolicyName, null, null);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13296) GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir generate paths with drive letter in Windows, and fail webhdfs related test cases

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401366#comment-16401366
 ] 

genericqa commented on HDFS-13296:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 15 unchanged - 1 fixed = 15 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13296 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914798/HDFS-13296.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e5ed41fca116 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4bf6220 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23508/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23508/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir 
> 

[jira] [Updated] (HDFS-13297) Add config validation util

2018-03-15 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13297:
--
Description: Add a generic util to validate configuration based on TAGS.  
(was: This is broken after merging trunk change HADOOP-15007 into HDFS-7240 
branch. I remove the cmd and related test to have a clean merge. [~ajakumar], 
please fix the cmd and bring back the related test.)

> Add config validation util
> --
>
> Key: HDFS-13297
> URL: https://issues.apache.org/jira/browse/HDFS-13297
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
>
> Add a generic util to validate configuration based on TAGS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13297) Add config validation util

2018-03-15 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13297:
--
Description: This is broken after merging trunk change HADOOP-15007 into 
HDFS-7240 branch. I remove the cmd and related test to have a clean merge. 
[~ajakumar], please fix the cmd and bring back the related test.  (was: This is 
broken after merging trunk change HADOOP-15007 into HDFS-7240 branch. I remove 
the cmd and related test to have a clean merge. [~ajakumar], please fix the cmd 
and bring back the related test. )

> Add config validation util
> --
>
> Key: HDFS-13297
> URL: https://issues.apache.org/jira/browse/HDFS-13297
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
>
> This is broken after merging trunk change HADOOP-15007 into HDFS-7240 branch. 
> I remove the cmd and related test to have a clean merge. [~ajakumar], please 
> fix the cmd and bring back the related test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13297) Add config validation util

2018-03-15 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDFS-13297:
-

 Summary: Add config validation util
 Key: HDFS-13297
 URL: https://issues.apache.org/jira/browse/HDFS-13297
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: HDFS-7240
Reporter: Ajay Kumar
Assignee: Ajay Kumar
 Fix For: HDFS-7240


This is broken after merging trunk change HADOOP-15007 into HDFS-7240 branch. I 
remove the cmd and related test to have a clean merge. [~ajakumar], please fix 
the cmd and bring back the related test. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-03-15 Thread liaoyuxiangqin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-13217:
--
Status: Open  (was: Patch Available)

> Log audit event only used last EC policy name when add multiple policies from 
> file 
> ---
>
> Key: HDFS-13217
> URL: https://issues.apache.org/jira/browse/HDFS-13217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13217.001.patch, HDFS-13217.002.patch
>
>
> When I read addErasureCodingPolicies() of the FSNamesystem class in the 
> namenode, I found the following code only uses the last EC policy name for 
> logAuditEvent; this audit log can't track all of the policies when adding 
> multiple erasure coding policies to the ErasureCodingPolicyManager. Thanks.
> {code:java|title=FSNamesystem.java|borderStyle=solid}
> try {
>   checkOperation(OperationCategory.WRITE);
>   checkNameNodeSafeMode("Cannot add erasure coding policy");
>   for (ErasureCodingPolicy policy : policies) {
> try {
>   ErasureCodingPolicy newPolicy =
>   FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
>   logRetryCache);
>   addECPolicyName = newPolicy.getName();
>   responses.add(new AddErasureCodingPolicyResponse(newPolicy));
> } catch (HadoopIllegalArgumentException e) {
>   responses.add(new AddErasureCodingPolicyResponse(policy, e));
> }
>   }
>   success = true;
>   return responses.toArray(new AddErasureCodingPolicyResponse[0]);
> } finally {
>   writeUnlock(operationName);
>   if (success) {
> getEditLog().logSync();
>   }
>   logAuditEvent(success, operationName,addECPolicyName, null, null);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13296) GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir generate paths with drive letter in Windows, and fail webhdfs related test cases

2018-03-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401296#comment-16401296
 ] 

Íñigo Goiri commented on HDFS-13296:


This seems to be the cause of most failed unit tests on Windows.
We may want to make it a HADOOP bug rather than an HDFS one; let's let Yetus 
run though.

> GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir 
> generate paths with drive letter in Windows, and fail webhdfs related test 
> cases
> ---
>
> Key: HDFS-13296
> URL: https://issues.apache.org/jira/browse/HDFS-13296
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13296.000.patch
>
>
> In GenericTestUtils#getRandomizedTestDir, getAbsoluteFile is called and adds 
> a drive letter to the path on Windows. Some test cases use the generated path 
> to send a webhdfs request, which fails due to the drive letter in the URI, 
> e.g. "webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa/test".
> GenericTestUtils#getTempPath has a similar issue on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13296) GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir generate paths with drive letter in Windows, and fail webhdfs related test cases

2018-03-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13296:
---
Status: Patch Available  (was: Open)

> GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir 
> generate paths with drive letter in Windows, and fail webhdfs related test 
> cases
> ---
>
> Key: HDFS-13296
> URL: https://issues.apache.org/jira/browse/HDFS-13296
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13296.000.patch
>
>
> In GenericTestUtils#getRandomizedTestDir, getAbsoluteFile is called and adds 
> a drive letter to the path on Windows. Some test cases use the generated path 
> to send a webhdfs request, which fails due to the drive letter in the URI, 
> e.g. "webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa/test".
> GenericTestUtils#getTempPath has a similar issue on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13296) GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir generate paths with drive letter in Windows, and fail webhdfs related test cases

2018-03-15 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13296:
--
Attachment: HDFS-13296.000.patch

> GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir 
> generate paths with drive letter in Windows, and fail webhdfs related test 
> cases
> ---
>
> Key: HDFS-13296
> URL: https://issues.apache.org/jira/browse/HDFS-13296
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13296.000.patch
>
>
> In GenericTestUtils#getRandomizedTestDir, getAbsoluteFile is called and adds 
> a drive letter to the path on Windows. Some test cases use the generated path 
> to send a webhdfs request, which fails due to the drive letter in the URI, 
> e.g. "webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa/test".
> GenericTestUtils#getTempPath has a similar issue on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13251) Avoid using hard coded datanode data dirs in unit tests

2018-03-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401261#comment-16401261
 ] 

Ajay Kumar commented on HDFS-13251:
---

[~xyao], thanks for the commit and review.

> Avoid using hard coded datanode data dirs in unit tests
> ---
>
> Key: HDFS-13251
> URL: https://issues.apache.org/jira/browse/HDFS-13251
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HDFS-13251.000.patch, HDFS-13251.001.patch, 
> HDFS-13251.002.patch, HDFS-13251.003.addendum.patch, HDFS-13251.003.patch
>
>
> There are a few unit tests that rely on hard-coded MiniDFSCluster data dir 
> names.
>  
>  * TestDataNodeVolumeFailureToleration
>  * TestDataNodeVolumeFailureReporting
>  * TestDiskBalancerCommand
>  * TestBlockStatsMXBean
>  * TestDataNodeVolumeMetrics
>  * TestDFSAdmin
>  * TestDataNodeHotSwapVolumes
>  * TestDataNodeVolumeFailure
> This ticket is opened to use
> {code:java}
> MiniDFSCluster#getInstanceStorageDir(0, 1);
> {code}
> instead of, as below:
> {code:java}
> new File(cluster.getDataDirectory(), "data1");
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13284) Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401260#comment-16401260
 ] 

genericqa commented on HDFS-13284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 58 unchanged - 3 fixed = 63 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914597/HDFS-13284.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e15576222328 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1976e00 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23506/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption

2018-03-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401256#comment-16401256
 ] 

Chris Douglas commented on HDFS-13265:
--

Excellent, thanks [~xkrogen]. Skimming the commit log, does HADOOP-13597 mean 
we should not backport HDFS-15311 to branch-2 before committing this?

> MiniDFSCluster should set reasonable defaults to reduce resource consumption
> 
>
> Key: HDFS-13265
> URL: https://issues.apache.org/jira/browse/HDFS-13265
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13265-branch-2.000.patch, 
> HDFS-13265-branch-2.000.patch, HDFS-13265.000.patch, 
> TestMiniDFSClusterThreads.java
>
>
> MiniDFSCluster takes its defaults from {{DFSConfigKeys}} defaults, but many 
> of these are not suitable for a unit test environment. For example, the 
> default handler thread count of 10 is definitely more than necessary for 
> (almost?) any unit test. We should set reasonable, lower defaults unless a 
> test specifically requires more.
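
As a sketch of the intended usage (the exact defaults the patch picks may 
differ): a test that genuinely needs more handler threads can still override 
the lowered default explicitly.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Inside a test: explicitly request more NameNode RPC handlers when needed,
// rather than relying on the (heavier) DFSConfigKeys default.
Configuration conf = new Configuration();
conf.setInt(DFSConfigKeys.DFS_NAMENODE_HANDLER_COUNT_KEY, 10);
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
{code}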



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13250) RBF: Router to manage requests across multiple subclusters

2018-03-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401242#comment-16401242
 ] 

Íñigo Goiri commented on HDFS-13250:


Believe it or not... no failed unit tests for HDFS!
It actually ran 5649 tests, which is pretty much all of them.
I also went through the report and the usual suspects are there.
I can call the bug bash a success :)

> RBF: Router to manage requests across multiple subclusters
> --
>
> Key: HDFS-13250
> URL: https://issues.apache.org/jira/browse/HDFS-13250
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13250.000.patch, HDFS-13250.001.patch, 
> HDFS-13250.002.patch
>
>
> HDFS-13124 introduces the concept of mount points spanning multiple 
> subclusters. The Router should distribute the requests across these 
> subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401240#comment-16401240
 ] 

Íñigo Goiri commented on HDFS-13232:


OK, we'll leave it as is then.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12723) TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing consistently.

2018-03-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401224#comment-16401224
 ] 

Ajay Kumar commented on HDFS-12723:
---

[~elgoiri],thanks for review and commit.

> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing 
> consistently.
> 
>
> Key: HDFS-12723
> URL: https://issues.apache.org/jira/browse/HDFS-12723
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rushabh S Shah
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.0.2, 3.2.0
>
> Attachments: HDFS-12723.000.patch, HDFS-12723.001.patch, 
> HDFS-12723.002.patch
>
>
> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks is timing 
> out consistently on my local machine.
> {noformat}
> Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 132.405 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> testReadFileWithMissingBlocks(org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks)
>   Time elapsed: 132.171 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for /foo to have all 
> the internalBlocks
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:295)
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:256)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.readFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:98)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:82)
> Results :
> Tests in error: 
>   
> TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks:82->readFileWithMissingBlocks:98
>  » Timeout
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2018-03-15 Thread Mavin Martin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401216#comment-16401216
 ] 

Mavin Martin commented on HDFS-11481:
-

Hi [~yzhangal],

Thank you for reviewing this!  We are in the process of verifying the validity 
of this patch and will provide an update next week.

Thanks!

Mavin

> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Assignee: Mavin Martin
>Priority: Minor
> Attachments: HDFS-11481-branch-2.6.0.001.patch, HDFS-11481.001.patch, 
> HDFS-11481.002.patch
>
>
> Successful command:
> {code}
> #> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> #> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with raw path should run successfully and return same output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13296) GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir generate paths with drive letter in Windows, and fail webhdfs related test cases

2018-03-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13296:
--

Assignee: Xiao Liang

> GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir 
> generate paths with drive letter in Windows, and fail webhdfs related test 
> cases
> ---
>
> Key: HDFS-13296
> URL: https://issues.apache.org/jira/browse/HDFS-13296
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In GenericTestUtils#getRandomizedTestDir, getAbsoluteFile is called and adds 
> a drive letter to the path on Windows. Some test cases use the generated 
> path to send a webhdfs request, which fails because of the drive letter in 
> the URI, e.g.: "webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa/test"
> GenericTestUtils#getTempPath has a similar issue on Windows.
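
A minimal standalone demonstration of the failure mode (the printed path is 
what one would expect on Windows; the class name is illustrative):

{code:java}
import java.io.File;

public class DriveLetterDemo {
  public static void main(String[] args) {
    // getAbsoluteFile() resolves against the working directory, which on
    // Windows carries a drive letter.
    File testDir = new File("target/test/data/vUqZkOrBZa/test").getAbsoluteFile();
    // On Windows this prints something like D:\target\test\data\vUqZkOrBZa\test;
    // splicing that into a webhdfs URI yields the broken
    // webhdfs://127.0.0.1:18334/D:/target/... form quoted above.
    System.out.println(testDir.getPath());
  }
}
{code}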



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes

2018-03-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401207#comment-16401207
 ] 

Chris Douglas commented on HDFS-11600:
--

Thanks, [~Sammi]!

> Refactor TestDFSStripedOutputStreamWithFailure test classes
> ---
>
> Key: HDFS-11600
> URL: https://issues.apache.org/jira/browse/HDFS-11600
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: SammiChen
>Priority: Minor
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HDFS-11600-1.patch, HDFS-11600.002.patch, 
> HDFS-11600.003.patch, HDFS-11600.004.patch, HDFS-11600.005.patch, 
> HDFS-11600.006.patch, HDFS-11600.007.patch
>
>
> TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The 
> tests are parameterized based on the name of these subclasses.
> Seems like we could parameterize these tests with JUnit and then not need all 
> these separate test classes.
> Another note, the tests will randomly return instead of running the test. 
> Using {{Assume}} instead would make it more clear in the test output that 
> these tests were skipped.
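
As a sketch of what the parameterized shape could look like with JUnit 4 
({{Parameterized}} and {{Assume}} are real JUnit APIs; the parameter, class 
name, and test body are placeholders, not the eventual refactoring):

{code:java}
import static org.junit.Assume.assumeTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestStripedOutputStreamFailureSketch {
  // One test run per failure position instead of one subclass each.
  @Parameters(name = "base={0}")
  public static Collection<Object[]> bases() {
    return Arrays.asList(new Object[][] { {0}, {1}, {2} });
  }

  private final int base;

  public TestStripedOutputStreamFailureSketch(int base) {
    this.base = base;
  }

  @Test
  public void testWithFailure() {
    // Assume marks the case as skipped in the report instead of the
    // test body silently returning.
    assumeTrue("precondition for this failure position not met", base >= 0);
    // ... exercise the striped output stream with a failure at `base` ...
  }
}
{code}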



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13296) GenericTestUtils#getTempPath and GenericTestUtils#getRandomizedTestDir generate paths with drive letter in Windows, and fail webhdfs related test cases

2018-03-15 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13296:
-

 Summary: GenericTestUtils#getTempPath and 
GenericTestUtils#getRandomizedTestDir generate paths with drive letter in 
Windows, and fail webhdfs related test cases
 Key: HDFS-13296
 URL: https://issues.apache.org/jira/browse/HDFS-13296
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang


In GenericTestUtils#getRandomizedTestDir, getAbsoluteFile is called and adds a 
drive letter to the path on Windows. Some test cases use the generated path to 
send a webhdfs request, which fails because of the drive letter in the URI, 
e.g.: "webhdfs://127.0.0.1:18334/D:/target/test/data/vUqZkOrBZa/test"

GenericTestUtils#getTempPath has a similar issue on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12977) Add stateId to RPC headers.

2018-03-15 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401205#comment-16401205
 ] 

Plamen Jeliazkov commented on HDFS-12977:
-

Thanks for the thorough review, Konstantin.

I believe I have addressed your points and have attached a new patch 
(trunk.006) with all fixes in place.
I took a look at checkstyle and addressed all the issues I could find.
I also believe I have now addressed the whitespace concerns from the previous 
Jenkins run.

I also took the liberty of making the unit tests clearer: they deliberately 
compare dfs.lastSeenStateId vs namesystem.getLastWrittenTransactionId and 
ensure the client is "catching up" in alignment state as intended.

> Add stateId to RPC headers.
> ---
>
> Key: HDFS-12977
> URL: https://issues.apache.org/jira/browse/HDFS-12977
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc, namenode
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, 
> HDFS_12977.trunk.003.patch, HDFS_12977.trunk.004.patch, 
> HDFS_12977.trunk.005.patch, HDFS_12977.trunk.006.patch
>
>
> stateId is a new field in the RPC headers of NameNode proto calls.
> stateId is the journal transaction Id, which represents LastSeenId for the 
> clients and LastWrittenId for NameNodes. See more in [reads from Standby 
> design 
> doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].
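
As a rough illustration of the alignment idea only (not the attached patch; 
the class, field, and method names below are hypothetical), a client-side 
holder for the last seen state id might look like:

{code:java}
// Hypothetical client-side holder for the last seen state id; names are
// illustrative, not from the attached patch.
class ClientStateSketch {
  private long lastSeenStateId = Long.MIN_VALUE;

  // Called when an RPC response arrives carrying the server's state id.
  synchronized void receiveResponseState(long responseStateId) {
    lastSeenStateId = Math.max(lastSeenStateId, responseStateId);
  }

  // Stamped into the header of the next outgoing RPC, so a standby can
  // wait until it has caught up to everything this client has observed.
  synchronized long getLastSeenStateId() {
    return lastSeenStateId;
  }
}
{code}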



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401200#comment-16401200
 ] 

Chris Douglas commented on HDFS-13232:
--

bq. I committed HDFS-13230 but I messed up the message and committed it as 
HDFS-13232, any action here?
Sorry for the delay. Other than reverting and recommitting, no. We'd need to 
file a ticket with INFRA to unlock the branch and rewrite history. Probably not 
worth it.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12977) Add stateId to RPC headers.

2018-03-15 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-12977:

Attachment: (was: HDFS_12977_trunk_006.patch)

> Add stateId to RPC headers.
> ---
>
> Key: HDFS-12977
> URL: https://issues.apache.org/jira/browse/HDFS-12977
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc, namenode
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, 
> HDFS_12977.trunk.003.patch, HDFS_12977.trunk.004.patch, 
> HDFS_12977.trunk.005.patch, HDFS_12977.trunk.006.patch
>
>
> stateId is a new field in the RPC headers of NameNode proto calls.
> stateId is the journal transaction Id, which represents LastSeenId for the 
> clients and LastWrittenId for NameNodes. See more in [reads from Standby 
> design 
> doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12977) Add stateId to RPC headers.

2018-03-15 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-12977:

Attachment: HDFS_12977.trunk.006.patch

> Add stateId to RPC headers.
> ---
>
> Key: HDFS-12977
> URL: https://issues.apache.org/jira/browse/HDFS-12977
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc, namenode
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, 
> HDFS_12977.trunk.003.patch, HDFS_12977.trunk.004.patch, 
> HDFS_12977.trunk.005.patch, HDFS_12977.trunk.006.patch
>
>
> stateId is a new field in the RPC headers of NameNode proto calls.
> stateId is the journal transaction Id, which represents LastSeenId for the 
> clients and LastWrittenId for NameNodes. See more in [reads from Standby 
> design 
> doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12977) Add stateId to RPC headers.

2018-03-15 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-12977:

Attachment: HDFS_12977_trunk_006.patch

> Add stateId to RPC headers.
> ---
>
> Key: HDFS-12977
> URL: https://issues.apache.org/jira/browse/HDFS-12977
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc, namenode
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, 
> HDFS_12977.trunk.003.patch, HDFS_12977.trunk.004.patch, 
> HDFS_12977.trunk.005.patch, HDFS_12977_trunk_006.patch
>
>
> stateId is a new field in the RPC headers of NameNode proto calls.
> stateId is the journal transaction Id, which represents LastSeenId for the 
> clients and LastWrittenId for NameNodes. See more in [reads from Standby 
> design 
> doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13250) RBF: Router to manage requests across multiple subclusters

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401190#comment-16401190
 ] 

genericqa commented on HDFS-13250:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}100m 
22s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13250 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914771/HDFS-13250.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4bfbeadd2671 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1976e00 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23505/testReport/ |
| Max. process+thread count | 3210 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23505/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Router to manage requests across multiple subclusters
> --
>
> Key: HDFS-13250
> URL: 

[jira] [Commented] (HDFS-12422) Replace DataNode in Pipeline when waiting for Last Packet fails

2018-03-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401170#comment-16401170
 ] 

Chris Douglas commented on HDFS-12422:
--

bq. do you know anybody fit for reviewing this?
[~shv], if he has cycles.

> Replace DataNode in Pipeline when waiting for Last Packet fails
> ---
>
> Key: HDFS-12422
> URL: https://issues.apache.org/jira/browse/HDFS-12422
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: hdfs
> Attachments: HDFS-12422.001.patch, HDFS-12422.002.patch
>
>
> # Create a file with replicationFactor = 4, minReplicas = 2
> # Fail waiting for the last packet, followed by 2 exceptions when recovering 
> the leftover pipeline
> # The leftover pipeline will only have one DN, and the NN will never close 
> such a block, resulting in failure to write
> The block will stay there forever, unable to be replicated, ultimately going 
> missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13163) Move invalidated blocks to replica-trash with disk layout based on timestamp

2018-03-15 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13163:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-12996
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks [~bharatviswa].

> Move invalidated blocks to replica-trash with disk layout based on timestamp
> 
>
> Key: HDFS-13163
> URL: https://issues.apache.org/jira/browse/HDFS-13163
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: HDFS-12996
>
> Attachments: HDFS-13163-HDFS-12996.00.patch, 
> HDFS-13163-HDFS-12996.01.patch, HDFS-13163-HDFS-12996.02.patch, 
> HDFS-13163-HDFS-12996.03.patch, HDFS-13163-HDFS-12996.04.patch
>
>
> When blocks are invalidated, move them to the replica-trash directory and 
> place them in a folder based on the timestamp at which the invalidate is 
> received from the namenode.
>  
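
As a rough sketch of what a timestamp-based layout could look like (the bucket 
granularity, directory names, and class name below are assumptions for 
illustration, not necessarily what the patch implements):

{code:java}
import java.io.File;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ReplicaTrashLayoutSketch {
  // One subdirectory per hour in which the invalidate arrived, e.g.
  // <volume>/replica-trash/2018-03-15-20.
  public static File trashDirFor(File volumeDir, long invalidateTimeMs) {
    String bucket =
        new SimpleDateFormat("yyyy-MM-dd-HH").format(new Date(invalidateTimeMs));
    return new File(new File(volumeDir, "replica-trash"), bucket);
  }

  public static void main(String[] args) {
    System.out.println(trashDirFor(new File("/data/dn1"),
        System.currentTimeMillis()));
  }
}
{code}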



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12895) RBF: Add ACL support for mount table

2018-03-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401124#comment-16401124
 ] 

Lei (Eddy) Xu edited comment on HDFS-12895 at 3/15/18 9:44 PM:
---

Hi, [~linyiqun] 

Similar to HDFS-12883, should we revert this change from the 3.0.1 release? 


was (Author: eddyxu):
Hi, [~linyiqun] 

Similar to HDFS-12895, should we revert this change from 3.0.1 release? 

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12895-branch-2.001.patch, HDFS-12895.001.patch, 
> HDFS-12895.002.patch, HDFS-12895.003.patch, HDFS-12895.004.patch, 
> HDFS-12895.005.patch, HDFS-12895.006.patch, HDFS-12895.007.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode>* is the UNIX-style permission for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to *0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry when it finds the given mount table already 
> exists.
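
For illustration, a sketch of how an octal mode maps onto the read/write 
checks above, using the real {{FsPermission}}/{{FsAction}} classes; the 
wrapper class itself is hypothetical:

{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class MountTablePermissionSketch {
  public static void main(String[] args) {
    FsPermission mode = new FsPermission((short) 0755); // the default above

    // Owner may read and write the entry; group/other may only read it.
    boolean ownerCanUpdate = mode.getUserAction().implies(FsAction.WRITE);
    boolean otherCanRead = mode.getOtherAction().implies(FsAction.READ);
    boolean otherCanUpdate = mode.getOtherAction().implies(FsAction.WRITE);

    System.out.println(ownerCanUpdate); // true
    System.out.println(otherCanRead);   // true
    System.out.println(otherCanUpdate); // false
  }
}
{code}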



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2018-03-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401124#comment-16401124
 ] 

Lei (Eddy) Xu commented on HDFS-12895:
--

Hi, [~linyiqun] 

Similar to HDFS-12895, should we revert this change from the 3.0.1 release? 

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12895-branch-2.001.patch, HDFS-12895.001.patch, 
> HDFS-12895.002.patch, HDFS-12895.003.patch, HDFS-12895.004.patch, 
> HDFS-12895.005.patch, HDFS-12895.006.patch, HDFS-12895.007.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has its owner, group name and permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode>* is the UNIX-style permission for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to *0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry when it finds the given mount table already 
> exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401123#comment-16401123
 ] 

genericqa commented on HDFS-11043:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReencryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-11043 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914761/HDFS-11043.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d948ea341049 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1976e00 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23504/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23504/testReport/ |
| Max. process+thread count | 3499 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23504/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This 

[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics

2018-03-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401122#comment-16401122
 ] 

Lei (Eddy) Xu commented on HDFS-12883:
--

Hi, [~linyiqun] 

Sorry that I am late to this. Is this incompatible change necessary in 3.0.1, 
given that 3.0.0 does not have it? In general we should only have bug fixes in 
3.0.x releases.

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: RBF, incompatible
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12003) Ozone: Misc : Cleanup error messages

2018-03-15 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401096#comment-16401096
 ] 

Elek, Marton commented on HDFS-12003:
-

Not an error message, but a log message which is not so meaningful:

{code}
datanode_1  | 2018-03-15 20:43:37 INFO  VolumeProcessTemplate:96 - Success
{code}

Maybe we need a separate jira for cleaning up log messages...

> Ozone: Misc : Cleanup error messages
> 
>
> Key: HDFS-12003
> URL: https://issues.apache.org/jira/browse/HDFS-12003
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Priority: Major
>  Labels: OzonePostMerge
>
> Many error messages thrown from ozone are written for developers by 
> developers. We need to review all publicly visible error messages to make 
> sure each is correct, includes enough context (stack traces do not count), 
> and makes sense for the reader.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13284) Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY

2018-03-15 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13284:
--
Status: Patch Available  (was: Open)

> Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY
> -
>
> Key: HDFS-13284
> URL: https://issues.apache.org/jira/browse/HDFS-13284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13284.000.patch
>
>
> LowRedundancyBlocks currently has 5 priority queues:
> QUEUE_HIGHEST_PRIORITY = 0 - reserved for last-replica blocks
> QUEUE_VERY_LOW_REDUNDANCY = 1 - *if ((curReplicas * 3) < expectedReplicas)*
> QUEUE_LOW_REDUNDANCY = 2 - the rest
> QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
> QUEUE_WITH_CORRUPT_BLOCKS = 4
> The problem lies in  QUEUE_VERY_LOW_REDUNDANCY. Currently, a block that has 
> curReplicas=2 and expectedReplicas=4 is treated the same as a block with 
> curReplicas=3 and expectedReplicas=4. A block with 2/3 replicas is also put 
> into QUEUE_LOW_REDUNDANCY. 
> The proposal is to change the *{{if ((curReplicas * 3) < expectedReplicas)}}* 
> check to *{{if ((curReplicas * 2) <= expectedReplicas || curReplicas == 2)}}*
>  
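
To make the proposed change concrete, here is a small standalone comparison of 
the old and new predicates over a few (curReplicas, expectedReplicas) pairs; 
this is an illustration only, not code from the patch:

{code:java}
public class QueueCriteriaDemo {
  // Current criterion for QUEUE_VERY_LOW_REDUNDANCY.
  static boolean oldVeryLow(int cur, int expected) {
    return (cur * 3) < expected;
  }

  // Proposed criterion from this JIRA.
  static boolean newVeryLow(int cur, int expected) {
    return (cur * 2) <= expected || cur == 2;
  }

  public static void main(String[] args) {
    int[][] cases = { {2, 4}, {3, 4}, {2, 3}, {1, 4} };
    for (int[] c : cases) {
      System.out.printf("cur=%d expected=%d old=%b new=%b%n",
          c[0], c[1], oldVeryLow(c[0], c[1]), newVeryLow(c[0], c[1]));
    }
    // old: 2/4=false, 3/4=false, 2/3=false, 1/4=true
    // new: 2/4=true,  3/4=false, 2/3=true,  1/4=true
  }
}
{code}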



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13284) Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY

2018-03-15 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13284 started by Lukas Majercak.
-
> Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY
> -
>
> Key: HDFS-13284
> URL: https://issues.apache.org/jira/browse/HDFS-13284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13284.000.patch
>
>
> LowRedundancyBlocks currently has 5 priority queues:
> QUEUE_HIGHEST_PRIORITY = 0 - reserved for last-replica blocks
> QUEUE_VERY_LOW_REDUNDANCY = 1 - *if ((curReplicas * 3) < expectedReplicas)*
> QUEUE_LOW_REDUNDANCY = 2 - the rest
> QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
> QUEUE_WITH_CORRUPT_BLOCKS = 4
> The problem lies in  QUEUE_VERY_LOW_REDUNDANCY. Currently, a block that has 
> curReplicas=2 and expectedReplicas=4 is treated the same as a block with 
> curReplicas=3 and expectedReplicas=4. A block with 2/3 replicas is also put 
> into QUEUE_LOW_REDUNDANCY. 
> The proposal is to change the *{{if ((curReplicas * 3) < expectedReplicas)}}* 
> check to *{{if ((curReplicas * 2) <= expectedReplicas || curReplicas == 2)}}*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-13284) Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY

2018-03-15 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13284 stopped by Lukas Majercak.
-
> Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY
> -
>
> Key: HDFS-13284
> URL: https://issues.apache.org/jira/browse/HDFS-13284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13284.000.patch
>
>
> LowRedundancyBlocks currently has 5 priority queues:
> QUEUE_HIGHEST_PRIORITY = 0 - reserved for last-replica blocks
> QUEUE_VERY_LOW_REDUNDANCY = 1 - *if ((curReplicas * 3) < expectedReplicas)*
> QUEUE_LOW_REDUNDANCY = 2 - the rest
> QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
> QUEUE_WITH_CORRUPT_BLOCKS = 4
> The problem lies in  QUEUE_VERY_LOW_REDUNDANCY. Currently, a block that has 
> curReplicas=2 and expectedReplicas=4 is treated the same as a block with 
> curReplicas=3 and expectedReplicas=4. A block with 2/3 replicas is also put 
> into QUEUE_LOW_REDUNDANCY. 
> The proposal is to change the *{{if ((curReplicas * 3) < expectedReplicas)}}* 
> check to *{{if ((curReplicas * 2) <= expectedReplicas || curReplicas == 2)}}*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13250) RBF: Router to manage requests across multiple subclusters

2018-03-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13250:
---
Status: Patch Available  (was: Open)

> RBF: Router to manage requests across multiple subclusters
> --
>
> Key: HDFS-13250
> URL: https://issues.apache.org/jira/browse/HDFS-13250
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13250.000.patch, HDFS-13250.001.patch, 
> HDFS-13250.002.patch
>
>
> HDFS-13124 introduces the concept of mount points spanning multiple 
> subclusters. The Router should distribute the requests across these 
> subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13250) RBF: Router to manage requests across multiple subclusters

2018-03-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401018#comment-16401018
 ] 

Íñigo Goiri commented on HDFS-13250:


Thanks [~linyiqun] for the comments; I posted [^HDFS-13250.002.patch] with 
most of the fixes.
A couple of comments:
* I didn't quite get the comment about {{getFileInfoAll()}}: we first check 
for the number of directories and then check for the files separately. Do you 
mean that line 1165 should check whether it's a file?
* The {{append()}} will already find the subcluster where the file is and 
append to that one; I added a unit test for that.

> RBF: Router to manage requests across multiple subclusters
> --
>
> Key: HDFS-13250
> URL: https://issues.apache.org/jira/browse/HDFS-13250
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13250.000.patch, HDFS-13250.001.patch, 
> HDFS-13250.002.patch
>
>
> HDFS-13124 introduces the concept of mount points spanning multiple 
> subclusters. The Router should distribute the requests across these 
> subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13250) RBF: Router to manage requests across multiple subclusters

2018-03-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13250:
---
Attachment: HDFS-13250.002.patch

> RBF: Router to manage requests across multiple subclusters
> --
>
> Key: HDFS-13250
> URL: https://issues.apache.org/jira/browse/HDFS-13250
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13250.000.patch, HDFS-13250.001.patch, 
> HDFS-13250.002.patch
>
>
> HDFS-13124 introduces the concept of mount points spanning multiple 
> subclusters. The Router should distribute the requests across these 
> subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13295) Namenode doesn't leave safemode if dfs.namenode.safemode.replication.min set < dfs.namenode.replication.min

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400984#comment-16400984
 ] 

genericqa commented on HDFS-13295:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 112 unchanged - 0 fixed = 114 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.tools.TestGetGroups |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits |
|   | hadoop.hdfs.server.namenode.TestNameNodeRecovery |
|   | hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.TestSetTimes |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | 

[jira] [Commented] (HDFS-13040) Kerberized inotify client fails despite kinit properly

2018-03-15 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400979#comment-16400979
 ] 

Xiao Chen commented on HDFS-13040:
--

For reference [the test error with UGI mentioned 
above|https://issues.apache.org/jira/browse/HDFS-13040?focusedCommentId=16378191=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16378191]
 is HADOOP-14699, fixed by HADOOP-9747.

 

Not clear why branch-2 didn't fail. Guessing UGI behavior changed :)

> Kerberized inotify client fails despite kinit properly
> --
>
> Key: HDFS-13040
> URL: https://issues.apache.org/jira/browse/HDFS-13040
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
> Environment: Kerberized, HA cluster, iNotify client, CDH5.10.2
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2
>
> Attachments: HDFS-13040.001.patch, HDFS-13040.02.patch, 
> HDFS-13040.03.patch, HDFS-13040.04.patch, HDFS-13040.05.patch, 
> HDFS-13040.06.patch, HDFS-13040.07.patch, HDFS-13040.branch-2.01.patch, 
> HDFS-13040.half.test.patch, TestDFSInotifyEventInputStreamKerberized.java, 
> TransactionReader.java
>
>
> This issue is similar to HDFS-10799.
> HDFS-10799 turned out to be a client-side issue where the client is 
> responsible for actively renewing its Kerberos ticket.
> However, we found that in a slightly different setup, even if the client has 
> valid Kerberos credentials, inotify still fails.
> Suppose client uses principal h...@example.com, 
>  namenode 1 uses server principal hdfs/nn1.example@example.com
>  namenode 2 uses server principal hdfs/nn2.example@example.com
> *After the NameNodes have been up for longer than the Kerberos ticket 
> lifetime*, the client fails with the following error:
> {noformat}
> 18/01/19 11:23:02 WARN security.UserGroupInformation: 
> PriviledgedActionException as:h...@gce.cloudera.com (auth:KERBEROS) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): We 
> encountered an error reading 
> https://nn2.example.com:8481/getJournal?jid=ns1=8662=-60%3A353531113%3A0%3Acluster3,
>  
> https://nn1.example.com:8481/getJournal?jid=ns1=8662=-60%3A353531113%3A0%3Acluster3.
>   During automatic edit log failover, we noticed that all of the remaining 
> edit log streams are shorter than the current one!  The best remaining edit 
> log ends at transaction 8683, but we thought we could read up to transaction 
> 8684.  If you continue, metadata will be lost forever!
> at 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:213)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.readOp(NameNodeRpcServer.java:1701)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getEditsFromTxid(NameNodeRpcServer.java:1763)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getEditsFromTxid(AuthorizationProviderProxyClientProtocol.java:1011)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getEditsFromTxid(ClientNamenodeProtocolServerSideTranslatorPB.java:1490)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> Typically, if the NameNode has an expired Kerberos ticket, the error 
> handling for regular edit log tailing lets the NameNode re-login with its 
> own Kerberos principal. However, when inotify uses the same code path to 
> retrieve edits, the current user is the inotify client's principal, so 
> unless the client uses the same principal as the NameNode, the NameNode 
> can't re-login on behalf of the client.
> Therefore, a more appropriate approach is to use a proxy user so that the 
> NameNode can retrieve edits on behalf of the client.
> I will attach a patch to fix it. This patch has been verified to work for a 
> CDH5.10.2 cluster, however it seems 
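
For reference, a sketch of how the proxy-user approach described above could 
look with the real {{UserGroupInformation}} API; the edit-fetching call and 
the wrapper class are placeholders, not the actual patch:

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserSketch {
  static Object fetchEditsAs(String clientUser) throws Exception {
    // The NameNode's own (login) credentials stay the "real" user ...
    UserGroupInformation realUser = UserGroupInformation.getLoginUser();
    // ... while the request executes on behalf of the inotify client.
    UserGroupInformation proxy =
        UserGroupInformation.createProxyUser(clientUser, realUser);
    return proxy.doAs(
        (PrivilegedExceptionAction<Object>) () -> readEditsFromTxid());
  }

  private static Object readEditsFromTxid() {
    return null; // placeholder for the real edit-log read
  }
}
{code}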

[jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400978#comment-16400978
 ] 

genericqa commented on HDFS-12618:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 52s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 390 unchanged - 
3 fixed = 393 total (was 393) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 112 unchanged - 4 fixed = 115 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-12618 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904662/HDFS-12618.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e4c39aff985d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5e013d5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23503/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23503/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Updated] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-03-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-13217:
-
Target Version/s: 3.1.0, 3.0.2  (was: 3.0.1, 3.0.2)

> Log audit event only used last EC policy name when add multiple policies from 
> file 
> ---
>
> Key: HDFS-13217
> URL: https://issues.apache.org/jira/browse/HDFS-13217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13217.001.patch, HDFS-13217.002.patch
>
>
> When I read the addErasureCodingPolicies() method of the FSNamesystem class 
> in the NameNode, I found that the following code only uses the last EC 
> policy name for logAuditEvent, so the audit log can't track all the policies 
> when adding multiple erasure coding policies to the 
> ErasureCodingPolicyManager. Thanks.
> {code:java|title=FSNamesystem.java|borderStyle=solid}
> try {
>   checkOperation(OperationCategory.WRITE);
>   checkNameNodeSafeMode("Cannot add erasure coding policy");
>   for (ErasureCodingPolicy policy : policies) {
> try {
>   ErasureCodingPolicy newPolicy =
>   FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
>   logRetryCache);
>   addECPolicyName = newPolicy.getName();
>   responses.add(new AddErasureCodingPolicyResponse(newPolicy));
> } catch (HadoopIllegalArgumentException e) {
>   responses.add(new AddErasureCodingPolicyResponse(policy, e));
> }
>   }
>   success = true;
>   return responses.toArray(new AddErasureCodingPolicyResponse[0]);
> } finally {
>   writeUnlock(operationName);
>   if (success) {
> getEditLog().logSync();
>   }
>   logAuditEvent(success, operationName, addECPolicyName, null, null);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13224) RBF: Resolvers to support mount points across multiple subclusters

2018-03-15 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13224:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.0.2
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

> RBF: Resolvers to support mount points across multiple subclusters
> --
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch, HDFS-13224.003.patch, HDFS-13224.004.patch, 
> HDFS-13224.005.patch, HDFS-13224.006.patch, HDFS-13224.007.patch, 
> HDFS-13224.008.patch, HDFS-13224.009.patch, HDFS-13224.010.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.
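
For context, a hedged sketch of creating such a multi-destination mount entry 
with the Router admin CLI; the {{-order}} flag and its values follow the 
resolvers added here, but the exact syntax is an assumption to be checked 
against the RBF documentation.

{noformat}
# Assumed syntax: spread files under /data across subclusters ns1 and ns2,
# picking the destination subcluster at random for each new file.
hdfs dfsrouteradmin -add /data ns1,ns2 /data -order RANDOM
{noformat}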



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13284) Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY

2018-03-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400955#comment-16400955
 ] 

Íñigo Goiri commented on HDFS-13284:


Thanks [~lukmajercak], this makes sense; I would push this.
[~ste...@apache.org], you were the committer for HDFS-2485, which touched this 
code, but after looking more closely that was just a refactor.


> Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY
> -
>
> Key: HDFS-13284
> URL: https://issues.apache.org/jira/browse/HDFS-13284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13284.000.patch
>
>
> LowRedundancyBlocks currently has 5 priority queues:
> QUEUE_HIGHEST_PRIORITY = 0                         - reserved for last 
> replica blocks
>  QUEUE_VERY_LOW_REDUNDANCY = 1             - *if ((curReplicas * 3) < 
> expectedReplicas)*
>  QUEUE_LOW_REDUNDANCY = 2                         - the rest
>  QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
>  QUEUE_WITH_CORRUPT_BLOCKS = 4
> The problem lies in  QUEUE_VERY_LOW_REDUNDANCY. Currently, a block that has 
> curReplicas=2 and expectedReplicas=4 is treated the same as a block with 
> curReplicas=3 and expectedReplicas=4. A block with 2/3 replicas is also put 
> into QUEUE_LOW_REDUNDANCY. 
> The proposal is to change the *{{if ((curReplicas * 3) < expectedReplicas)}}* 
> check to *{{if ((curReplicas * 2) <= expectedReplicas || curReplicas == 2)}}*
>  
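
For reference, a standalone sketch contrasting the two checks; this 
illustrates the proposal and is not the patch itself.

{code:java}
// Illustration only: current vs. proposed criteria for
// QUEUE_VERY_LOW_REDUNDANCY.
static boolean veryLowCurrent(int curReplicas, int expectedReplicas) {
  return (curReplicas * 3) < expectedReplicas;
}

static boolean veryLowProposed(int curReplicas, int expectedReplicas) {
  return (curReplicas * 2) <= expectedReplicas || curReplicas == 2;
}

// veryLowCurrent(2, 4) == false and veryLowCurrent(3, 4) == false:
//   today both land in QUEUE_LOW_REDUNDANCY.
// veryLowProposed(2, 4) == true while veryLowProposed(3, 4) == false:
//   a 2/4 block is prioritized ahead of a 3/4 block.
// veryLowProposed(2, 3) == true:
//   a 2/3 block also moves up from QUEUE_LOW_REDUNDANCY.
{code}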



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400949#comment-16400949
 ] 

Íñigo Goiri commented on HDFS-11043:


So the current plan is to not run TestWebHdfsTimeouts in qbt?
I would prefer to fix it for Linux (which is the one that runs "daily") and 
skip it on the other platforms.

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-11043.000.patch, 
> org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-03-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400946#comment-16400946
 ] 

Lei (Eddy) Xu commented on HDFS-13217:
--

Thanks a lot for the contribution, [~liaoyuxiangqin]. It looks great overall.

One minor comment:

{code}
logAuditEvent(success, operationName, Arrays.toString(
 addECPolicyNames.toArray(new String[0])), null, null);
{code}

{{addECPolicyNames}} is a List, so you can use 
{{addECPolicyNames.toString()}} directly instead of converting it to a raw 
array.

+1 pending this change.
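
A quick standalone check of the equivalence; the policy names below are just 
examples, not taken from the patch.

{code:java}
import java.util.Arrays;
import java.util.List;

public class AuditNameSketch {
  public static void main(String[] args) {
    List<String> addECPolicyNames =
        Arrays.asList("RS-6-3-1024k", "XOR-2-1-1024k");
    // Both lines print "[RS-6-3-1024k, XOR-2-1-1024k]", so the array
    // round-trip adds nothing.
    System.out.println(
        Arrays.toString(addECPolicyNames.toArray(new String[0])));
    System.out.println(addECPolicyNames.toString());
  }
}
{code}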

> Log audit event only used last EC policy name when add multiple policies from 
> file 
> ---
>
> Key: HDFS-13217
> URL: https://issues.apache.org/jira/browse/HDFS-13217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13217.001.patch, HDFS-13217.002.patch
>
>
> When I read the addErasureCodingPolicies() method of the FSNamesystem class 
> in the NameNode, I found that the following code only uses the last EC 
> policy name for logAuditEvent, so the audit log can't track all the policies 
> when adding multiple erasure coding policies to the 
> ErasureCodingPolicyManager. Thanks.
> {code:java|title=FSNamesystem.java|borderStyle=solid}
> try {
>   checkOperation(OperationCategory.WRITE);
>   checkNameNodeSafeMode("Cannot add erasure coding policy");
>   for (ErasureCodingPolicy policy : policies) {
> try {
>   ErasureCodingPolicy newPolicy =
>   FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
>   logRetryCache);
>   addECPolicyName = newPolicy.getName();
>   responses.add(new AddErasureCodingPolicyResponse(newPolicy));
> } catch (HadoopIllegalArgumentException e) {
>   responses.add(new AddErasureCodingPolicyResponse(policy, e));
> }
>   }
>   success = true;
>   return responses.toArray(new AddErasureCodingPolicyResponse[0]);
> } finally {
>   writeUnlock(operationName);
>   if (success) {
> getEditLog().logSync();
>   }
>   logAuditEvent(success, operationName, addECPolicyName, null, null);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400944#comment-16400944
 ] 

Chao Sun edited comment on HDFS-11043 at 3/15/18 6:43 PM:
--

[~elgoiri]: yes I believe so - from the uname in the report you can see it's on 
Ubuntu.


was (Author: csun):
[~elgoiri]]: yes I believe so - from the uname in the report you can see it's 
on Ubuntu.

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-11043.000.patch, 
> org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400944#comment-16400944
 ] 

Chao Sun edited comment on HDFS-11043 at 3/15/18 6:43 PM:
--

[~elgoiri]]: yes I believe so - from the uname in the report you can see it's 
on Ubuntu.


was (Author: csun):
[~goiri]: yes I believe so - from the uname in the report you can see it's on 
Ubuntu.

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-11043.000.patch, 
> org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400944#comment-16400944
 ] 

Chao Sun commented on HDFS-11043:
-

[~goiri]: yes I believe so - from the uname in the report you can see it's on 
Ubuntu.

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-11043.000.patch, 
> org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400931#comment-16400931
 ] 

Íñigo Goiri commented on HDFS-11043:


AFAIK, the qbt runs on Linux, right?
The build from yesterday had this test broken too.

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-11043.000.patch, 
> org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer

2018-03-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400922#comment-16400922
 ] 

Íñigo Goiri commented on HDFS-12919:


This JIRA added a few methods to RouterRpcServer and RouterRpcClient in 
addition to the EC changes.
This made branch-2 hard to maintain, so I committed 
[^HDFS-12919-branch-2.000.patch] with those fixes.

> RBF: Support erasure coding methods in RouterRpcServer
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Critical
>  Labels: RBF
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12919-branch-2.000.patch, 
> HDFS-12919-branch-3.001.patch, HDFS-12919-branch-3.002.patch, 
> HDFS-12919-branch-3.003.patch, HDFS-12919.000.patch, HDFS-12919.001.patch, 
> HDFS-12919.002.patch, HDFS-12919.003.patch, HDFS-12919.004.patch, 
> HDFS-12919.005.patch, HDFS-12919.006.patch, HDFS-12919.007.patch, 
> HDFS-12919.008.patch, HDFS-12919.009.patch, HDFS-12919.010.patch, 
> HDFS-12919.011.patch, HDFS-12919.012.patch, HDFS-12919.013.patch, 
> HDFS-12919.013.patch, HDFS-12919.014.patch, HDFS-12919.015.patch, 
> HDFS-12919.016.patch, HDFS-12919.017.patch, HDFS-12919.018.patch, 
> HDFS-12919.019.patch, HDFS-12919.020.patch, HDFS-12919.021.patch, 
> HDFS-12919.022.patch, HDFS-12919.023.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13224) RBF: Resolvers to support mount points across multiple subclusters

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400909#comment-16400909
 ] 

Hudson commented on HDFS-13224:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13845 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13845/])
HDFS-13224. RBF: Resolvers to support mount points across multiple (inigoiri: 
rev e71bc00a471422ddb26dd54e706f09f0fe09925c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/FederationProtocol.proto
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/order/TestLocalResolver.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/utils/package-info.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/RandomResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MountTablePBImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MultipleDestinationMountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/DestinationOrder.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/utils/ConsistentHashRing.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/OrderedResolver.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMultipleDestinationResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/HashFirstResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/LocalResolver.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/order/HashResolver.java


> RBF: Resolvers to support mount points across multiple subclusters
> --
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch, HDFS-13224.003.patch, HDFS-13224.004.patch, 
> HDFS-13224.005.patch, HDFS-13224.006.patch, HDFS-13224.007.patch, 
> HDFS-13224.008.patch, HDFS-13224.009.patch, HDFS-13224.010.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13280) WebHDFS: Fix NPE in get snasphottable directory list call

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400910#comment-16400910
 ] 

Hudson commented on HDFS-13280:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13845 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13845/])
HDFS-13280. WebHDFS: Fix NPE in get snasphottable directory list call. (xyao: 
rev 78b05fde6c41f7a6b2dc2d99b435d1d83242590c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java


> WebHDFS: Fix NPE in get snasphottable directory list call
> -
>
> Key: HDFS-13280
> URL: https://issues.apache.org/jira/browse/HDFS-13280
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HDFS-13280.001.patch, HDFS-13280.002.patch, 
> HDFS-13280.003.patch
>
>
> WebHdfs throws NPE when snapshottable directory status list is null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer

2018-03-15 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12919:
---
Attachment: HDFS-12919-branch-2.000.patch

> RBF: Support erasure coding methods in RouterRpcServer
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Critical
>  Labels: RBF
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-12919-branch-2.000.patch, 
> HDFS-12919-branch-3.001.patch, HDFS-12919-branch-3.002.patch, 
> HDFS-12919-branch-3.003.patch, HDFS-12919.000.patch, HDFS-12919.001.patch, 
> HDFS-12919.002.patch, HDFS-12919.003.patch, HDFS-12919.004.patch, 
> HDFS-12919.005.patch, HDFS-12919.006.patch, HDFS-12919.007.patch, 
> HDFS-12919.008.patch, HDFS-12919.009.patch, HDFS-12919.010.patch, 
> HDFS-12919.011.patch, HDFS-12919.012.patch, HDFS-12919.013.patch, 
> HDFS-12919.013.patch, HDFS-12919.014.patch, HDFS-12919.015.patch, 
> HDFS-12919.016.patch, HDFS-12919.017.patch, HDFS-12919.018.patch, 
> HDFS-12919.019.patch, HDFS-12919.020.patch, HDFS-12919.021.patch, 
> HDFS-12919.022.patch, HDFS-12919.023.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400898#comment-16400898
 ] 

Chao Sun edited comment on HDFS-11043 at 3/15/18 6:26 PM:
--

Submitted patch v0. Tested on both Mac and Linux. [~xyao]: could you review it?


was (Author: csun):
Submitted patch v0. [~xyao]: could you review it?

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-11043.000.patch, 
> org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-11043:

Assignee: Chao Sun  (was: John Zhuge)
  Status: Patch Available  (was: Open)

Submitted patch v0. [~xyao]: could you review it?

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-11043.000.patch, 
> org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-11043:

Attachment: HDFS-11043.000.patch

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: John Zhuge
>Priority: Major
> Attachments: HDFS-11043.000.patch, 
> org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13280) WebHDFS: Fix NPE in get snasphottable directory list call

2018-03-15 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13280:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Thanks, [~ljain] for the contribution. I've committed the patch to trunk and 
branch-3.1

> WebHDFS: Fix NPE in get snasphottable directory list call
> -
>
> Key: HDFS-13280
> URL: https://issues.apache.org/jira/browse/HDFS-13280
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HDFS-13280.001.patch, HDFS-13280.002.patch, 
> HDFS-13280.003.patch
>
>
> WebHdfs throws NPE when snapshottable directory status list is null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-03-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400858#comment-16400858
 ] 

genericqa commented on HDFS-13281:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  5s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13281 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914714/HDFS-13281.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b11d41c61cff 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5e013d5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-13284) Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY

2018-03-15 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400851#comment-16400851
 ] 

Lukas Majercak commented on HDFS-13284:
---

Added the description [~elgoiri].

> Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY
> -
>
> Key: HDFS-13284
> URL: https://issues.apache.org/jira/browse/HDFS-13284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13284.000.patch
>
>
> LowRedundancyBlocks currently has 5 priority queues:
> QUEUE_HIGHEST_PRIORITY = 0                         - reserved for last 
> replica blocks
>  QUEUE_VERY_LOW_REDUNDANCY = 1             - *if ((curReplicas * 3) < 
> expectedReplicas)*
>  QUEUE_LOW_REDUNDANCY = 2                         - the rest
>  QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
>  QUEUE_WITH_CORRUPT_BLOCKS = 4
> The problem lies in  QUEUE_VERY_LOW_REDUNDANCY. Currently, a block that has 
> curReplicas=2 and expectedReplicas=4 is treated the same as a block with 
> curReplicas=3 and expectedReplicas=4. A block with 2/3 replicas is also put 
> into QUEUE_LOW_REDUNDANCY. 
> The proposal is to change the *{{if ((curReplicas * 3) < expectedReplicas)}}* 
> check to *{{if ((curReplicas * 2) <= expectedReplicas || curReplicas == 2)}}*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13284) Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY

2018-03-15 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13284:
--
Description: 
LowRedundancyBlocks currently has 5 priority queues:

QUEUE_HIGHEST_PRIORITY = 0                         - reserved for last replica 
blocks
 QUEUE_VERY_LOW_REDUNDANCY = 1             - *if ((curReplicas * 3) < 
expectedReplicas)*
 QUEUE_LOW_REDUNDANCY = 2                         - the rest
 QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
 QUEUE_WITH_CORRUPT_BLOCKS = 4

The problem lies in  QUEUE_VERY_LOW_REDUNDANCY. Currently, a block that has 
curReplicas=2 and expectedReplicas=4 is treated the same as a block with 
curReplicas=3 and expectedReplicas=4. A block with 2/3 replicas is also put 
into QUEUE_LOW_REDUNDANCY. 

The proposal is to change the *{{if ((curReplicas * 3) < expectedReplicas)}}* 
check to *{{if ((curReplicas * 2) <= expectedReplicas || curReplicas == 2)}}*

 

  was:
LowRedundancyBlocks currently has 5 priority queues:

QUEUE_HIGHEST_PRIORITY = 0                         - reserved for last replica 
blocks
QUEUE_VERY_LOW_REDUNDANCY = 1             - 
QUEUE_LOW_REDUNDANCY = 2
QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
QUEUE_WITH_CORRUPT_BLOCKS = 4

 


> Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY
> -
>
> Key: HDFS-13284
> URL: https://issues.apache.org/jira/browse/HDFS-13284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13284.000.patch
>
>
> LowRedundancyBlocks currently has 5 priority queues:
> QUEUE_HIGHEST_PRIORITY = 0                         - reserved for last 
> replica blocks
>  QUEUE_VERY_LOW_REDUNDANCY = 1             - *if ((curReplicas * 3) < 
> expectedReplicas)*
>  QUEUE_LOW_REDUNDANCY = 2                         - the rest
>  QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
>  QUEUE_WITH_CORRUPT_BLOCKS = 4
> The problem lies in  QUEUE_VERY_LOW_REDUNDANCY. Currently, a block that has 
> curReplicas=2 and expectedReplicas=4 is treated the same as a block with 
> curReplicas=3 and expectedReplicas=4. A block with 2/3 replicas is also put 
> into QUEUE_LOW_REDUNDANCY. 
> The proposal is to change the *{{if ((curReplicas * 3) < expectedReplicas)}}* 
> check to *{{if ((curReplicas * 2) <= expectedReplicas || curReplicas == 2)}}*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13215) RBF: Move Router to its own module

2018-03-15 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400845#comment-16400845
 ] 

Wei Yan commented on HDFS-13215:


{quote}It looks like Yetus didn't run the unit tests for RBF
{quote}
Found the following message in 
[https://builds.apache.org/job/PreCommit-HDFS-Build/23499/artifact/out/patch-unit-root.txt]:
{noformat}
[INFO]
[INFO] Skipping Apache Hadoop HDFS-RBF
[INFO] This project has been banned from the build due to previous failures.{noformat}
It looks like Yetus skipped some HDFS components due to the test failures in 
hadoop-hdfs itself.

> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
> Attachments: HDFS-13215.000.patch, HDFS-13215.001.patch, 
> HDFS-13215.002.patch, HDFS-13215.003.patch, HDFS-13215.004.patch
>
>
> We are splitting the HDFS client code base and potentially Router-based 
> Federation is also independent enough to be in its own package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13284) Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY

2018-03-15 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13284:
--
Description: 
LowRedundancyBlocks currently has 5 priority queues:

QUEUE_HIGHEST_PRIORITY = 0                         - reserved for last replica 
blocks
QUEUE_VERY_LOW_REDUNDANCY = 1             - 
QUEUE_LOW_REDUNDANCY = 2
QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
QUEUE_WITH_CORRUPT_BLOCKS = 4

 

> Adjust criteria for LowRedundancyBlocks.QUEUE_VERY_LOW_REDUNDANCY
> -
>
> Key: HDFS-13284
> URL: https://issues.apache.org/jira/browse/HDFS-13284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13284.000.patch
>
>
> LowRedundancyBlocks currently has 5 priority queues:
> QUEUE_HIGHEST_PRIORITY = 0                         - reserved for last 
> replica blocks
> QUEUE_VERY_LOW_REDUNDANCY = 1             - 
> QUEUE_LOW_REDUNDANCY = 2
> QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3
> QUEUE_WITH_CORRUPT_BLOCKS = 4
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13251) Avoid using hard coded datanode data dirs in unit tests

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400838#comment-16400838
 ] 

Hudson commented on HDFS-13251:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13844 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13844/])
HDFS-13251. Avoid using hard coded datanode data dirs in unit (xyao: rev 
da777a5498e73f9a44e810dc6771e5c8fe37b6f6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java


> Avoid using hard coded datanode data dirs in unit tests
> ---
>
> Key: HDFS-13251
> URL: https://issues.apache.org/jira/browse/HDFS-13251
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HDFS-13251.000.patch, HDFS-13251.001.patch, 
> HDFS-13251.002.patch, HDFS-13251.003.addendum.patch, HDFS-13251.003.patch
>
>
> There are a few unit tests that rely on hard-coded MiniDFSCluster data dir 
> names.
>  
>  * TestDataNodeVolumeFailureToleration
>  * TestDataNodeVolumeFailureReporting
>  * TestDiskBalancerCommand
>  * TestBlockStatsMXBean
>  * TestDataNodeVolumeMetrics
>  * TestDFSAdmin
>  * TestDataNodeHotSwapVolumes
>  * TestDataNodeVolumeFailure
> This ticket is opened to use
> {code:java}
> // use the MiniDFSCluster helper
> MiniDFSCluster#getInstanceStorageDir(0, 1);
> // instead of hard-coding the data dir name like below
> new File(cluster.getDataDirectory(), "data1");{code}
>  
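
A short sketch of the helper in use inside an existing MiniDFSCluster-based 
test; the surrounding fixture ({{cluster}}) is assumed.

{code:java}
import static org.junit.Assert.assertTrue;

import java.io.File;

// Second storage dir (index 1) of the first datanode (index 0), resolved
// through the cluster API instead of a hard-coded "data1"-style name.
File storageDir = cluster.getInstanceStorageDir(0, 1);
assertTrue(storageDir.getPath(), storageDir.exists());
{code}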



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12723) TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing consistently.

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400837#comment-16400837
 ] 

Hudson commented on HDFS-12723:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13844 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13844/])
HDFS-12723. (inigoiri: rev 6de135169eaaba9a4707d2bef380793ef91478d7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithMissingBlocks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing 
> consistently.
> 
>
> Key: HDFS-12723
> URL: https://issues.apache.org/jira/browse/HDFS-12723
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rushabh S Shah
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.0.2, 3.2.0
>
> Attachments: HDFS-12723.000.patch, HDFS-12723.001.patch, 
> HDFS-12723.002.patch
>
>
> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks is timing 
> out consistently on my local machine.
> {noformat}
> Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 132.405 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> testReadFileWithMissingBlocks(org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks)
>   Time elapsed: 132.171 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for /foo to have all 
> the internalBlocks
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:295)
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:256)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.readFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:98)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:82)
> Results :
> Tests in error: 
>   
> TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks:82->readFileWithMissingBlocks:98
>  » Timeout
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400835#comment-16400835
 ] 

Chao Sun commented on HDFS-11043:
-

Thanks [~xyao] for the suggestion! Yes, this is a much better solution. Will 
attach a patch soon.

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: John Zhuge
>Priority: Major
> Attachments: org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13280) WebHDFS: Fix NPE in get snasphottable directory list call

2018-03-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400809#comment-16400809
 ] 

Xiaoyu Yao commented on HDFS-13280:
---

+1 for the v3 patch. I will commit it shortly. 

> WebHDFS: Fix NPE in get snasphottable directory list call
> -
>
> Key: HDFS-13280
> URL: https://issues.apache.org/jira/browse/HDFS-13280
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13280.001.patch, HDFS-13280.002.patch, 
> HDFS-13280.003.patch
>
>
> WebHdfs throws NPE when snapshottable directory status list is null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13258) Ozone: restructure Hdsl/Ozone code to separated maven subprojects

2018-03-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-13258:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~elek], [~xyao], [~ajayydv], [~nandakumar131], [~msingh], [~ljain] thanks for 
the contributions and getting this huge reorg done. I have committed this to 
the feature branch.

> Ozone: restructure Hdsl/Ozone code to separated maven subprojects
> -
>
> Key: HDFS-13258
> URL: https://issues.apache.org/jira/browse/HDFS-13258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13258-HDFS-7240.001.patch, 
> HDFS-13258-HDFS-7240.002.patch, HDFS-13258-HDFS-7240.003.patch, 
> HDFS-13258-HDFS-7240.004.patch, HDFS-13258-HDFS-7240.005.patch, 
> HDFS-13258-HDFS-7240.006.patch, HDFS-13258-HDFS-7240.007.patch
>
>
> According to the merge discussion on the hdfs-dev/hadoop-dev lists, it would 
> be easier to review and maintain the hdsl/ozone code if it were:
> 1. separated from hadoop-hdfs/hadoop-common
> 2. more structured
> This jira is about moving all the hdsl/ozone code out to separate maven 
> projects.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400803#comment-16400803
 ] 

Xiaoyu Yao commented on HDFS-11043:
---

Thanks [~csun] for looking into this. Since the test behavior is OS-specific, 
maybe we could add {{assumeTrue(!Shell.LINUX)}} for this test instead of 
removing it completely.
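
A minimal sketch of that guard as a JUnit 4 assumption; the method name and 
placement are illustrative.

{code:java}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.Shell;
import org.junit.Before;

@Before
public void skipOnLinux() {
  // Skip (rather than fail) the timeout tests on Linux, where the
  // OS-specific backlog behavior makes them unreliable, instead of
  // removing them entirely.
  assumeTrue(!Shell.LINUX);
}
{code}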

 

 

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: John Zhuge
>Priority: Major
> Attachments: org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400766#comment-16400766
 ] 

Chao Sun edited comment on HDFS-11043 at 3/15/18 5:26 PM:
--

Since the backlog queue implementation is rather OS-specific (different 
behaviors on BSD and Linux), I'd suggest removing the relevant tests on 
connection timeout. cc [~chris.douglas], [~xyao], [~goiri] for your input.


was (Author: csun):
Since the backlog queue implementation is rather OS-specific (different 
behaviors on BSD and Linux), I'd suggest removing the relevant tests on 
connection timeout. cc [~chris.douglas], [~xyao] for your input.

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: John Zhuge
>Priority: Major
> Attachments: org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails

2018-03-15 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400766#comment-16400766
 ] 

Chao Sun commented on HDFS-11043:
-

Since the backlog queue implementation is rather OS-specific (different 
behaviors on BSD and Linux), I'd suggest removing the relevant tests on 
connection timeout. cc [~chris.douglas], [~xyao] for your input.
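
For context, a self-contained sketch of the OS-dependent behavior in 
question, mirroring the backlog-filling trick the test relies on; the backlog 
size and timeout values here are arbitrary.

{code:java}
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.util.ArrayList;
import java.util.List;

public class BacklogSketch {
  public static void main(String[] args) throws Exception {
    // Backlog of 1 and no accept() calls: further connects must eventually
    // time out, but how many succeed first differs between Linux and BSD.
    try (ServerSocket server = new ServerSocket(0, 1)) {
      List<Socket> held = new ArrayList<>();
      try {
        while (true) {
          Socket s = new Socket();
          s.connect(
              new InetSocketAddress("127.0.0.1", server.getLocalPort()), 500);
          held.add(s);
        }
      } catch (SocketTimeoutException expected) {
        System.out.println(
            "Connections accepted before timeout: " + held.size());
      }
    }
  }
}
{code}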

> TestWebHdfsTimeouts fails
> -
>
> Key: HDFS-11043
> URL: https://issues.apache.org/jira/browse/HDFS-11043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: John Zhuge
>Priority: Major
> Attachments: org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt
>
>
> I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at 
> least on trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13251) Avoid using hard coded datanode data dirs in unit tests

2018-03-15 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13251:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the addendum patch. I've committed it to the trunk and 
branch-3.1. 

> Avoid using hard coded datanode data dirs in unit tests
> ---
>
> Key: HDFS-13251
> URL: https://issues.apache.org/jira/browse/HDFS-13251
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.2.0
>
> Attachments: HDFS-13251.000.patch, HDFS-13251.001.patch, 
> HDFS-13251.002.patch, HDFS-13251.003.addendum.patch, HDFS-13251.003.patch
>
>
> There are a few unit tests that rely on hard-coded MiniDFSCluster data dir 
> names.
>  
>  * TestDataNodeVolumeFailureToleration
>  * TestDataNodeVolumeFailureReporting
>  * TestDiskBalancerCommand
>  * TestBlockStatsMXBean
>  * TestDataNodeVolumeMetrics
>  * TestDFSAdmin
>  * TestDataNodeHotSwapVolumes
>  * TestDataNodeVolumeFailure
> This ticket is opened to use
> {code:java}
> // use the MiniDFSCluster helper
> MiniDFSCluster#getInstanceStorageDir(0, 1);
> // instead of hard-coding the data dir name like below
> new File(cluster.getDataDirectory(), "data1");{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12723) TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing consistently.

2018-03-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400758#comment-16400758
 ] 

Íñigo Goiri commented on HDFS-12723:


Thanks [~ajayydv] for the fix.
I committed to trunk, branch-3.1 and branch-3.0.

> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing 
> consistently.
> 
>
> Key: HDFS-12723
> URL: https://issues.apache.org/jira/browse/HDFS-12723
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rushabh S Shah
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.0.2, 3.2.0
>
> Attachments: HDFS-12723.000.patch, HDFS-12723.001.patch, 
> HDFS-12723.002.patch
>
>
> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks is timing 
> out consistently on my local machine.
> {noformat}
> Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 132.405 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> testReadFileWithMissingBlocks(org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks)
>   Time elapsed: 132.171 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for /foo to have all 
> the internalBlocks
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:295)
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:256)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.readFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:98)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:82)
> Results :
> Tests in error: 
>   
> TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks:82->readFileWithMissingBlocks:98
>  » Timeout
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> {noformat}






[jira] [Updated] (HDFS-12723) TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing consistently.

2018-03-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12723:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.0.2
   3.1.0
   Status: Resolved  (was: Patch Available)

> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing 
> consistently.
> 
>
> Key: HDFS-12723
> URL: https://issues.apache.org/jira/browse/HDFS-12723
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rushabh S Shah
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0, 3.0.2, 3.2.0
>
> Attachments: HDFS-12723.000.patch, HDFS-12723.001.patch, 
> HDFS-12723.002.patch
>
>
> TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks is timing 
> out consistently on my local machine.
> {noformat}
> Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 132.405 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
> testReadFileWithMissingBlocks(org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks)
>   Time elapsed: 132.171 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for /foo to have all 
> the internalBlocks
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:295)
>   at 
> org.apache.hadoop.hdfs.StripedFileTestUtil.waitBlockGroupsReported(StripedFileTestUtil.java:256)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.readFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:98)
>   at 
> org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:82)
> Results :
> Tests in error: 
>   
> TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks:82->readFileWithMissingBlocks:98
>  » Timeout
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
> {noformat}






[jira] [Commented] (HDFS-13215) RBF: Move Router to its own module

2018-03-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400751#comment-16400751
 ] 

Íñigo Goiri commented on HDFS-13215:


It looks like Yetus didn't run the unit tests for RBF: 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/23499/testReport/].
Not sure what's missing though.

I would call it {{RBFConfigKeys}} instead of {{RbfConfigKeys}} (similar for 
TestRbfConfigFields).

I would avoid making changes like the ones in FederationMetrics.
We should do a separate JIRA for those.
Can we do a separate JIRA for:
* 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/package-info.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
 (javadoc comment)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
 (javadoc comment)

Similarly, we should avoid the space changes in {{DFSConfigKeys}} (towards the 
end of the diff).

Thanks for taking this [~ywskycn], this is a lot of work.

> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
> Attachments: HDFS-13215.000.patch, HDFS-13215.001.patch, 
> HDFS-13215.002.patch, HDFS-13215.003.patch, HDFS-13215.004.patch
>
>
> We are splitting the HDFS client code base and potentially Router-based 
> Federation is also independent enough to be in its own package.






[jira] [Commented] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2018-03-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400741#comment-16400741
 ] 

Yongjun Zhang commented on HDFS-11481:
--

HDFS-10997 code change:

{code}
  static SnapshotDiffReport getSnapshotDiffReport(FSDirectory fsd,
  SnapshotManager snapshotManager, String path,
  String fromSnapshot, String toSnapshot) throws IOException {
SnapshotDiffReport diffs;
final FSPermissionChecker pc = fsd.getPermissionChecker();
fsd.readLock();
try {
  INodesInPath iip = fsd.resolvePath(pc, path, DirOp.READ); <==
  if (fsd.isPermissionEnabled()) {
checkSubtreeReadPermission(fsd, pc, path, fromSnapshot);
checkSubtreeReadPermission(fsd, pc, path, toSnapshot);
  }

  diffs = snapshotManager.diff(iip, path, fromSnapshot, toSnapshot);
} finally {
  fsd.readUnlock();
}
return diffs;
  }
{code}

> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Assignee: Mavin Martin
>Priority: Minor
> Attachments: HDFS-11481-branch-2.6.0.001.patch, HDFS-11481.001.patch, 
> HDFS-11481.002.patch
>
>
> Successful command:
> {code}
> #> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> #> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with raw path should run successfully and return same output.






[jira] [Updated] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case need correction

2018-03-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13289:
---
Summary: RBF: TestConnectionManager#testCleanup() test case need correction 
 (was: TestConnectionManager#testCleanup() test case need correction)

> RBF: TestConnectionManager#testCleanup() test case need correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Priority: Minor
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but 
> checkPoolConnections is called with TEST_USER3.
> The checkPoolConnections method validates numOfConns and numOfActiveConns 
> only when
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> holds. In this case the *if* condition returns *false* for TEST_USER3, so 
> the test passes no matter what values are passed to checkPoolConnections.
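
A minimal sketch of the corrected lines, assuming the intent is to key the 
pool under the same user that the check looks up:

{code:java}
// Key the pool under TEST_USER3 -- the same user checkPoolConnections
// looks up -- so the assertions inside it actually execute.
poolMap.put(new ConnectionPoolId(TEST_USER3, TEST_NN_ADDRESS), pool3);
connManager.cleanup(pool3);
checkPoolConnections(TEST_USER3, 2, 0);
{code}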






[jira] [Commented] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case need correction

2018-03-15 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400686#comment-16400686
 ] 

Wei Yan commented on HDFS-13289:


[~dibyendu_hadoop] Thanks for reporting. Feel free to put a patch.

> RBF: TestConnectionManager#testCleanup() test case need correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but 
> checkPoolConnections is called with TEST_USER3.
> The checkPoolConnections method validates numOfConns and numOfActiveConns 
> only when
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> holds. In this case the *if* condition returns *false* for TEST_USER3, so 
> the test passes no matter what values are passed to checkPoolConnections.






[jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2018-03-15 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400670#comment-16400670
 ] 

Wellington Chevreuil commented on HDFS-12618:
-

Any comments on the last patch proposed?

> fsck -includeSnapshots reports wrong amount of total blocks
> ---
>
> Key: HDFS-12618
> URL: https://issues.apache.org/jira/browse/HDFS-12618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-121618.initial, HDFS-12618.001.patch, 
> HDFS-12618.002.patch, HDFS-12618.003.patch, HDFS-12618.004.patch, 
> HDFS-12618.005.patch, HDFS-12618.006.patch
>
>
> When snapshots are enabled, if a file is deleted but still contained in a 
> snapshot, *fsck* will not report blocks for that file, showing a different 
> number of *total blocks* than what is exposed in the Web UI. 
> This should be fine, as *fsck* provides the *-includeSnapshots* option. The 
> problem is that *-includeSnapshots* causes *fsck* to count blocks for every 
> occurrence of a file in snapshots, which is wrong because these blocks 
> should be counted only once (for instance, if a 100MB file is present in 3 
> snapshots, it still maps to only one set of blocks in hdfs). This causes 
> fsck to report many more blocks than actually exist in hdfs and are 
> reported in the Web UI.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noformat}
> $ hdfs dfs -ls -R /
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 /snap-test
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/file2
> drwxr-xr-x   - root supergroup  0 2017-05-13 13:03 /test
> {noformat} 
> 2) There are two snapshots, with the two files present on each of the 
> snapshots:
> {noformat}
> $ hdfs dfs -ls -R /snap-test/.snapshot
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap1/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap1/file2
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap2
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap2/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap2/file2
> {noformat}
> 3) *fsck -includeSnapshots* reports 12 blocks in total (4 blocks for the 
> normal file path, plus 4 blocks for each snapshot path):
> {noformat}
> $ hdfs fsck / -includeSnapshots
> FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path / at Mon Oct 09 
> 15:15:36 BST 2017
> Status: HEALTHY
>  Number of data-nodes:1
>  Number of racks: 1
>  Total dirs:  6
>  Total symlinks:  0
> Replicated Blocks:
>  Total size:  1258291200 B
>  Total files: 6
>  Total blocks (validated):12 (avg. block size 104857600 B)
>  Minimally replicated blocks: 12 (100.0 %)
>  Over-replicated blocks:  0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks:   0 (0.0 %)
>  Default replication factor:  1
>  Average block replication:   1.0
>  Missing blocks:  0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
> {noformat}
> 4) Web UI shows the correct number (4 blocks only):
> {noformat}
> Security is off.
> Safemode is off.
> 5 files and directories, 4 blocks = 9 total filesystem object(s).
> {noformat}
> I would like to work on this solution, will propose an initial solution 
> shortly.
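
The shape of the fix can be pictured with a small dedup sketch (hypothetical 
names, not the actual patch): count a block ID only the first time it is 
seen, no matter how many snapshot paths reference it.

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UniqueBlockCountSketch {
  public static void main(String[] args) {
    // The same two blocks observed under the live path and two snapshots
    // (IDs made up for illustration).
    List<Long> seen = Arrays.asList(1001L, 1002L,   // /snap-test/file1
                                    1001L, 1002L,   // .snapshot/snap1/file1
                                    1001L, 1002L);  // .snapshot/snap2/file1
    Set<Long> counted = new HashSet<>();
    long totalBlocks = 0;
    for (long id : seen) {
      if (counted.add(id)) {      // add() returns false on duplicates
        totalBlocks++;
      }
    }
    System.out.println("Total blocks (validated): " + totalBlocks); // 2, not 6
  }
}
{code}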






[jira] [Commented] (HDFS-13293) RBF: The RouterRPCServer should transfer CallerContext and client ip to NamenodeRpcServer

2018-03-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400679#comment-16400679
 ] 

Íñigo Goiri commented on HDFS-13293:


Is this something along the lines of HDFS-13248?

> RBF: The RouterRPCServer should transfer CallerContext and client ip to 
> NamenodeRpcServer
> -
>
> Key: HDFS-13293
> URL: https://issues.apache.org/jira/browse/HDFS-13293
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Priority: Major
>
> Otherwise, the namenode doesn't know the client's callerContext
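
A hedged sketch of what the router-side propagation could look like 
(placement and the "clientIp" tag name are assumptions, not an actual patch):

{code:java}
import org.apache.hadoop.ipc.CallerContext;
import org.apache.hadoop.ipc.Server;

public class RouterContextSketch {
  // Before the router forwards an RPC to the namenode, rebuild the
  // caller context so it carries the originating client's address.
  static void attachClientContext() {
    CallerContext origin = CallerContext.getCurrent();
    String ctx = (origin != null ? origin.getContext() + "," : "")
        + "clientIp:" + Server.getRemoteAddress();
    CallerContext.setCurrent(new CallerContext.Builder(ctx).build());
  }
}
{code}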






[jira] [Commented] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case need correction

2018-03-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400675#comment-16400675
 ] 

Íñigo Goiri commented on HDFS-13289:


Thanks [~dibyendu_hadoop] for catching this.
I added you to the list of contributors and assigned the JIRA to you.
CC [~ekanth] [~ywskycn] [~csun] for awareness.

> RBF: TestConnectionManager#testCleanup() test case need correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but 
> checkPoolConnections is called with TEST_USER3.
> The checkPoolConnections method validates numOfConns and numOfActiveConns 
> only when
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> holds. In this case the *if* condition returns *false* for TEST_USER3, so 
> the test passes no matter what values are passed to checkPoolConnections.






[jira] [Updated] (HDFS-13295) Namenode doesn't leave safemode if dfs.namenode.safemode.replication.min set < dfs.namenode.replication.min

2018-03-15 Thread Nicolas Fraison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Fraison updated HDFS-13295:
---
Attachment: HDFS-13295.patch

> Namenode doesn't leave safemode if dfs.namenode.safemode.replication.min set 
> < dfs.namenode.replication.min
> ---
>
> Key: HDFS-13295
> URL: https://issues.apache.org/jira/browse/HDFS-13295
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: CDH 5.11 with HDFS-8716 backported.
> dfs.namenode.replication.min=2
> dfs.namenode.safemode.replication.min=1
>  
>Reporter: Nicolas Fraison
>Priority: Major
> Attachments: HDFS-13295.patch
>
>
> With the HDFS-8716 patch, when dfs.namenode.safemode.replication.min is set 
> below dfs.namenode.replication.min, `FSNamesystem.incrementSafeBlockCount` 
> only increases the safe block count for blocks whose replica count equals 
> dfs.namenode.safemode.replication.min.
> When reading modifications from the edits, however, the replica count for 
> new blocks is set to min(numNodes, dfs.namenode.replication.min) in 
> BlockManager.completeBlock, which is greater than 
> dfs.namenode.safemode.replication.min.
> As a result, the safe block count never reaches the number of available 
> blocks and the namenode never leaves safemode automatically.
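
A runnable toy of the mismatch described above (simplified, names assumed):

{code:java}
public class SafeModeCountSketch {
  public static void main(String[] args) {
    final int safeReplMin = 1; // dfs.namenode.safemode.replication.min
    final int replMin = 2;     // dfs.namenode.replication.min
    final int numNodes = 3;    // live datanodes holding the block

    // BlockManager.completeBlock reports blocks read from the edits
    // with replication min(numNodes, replMin):
    int reported = Math.min(numNodes, replMin); // == 2

    // Per the description, the safe block count is only incremented when
    // the reported replication equals safeReplMin, so with
    // safeReplMin < replMin the count can never grow:
    if (reported == safeReplMin) {
      System.out.println("block counted as safe");
    } else {
      System.out.println("block never counted -> safemode never exits");
    }
  }
}
{code}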






[jira] [Updated] (HDFS-13295) Namenode doesn't leave safemode if dfs.namenode.safemode.replication.min set < dfs.namenode.replication.min

2018-03-15 Thread Nicolas Fraison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Fraison updated HDFS-13295:
---
Assignee: Nicolas Fraison
  Status: Patch Available  (was: Open)

> Namenode doesn't leave safemode if dfs.namenode.safemode.replication.min set 
> < dfs.namenode.replication.min
> ---
>
> Key: HDFS-13295
> URL: https://issues.apache.org/jira/browse/HDFS-13295
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: CDH 5.11 with HDFS-8716 backported.
> dfs.namenode.replication.min=2
> dfs.namenode.safemode.replication.min=1
>  
>Reporter: Nicolas Fraison
>Assignee: Nicolas Fraison
>Priority: Major
> Attachments: HDFS-13295.patch
>
>
> With the HDFS-8716 patch, when dfs.namenode.safemode.replication.min is set 
> below dfs.namenode.replication.min, `FSNamesystem.incrementSafeBlockCount` 
> only increases the safe block count for blocks whose replica count equals 
> dfs.namenode.safemode.replication.min.
> When reading modifications from the edits, however, the replica count for 
> new blocks is set to min(numNodes, dfs.namenode.replication.min) in 
> BlockManager.completeBlock, which is greater than 
> dfs.namenode.safemode.replication.min.
> As a result, the safe block count never reaches the number of available 
> blocks and the namenode never leaves safemode automatically.






[jira] [Assigned] (HDFS-13289) RBF: TestConnectionManager#testCleanup() test case need correction

2018-03-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13289:
--

Assignee: Dibyendu Karmakar

> RBF: TestConnectionManager#testCleanup() test case need correction
> --
>
> Key: HDFS-13289
> URL: https://issues.apache.org/jira/browse/HDFS-13289
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
>
> In TestConnectionManager#testCleanup() 
>  
> {code:java}
> // Make sure the number of connections doesn't go below minSize
> ConnectionPool pool3 = new ConnectionPool(
> conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10);
> addConnectionsToPool(pool3, 10, 0);
> poolMap.put(new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS), pool3);
> connManager.cleanup(pool3);
> checkPoolConnections(TEST_USER3, 2, 0);
> {code}
> this part needs correction.
> Here the new ConnectionPoolId is created with TEST_USER2, but 
> checkPoolConnections is called with TEST_USER3.
> The checkPoolConnections method validates numOfConns and numOfActiveConns 
> only when
> {code:java}
> if (e.getKey().getUgi() == ugi)
> {code}
> holds. In this case the *if* condition returns *false* for TEST_USER3, so 
> the test passes no matter what values are passed to checkPoolConnections.






[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2018-03-15 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400665#comment-16400665
 ] 

Daryn Sharp commented on HDFS-10285:


To summarize main _implementation_ issues from an offline meeting:
* NN context abstraction is violated by having internal/external 
implementations.  There should be completely common implementations.  Only the 
context impl differs.
* No DN changes should be required.  DN should be “dumb” and just move blocks 
around.  It already has that support.
* Separate jira can add the transfer block optimization to just move the block 
w/o a transceiver when the target is the node itself.  Not strictly required by 
SPS.

I also have _design_ issues.  We also explored a better design to leverage 
existing NN replication behavior.  The SPS should not require so much code that 
it will be a maintenance burden for future development.

Let’s understand what motivates this feature.  Replication monitoring is not 
working.  Why?  There are two distinct criteria for a block to be correctly 
replicated:
# Are there enough replicas?
# Are the replicas correctly placed?  I.e. rack placement.  Technically, the 
storage policy (SP) is no different.

The NN already handles storage policies during placement decisions, i.e. when 
creating files and correcting mis-replication (over/under).  If #1 is true, 
#2 is "short-circuited" (beyond racks > 1) based on the assumption that #2 
was satisfied by the choices made to correct #1.  The "short-circuit" avoids 
a heavy performance penalty to FBRs and is why the NN fails to perform what 
should be a basic duty (always honoring SP).

So how can we leverage the replication monitor while maintaining the 
“short-circuit”?  I think it might be as simple as:
# Replication queue initialization should not short-circuit.  The performance 
penalty to check SP is absorbed by the background initialization thread.
# Replication monitor must not short-circuit when computing work.  Must assume 
“something” is wrong with the block if it’s in the queue, which allows the 
queue init to work and SPS to work.
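
A toy rendering of that check (hypothetical names, nothing taken from NN 
code), showing both criteria evaluated without the short-circuit:

{code:java}
public class ReplCheckSketch {
  // Toy model of a block as the replication monitor might see it.
  static final class Blk {
    final int liveReplicas;
    final int expectedReplicas;
    final boolean placementSatisfiesPolicy;
    Blk(int live, int exp, boolean ok) {
      liveReplicas = live;
      expectedReplicas = exp;
      placementSatisfiesPolicy = ok;
    }
  }

  // Without the short-circuit, a block needs work if EITHER criterion
  // fails, so storage-policy violations surface even when the replica
  // count looks fine.
  static boolean needsWork(Blk b) {
    if (b.liveReplicas < b.expectedReplicas) {
      return true;                        // criterion 1: replica count
    }
    return !b.placementSatisfiesPolicy;   // criterion 2: placement / SP
  }

  public static void main(String[] args) {
    // Enough replicas, but stored against the storage policy:
    System.out.println(needsWork(new Blk(3, 3, false))); // true
  }
}
{code}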

Benefits:
# No xattrs.  Replication queue init handles resumption after failover/restart.
# SPS simply scans the tree and adds blocks (with flow-control) to the 
replication queue.  That’s all.
# No split-brain between replication monitor and SPS.
# SP moves are scheduled with respect to normal replication instead of spliced 
into the node’s work queue.

I also think forcing users to use an explicit “satisfy” operation is broken.  
We don’t have setReplication/satisfyReplication.  Deferring the satisfy to an 
indeterminate future time is a specious use case which burdens all callers.  We 
can’t expect users to implement special retry logic to ensure the satisfy 
occurs, persist pending satisfy operations to issue after a crash/restart, etc. 
 Inevitably the path of least resistance is scheduling a task 
(cron/oozie/whatever) to call satisfy on large trees, if not the whole 
namespace, and then complain that hdfs performance sucks.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-10285-consolidated-merge-patch-04.patch, 
> HDFS-10285-consolidated-merge-patch-05.patch, 
> HDFS-SPS-TestReport-20170708.pdf, SPS Modularization.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. 
> These policies can be set on a directory or file to specify the user's 
> preference for where the physical blocks should be stored. When the user 
> sets the storage policy before writing data, the blocks can take advantage 
> of the policy preferences and the physical blocks are stored accordingly. 
> If the user sets the storage policy after the file has been written and 
> completed, the blocks will already have been written with the default 
> storage policy (plain DISK). The user then has to run the 'Mover tool' 
> explicitly, specifying all such file names as a list. In some distributed 
> system scenarios (ex: HBase) it is difficult to collect all the files and 
> run the tool, as different nodes can write files separately and the files 
> can have different paths.
> Another scenario is when the user renames files from one affected storage 
> policy file (inherited 

[jira] [Created] (HDFS-13295) Namenode doesn't leave safemode if dfs.namenode.safemode.replication.min set < dfs.namenode.replication.min

2018-03-15 Thread Nicolas Fraison (JIRA)
Nicolas Fraison created HDFS-13295:
--

 Summary: Namenode doesn't leave safemode if 
dfs.namenode.safemode.replication.min set < dfs.namenode.replication.min
 Key: HDFS-13295
 URL: https://issues.apache.org/jira/browse/HDFS-13295
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
 Environment: CDH 5.11 with HDFS-8716 backported.

dfs.namenode.replication.min=2
dfs.namenode.safemode.replication.min=1

 
Reporter: Nicolas Fraison


With the HDFS-8716 patch, when dfs.namenode.safemode.replication.min is set 
below dfs.namenode.replication.min, `FSNamesystem.incrementSafeBlockCount` 
only increases the safe block count for blocks whose replica count equals 
dfs.namenode.safemode.replication.min.

When reading modifications from the edits, however, the replica count for new 
blocks is set to min(numNodes, dfs.namenode.replication.min) in 
BlockManager.completeBlock, which is greater than 
dfs.namenode.safemode.replication.min.
As a result, the safe block count never reaches the number of available 
blocks and the namenode never leaves safemode automatically.






[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-03-15 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400562#comment-16400562
 ] 

Rushabh S Shah commented on HDFS-13281:
---

Added a check to skip creating {{FileEncryptionInfo}} if the path is 
/.reserved/raw.
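
A minimal sketch of such a check (simplified; the helper and prefix handling 
are assumptions, not the actual patch):

{code:java}
public class RawPathSketch {
  static final String RAW_PREFIX = "/.reserved/raw";

  // Create an EDEK only for paths inside an encryption zone that are NOT
  // addressed through /.reserved/raw/, so raw writes copy the
  // already-encrypted bytes verbatim.
  static boolean shouldCreateEdek(String src, boolean inEncryptionZone) {
    boolean reservedRaw = src.equals(RAW_PREFIX)
        || src.startsWith(RAW_PREFIX + "/");
    return inEncryptionZone && !reservedRaw;
  }

  public static void main(String[] args) {
    System.out.println(shouldCreateEdek("/zone/f", true));               // true
    System.out.println(shouldCreateEdek("/.reserved/raw/zone/f", true)); // false
  }
}
{code}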

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch
>
>
> If I want to write to /.reserved/raw/ and that directory happens to 
> be in an EZ, then the namenode *should not* create an edek and should just 
> copy the raw bytes from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.





