[jira] [Commented] (HDFS-6294) Use INode IDs to avoid conflicts when a file open for write is renamed

2014-05-06 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991418#comment-13991418
 ] 

Charles Lamb commented on HDFS-6294:


NameNodeRpcServer.java: it looks like you can get rid of the // TODO comment.

TestLease.java:

  Assert.assertEquals(contents1,  DFSTestUtil.readFile(fs3, path2));
  Assert.assertEquals(contents2,  DFSTestUtil.readFile(fs3, path1));
 
There are two spaces after the comma following contentsN.
In the same file, I assume the out.close() that you're adding was a random 
missing close() that you happened upon?

TestSetrep(Inc/Dec)reasing.java: are the timeouts you are adding related to the 
rest of the changes or just opportunistic/necessary?


> Use INode IDs to avoid conflicts when a file open for write is renamed
> --
>
> Key: HDFS-6294
> URL: https://issues.apache.org/jira/browse/HDFS-6294
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.20.1
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-6294.001.patch, HDFS-6294.002.patch
>
>
> Now that we have a unique INode ID for each INode, clients with files that 
> are open for write can use this unique ID rather than a file path when they 
> are requesting more blocks or closing the open file.  This will avoid 
> conflicts when a file which is open for write is renamed, and another file 
> with that name is created.
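A minimal sketch of the idea (the {{namenode}} proxy and the exact 
ClientProtocol signatures below are assumptions, not necessarily what the 
patch commits):

{code}
// Sketch: the client remembers the unique INode ID of the file it opened for
// write and identifies the file by that ID on later calls.
HdfsFileStatus stat = dfsClient.getFileInfo(src);
long fileId = stat.getFileId();  // stable across renames of 'src'

// Requesting another block or closing the file by ID means a concurrent
// rename (or a new file created at the old path) no longer causes a conflict.
LocatedBlock blk = namenode.addBlock(src, clientName, previous, excludeNodes,
    fileId, favoredNodes);
namenode.complete(src, clientName, lastBlock, fileId);
{code}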



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6340) DN can't finalize upgrade

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991417#comment-13991417
 ] 

Hadoop QA commented on HDFS-6340:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643612/HDFS-6340.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6840//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6840//console

This message is automatically generated.

> DN can't finalize upgrade
> -
>
> Key: HDFS-6340
> URL: https://issues.apache.org/jira/browse/HDFS-6340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.4.0
>Reporter: Rahul Singhal
>Priority: Blocker
> Attachments: HDFS-6340-branch-2.4.0.patch, HDFS-6340.patch
>
>
> I upgraded a (NN) HA cluster from 2.2.0 to 2.4.0. After I issued the 
> '-finalizeUpgrade' command, NN was able to finalize the upgrade but DN 
> couldn't (I waited for the next block report).
> I think I have found the problem to be due to HDFS-5153. I will attach a 
> proposed fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6329) WebHdfs does not work if HA is enabled on NN but logical URI is not configured.

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991416#comment-13991416
 ] 

Hadoop QA commented on HDFS-6329:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643640/HDFS-6329.v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
  
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
  org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
  org.apache.hadoop.hdfs.server.namenode.TestEditLogAutoroll
  org.apache.hadoop.hdfs.TestDistributedFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6842//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6842//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6842//console

This message is automatically generated.

> WebHdfs does not work if HA is enabled on NN but logical URI is not 
> configured.
> ---
>
> Key: HDFS-6329
> URL: https://issues.apache.org/jira/browse/HDFS-6329
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-6329.patch, HDFS-6329.patch, HDFS-6329.v2.patch, 
> HDFS-6329.v3.patch
>
>
> After HDFS-6100, namenode unconditionally puts the logical name (name service 
> id) as the token service when redirecting webhdfs requests to datanodes, if 
> it detects HA.
> For HA configurations with no client-side failover proxy provider (e.g. IP 
> failover), webhdfs does not work since the clients do not use the logical name.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991406#comment-13991406
 ] 

Hadoop QA commented on HDFS-4913:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643626/HDFS-4913.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6839//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6839//console

This message is automatically generated.

> Deleting file through fuse-dfs when using trash fails requiring root 
> permissions
> 
>
> Key: HDFS-4913
> URL: https://issues.apache.org/jira/browse/HDFS-4913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4913.002.patch, HDFS-4913.003.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd into the mount and touch a test file at 
> _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
> into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
> I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
> /user/root/.Trash, which testuser doesn't have permissions to.
> Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
> /user/testuser/.Trash instead of /user/root/.Trash.
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
> FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.ja

[jira] [Assigned] (HDFS-6345) DFS.listCacheDirectives() should allow filtering based on cache directive ID

2014-05-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-6345:
-

Assignee: Andrew Wang

> DFS.listCacheDirectives() should allow filtering based on cache directive ID
> 
>
> Key: HDFS-6345
> URL: https://issues.apache.org/jira/browse/HDFS-6345
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.4.0
>Reporter: Lenni Kuff
>Assignee: Andrew Wang
>
> DFS.listCacheDirectives() should allow filtering based on cache directive ID. 
> Currently it throws an exception.
> For example:
> {code}
> long directiveId = 42;  // hypothetical ID; the value was elided in the original
> CacheDirectiveInfo filter = new CacheDirectiveInfo.Builder()
>     .setId(directiveId)
>     .build();
> RemoteIterator<CacheDirectiveEntry> itr = dfs.listCacheDirectives(filter);
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6345) DFS.listCacheDirectives() should allow filtering based on cache directive ID

2014-05-06 Thread Lenni Kuff (JIRA)
Lenni Kuff created HDFS-6345:


 Summary: DFS.listCacheDirectives() should allow filtering based on 
cache directive ID
 Key: HDFS-6345
 URL: https://issues.apache.org/jira/browse/HDFS-6345
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 2.4.0
Reporter: Lenni Kuff


DFS.listCacheDirectives() should allow filtering based on cache directive ID. 
Currently it throws an exception.

For example:
{code}
long directiveId = 42;  // hypothetical ID; the value was elided in the original
CacheDirectiveInfo filter = new CacheDirectiveInfo.Builder()
    .setId(directiveId)
    .build();
RemoteIterator<CacheDirectiveEntry> itr = dfs.listCacheDirectives(filter);
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6202) Replace direct usage of ProxyUser Constants with function

2014-05-06 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-6202:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Resolving this jira as this patch was included in HADOOP-10471.

> Replace direct usage of ProxyUser Constants with function
> -
>
> Key: HDFS-6202
> URL: https://issues.apache.org/jira/browse/HDFS-6202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HDFS-6202.patch
>
>
> This is a prerequisite for HADOOP-10471 to reduce the visibility of the 
> constants in _ProxyUsers_.
> The _TestJspHelper_ directly uses the constants in _ProxyUsers_. 
> This can be replaced with the corresponding functions.
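A sketch of the shape of the change (the constant and method names here are 
my assumptions about the {{ProxyUsers}} API, not verified against the patch):

{code}
// Before: the test builds proxy-user config keys from ProxyUsers' public
// constants, which forces those constants to stay visible.
conf.set(ProxyUsers.CONF_HADOOP_PROXYUSER + "." + realUser + ".groups", "group1");

// After: ask ProxyUsers for the key, so the constants can be made private.
conf.set(ProxyUsers.getProxySuperuserGroupConfKey(realUser), "group1");
{code}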



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6336) Cannot download file via webhdfs when wildcard is enabled

2014-05-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6336:
-

Resolution: Not a Problem
Status: Resolved  (was: Patch Available)

> Cannot download file via webhdfs when wildcard is enabled
> -
>
> Key: HDFS-6336
> URL: https://issues.apache.org/jira/browse/HDFS-6336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6336.001.patch, HDFS-6336.001.patch, 
> HDFS-6336.001.patch
>
>
> With wildcard enabled, issuing a webhdfs command like
> {code}
> http://yjztvm2.private:50070/webhdfs/v1/tmp?op=OPEN
> {code}
> would give
> {code}
> http://yjztvm3.private:50075/webhdfs/v1/tmp?op=OPEN&namenoderpcaddress=0.0.0.0:8020&offset=0
> {"RemoteException":{"exception":"ConnectException","javaClassName":"java.net.ConnectException","message":"Call
>  From yjztvm3.private/192.168.142.230 to 0.0.0.0:8020 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused"}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6336) Cannot download file via webhdfs when wildcard is enabled

2014-05-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991312#comment-13991312
 ] 

Yongjun Zhang commented on HDFS-6336:
-

Thank you all for the review and comments!

I guess I slightly mistook Haohui's point earlier. I tested HA/non-HA for this 
jira, but not failover, which is handled by HDFS-6329 as Daryn pointed out.

My {{fs.defaultFS}} was set to hdfs://real_nn_host_name:8020 as you suggested; 
however, dfs.namenode.rpc-address was set to 0.0.0.0:8020 and I don't see that 
dfs.namenode.rpc-bind-host was set. I tried setting {{rpc-address}} to 
nn_host_name:8020 and {{rpc-bind-host}} to 0.0.0.0, and the problem is gone. 
So this is a config issue. Thanks for pointing it out!

I will do some more testing. 


> Cannot download file via webhdfs when wildcard is enabled
> -
>
> Key: HDFS-6336
> URL: https://issues.apache.org/jira/browse/HDFS-6336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6336.001.patch, HDFS-6336.001.patch, 
> HDFS-6336.001.patch
>
>
> With wildcard enabled, issuing a webhdfs command like
> {code}
> http://yjztvm2.private:50070/webhdfs/v1/tmp?op=OPEN
> {code}
> would give
> {code}
> http://yjztvm3.private:50075/webhdfs/v1/tmp?op=OPEN&namenoderpcaddress=0.0.0.0:8020&offset=0
> {"RemoteException":{"exception":"ConnectException","javaClassName":"java.net.ConnectException","message":"Call
>  From yjztvm3.private/192.168.142.230 to 0.0.0.0:8020 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused"}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6328) Simplify code in FSDirectory

2014-05-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991271#comment-13991271
 ] 

Jing Zhao commented on HDFS-6328:
-

bq. Is there a reason why this loop needed to become more complicated? At this 
point I believe it's guaranteed that the src & dest are not identical, nor is 
the src a subdir of the dest?
Good catch, [~daryn]. I should have caught this in my first review.

bq. As a general statement, I'm not sure there's a lot of value add in the 
changes like altering whitespace and moving methods. 
Although some changes, like converting a for loop to a while loop, are not 
necessary, I think this is still good cleanup, especially the changes in the 
two unprotectedRename methods. One more thing we should do is factor the 
common code of the two unprotectedRename methods into separate methods, but 
I'm fine with doing that in a separate jira.

For the latest patch, the following change is unnecessary:
{code}
-src.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR), 
-"%s does not end with %s", src, 
HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR);
+src.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR),
+"%s does not end with %s", src, 
HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR);
{code}
Also, maybe we do not need to move the set of getINode methods. As [~daryn] 
pointed out, these changes can make merging harder (e.g., merging changes from 
2.x to 0.23).


> Simplify code in FSDirectory
> 
>
> Key: HDFS-6328
> URL: https://issues.apache.org/jira/browse/HDFS-6328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-6328.000.patch, HDFS-6328.001.patch, 
> HDFS-6328.002.patch
>
>
> This jira proposes:
> # Cleaning up dead code in FSDirectory.
> # Simplify the control flows that IntelliJ flags as warnings.
> # Move functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6328) Simplify code in FSDirectory

2014-05-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991268#comment-13991268
 ] 

Haohui Mai commented on HDFS-6328:
--

Thanks [~daryn] for looking at this. The v2 patch addresses your comments about 
the loop. Does it look okay to you? I'd like to move forward to get both 
HDFS-6330 and HDFS-6315 going.

> Simplify code in FSDirectory
> 
>
> Key: HDFS-6328
> URL: https://issues.apache.org/jira/browse/HDFS-6328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-6328.000.patch, HDFS-6328.001.patch, 
> HDFS-6328.002.patch
>
>
> This jira proposes:
> # Cleaning up dead code in FSDirectory.
> # Simplify the control flows that IntelliJ flags as warnings.
> # Move functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991262#comment-13991262
 ] 

Haohui Mai commented on HDFS-6326:
--

Just copy-pasting from my previous comments in HDFS-5923:

{quote}
The v0 patch takes a more aggressive approach, which removes the ACL bit 
completely. The rationale is the following:
Some applications might assume that FsPermission stays within the range of 
0~0777. Changing FsPermission might lead to unexpected issues.
Not many users care about whether the file has an ACL, except for ls. 
Since ls is not in the critical path, ls can make a separate getAclStatus() 
call to calculate the ACL bit.
{quote}

The user shouldn't care whether the file has an ACL in most cases, since the 
server side enforces the ACL. The only two meaningful exceptions that I can 
think of right now are (1) performing ls, and (2) doing {{distcp -p}}.

Note that the second case requires calling {{getAclStatus()}} explicitly 
anyway. Given the issues that Chris has pointed out, I still think it might be 
worthwhile to put down some hacks in ls to make it work (since {{ls}} is not 
on the critical path). Any thoughts?
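For concreteness, a minimal sketch of what such an ls hack could look like 
(assuming the 2.4 {{FileSystem#getAclStatus}} API, with {{stat}} being the 
{{FileStatus}} about to be printed):

{code}
// Sketch: compute the '+' ACL indicator client-side, only in ls.
String perm = stat.getPermission().toString();
try {
  if (!fs.getAclStatus(stat.getPath()).getEntries().isEmpty()) {
    perm += "+";  // file has extended ACL entries
  }
} catch (IOException e) {
  // server without GETACLSTATUS support: fall back to plain permissions
}
{code}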

> WebHdfs ACL compatibility is broken
> ---
>
> Key: HDFS-6326
> URL: https://issues.apache.org/jira/browse/HDFS-6326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HDFS-6326.1.patch, HDFS-6326.2.patch
>
>
> 2.4 ACL support is completely incompatible with <2.4 webhdfs servers.  The NN 
> throws an {{IllegalArgumentException}} exception.
> {code}
> hadoop fs -ls webhdfs://nn/
> Found 21 items
> ls: Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
> [... 20 more times...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-05-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991252#comment-13991252
 ] 

Haohui Mai commented on HDFS-6230:
--

Sure. I'll take a look.

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Mit Desai
> Attachments: HDFS-6230-NoUpgradesInProgress.png, 
> HDFS-6230-UpgradeInProgress.jpg, HDFS-6230.patch
>
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6329) WebHdfs does not work if HA is enabled on NN but logical URI is not configured.

2014-05-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6329:
-

Attachment: HDFS-6329.v3.patch

Refined & simplified the initial check in {{setClientNamenodeAddress()}}.

> WebHdfs does not work if HA is enabled on NN but logical URI is not 
> configured.
> ---
>
> Key: HDFS-6329
> URL: https://issues.apache.org/jira/browse/HDFS-6329
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-6329.patch, HDFS-6329.patch, HDFS-6329.v2.patch, 
> HDFS-6329.v3.patch
>
>
> After HDFS-6100, namenode unconditionally puts the logical name (name service 
> id) as the token service when redirecting webhdfs requests to datanodes, if 
> it detects HA.
> For HA configurations with no client-side failover proxy provider (e.g. IP 
> failover), webhdfs does not work since the clients do not use the logical name.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6336) Cannot download file via webhdfs when wildcard is enabled

2014-05-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991097#comment-13991097
 ] 

Kihwal Lee commented on HDFS-6336:
--

This is a config issue.  Are you setting {{fs.defaultFS}} to "0.0.0.0:8020"?  
That will cause a lot of problems.
If you want the namenode to bind to a wildcard address, try the following:
{{fs.defaultFS}} = hdfs://real_nn_host_name_or_ip_addr:8020
{{dfs.namenode.rpc-bind-host}} = 0.0.0.0
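In config-file terms, that is (a sketch showing where each key lives):

{code}
<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://real_nn_host_name_or_ip_addr:8020</value>
</property>

<!-- hdfs-site.xml: let the NN RPC server bind to the wildcard address -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
{code}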



> Cannot download file via webhdfs when wildcard is enabled
> -
>
> Key: HDFS-6336
> URL: https://issues.apache.org/jira/browse/HDFS-6336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6336.001.patch, HDFS-6336.001.patch, 
> HDFS-6336.001.patch
>
>
> With wildcard enabled, issuing a webhdfs command like
> {code}
> http://yjztvm2.private:50070/webhdfs/v1/tmp?op=OPEN
> {code}
> would give
> {code}
> http://yjztvm3.private:50075/webhdfs/v1/tmp?op=OPEN&namenoderpcaddress=0.0.0.0:8020&offset=0
> {"RemoteException":{"exception":"ConnectException","javaClassName":"java.net.ConnectException","message":"Call
>  From yjztvm3.private/192.168.142.230 to 0.0.0.0:8020 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused"}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6328) Simplify code in FSDirectory

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991157#comment-13991157
 ] 

Hadoop QA commented on HDFS-6328:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643597/HDFS-6328.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6838//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6838//console

This message is automatically generated.

> Simplify code in FSDirectory
> 
>
> Key: HDFS-6328
> URL: https://issues.apache.org/jira/browse/HDFS-6328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-6328.000.patch, HDFS-6328.001.patch, 
> HDFS-6328.002.patch
>
>
> This jira proposes:
> # Cleaning up dead code in FSDirectory.
> # Simplify the control flows that IntelliJ flags as warnings.
> # Move functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6344) Maximum limit on the size of an xattr and number of xattrs

2014-05-06 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6344:
-

 Summary: Maximum limit on the size of an xattr and number of xattrs
 Key: HDFS-6344
 URL: https://issues.apache.org/jira/browse/HDFS-6344
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Andrew Wang


We should support limits on the maximum size of an xattr name/value, as well as 
the number of xattrs per file.
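A minimal sketch of the kind of check this implies (types, names, and limits 
here are illustrative, not the committed ones):

{code}
// Sketch: reject xattrs that would exceed the configured limits.
// maxXAttrsPerInode / maxXAttrSize would come from new config keys.
private void checkXAttrLimits(int existingCount, XAttr xattr) throws IOException {
  if (existingCount + 1 > maxXAttrsPerInode) {
    throw new IOException("Cannot add xattr: at most " + maxXAttrsPerInode
        + " xattrs per inode are allowed");
  }
  final int size = xattr.getName().length()
      + (xattr.getValue() == null ? 0 : xattr.getValue().length);
  if (size > maxXAttrSize) {
    throw new IOException("XAttr name/value of size " + size
        + " exceeds the limit of " + maxXAttrSize);
  }
}
{code}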



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-06 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4913:
---

Attachment: HDFS-4913.003.patch

Here's a new version which uses asprintf rather than creating my own function 
to do the same thing.

> Deleting file through fuse-dfs when using trash fails requiring root 
> permissions
> 
>
> Key: HDFS-4913
> URL: https://issues.apache.org/jira/browse/HDFS-4913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4913.002.patch, HDFS-4913.003.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd into the mount and touch a test file at 
> _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
> into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
> I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
> /user/root/.Trash, which testuser doesn't have permissions to.
> Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
> /user/testuser/.Trash instead of /user/root/.Trash.
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
> FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>  at 
> org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>  at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>  at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>  at 
> java.security.AccessController.doPrivileged(Native Method)
>  at 
> javax.security.auth.Subject.doAs(Subject.java:396)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>  at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  a

[jira] [Commented] (HDFS-6336) Cannot download file via webhdfs when wildcard is enabled

2014-05-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991047#comment-13991047
 ] 

Daryn Sharp commented on HDFS-6336:
---

I think you can get the host from the URL w/o duping it.  I believe you can get 
the actual bind address for the NN's rpc server instead of pulling it from the 
conf.  That should address the wildcard addr.
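Something like the following, as a sketch (it assumes the redirect code can 
reach the {{NameNode}} instance, and that {{getNameNodeAddress()}} returns 
the actually-bound RPC address):

{code}
// Sketch: take the RPC server's real bound address rather than the conf value,
// so a 0.0.0.0 wildcard in dfs.namenode.rpc-address never leaks into redirects.
InetSocketAddress rpcAddr = namenode.getNameNodeAddress();
String namenodeRpcAddress = NetUtils.getHostPortString(rpcAddr);
{code}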

> Cannot download file via webhdfs when wildcard is enabled
> -
>
> Key: HDFS-6336
> URL: https://issues.apache.org/jira/browse/HDFS-6336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6336.001.patch, HDFS-6336.001.patch, 
> HDFS-6336.001.patch
>
>
> With wildcard enabled, issuing a webhdfs command like
> {code}
> http://yjztvm2.private:50070/webhdfs/v1/tmp?op=OPEN
> {code}
> would give
> {code}
> http://yjztvm3.private:50075/webhdfs/v1/tmp?op=OPEN&namenoderpcaddress=0.0.0.0:8020&offset=0
> {"RemoteException":{"exception":"ConnectException","javaClassName":"java.net.ConnectException","message":"Call
>  From yjztvm3.private/192.168.142.230 to 0.0.0.0:8020 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused"}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6336) Cannot download file via webhdfs when wildcard is enabled

2014-05-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991048#comment-13991048
 ] 

Daryn Sharp commented on HDFS-6336:
---

Kihwal's HDFS-6329 should address the HA vs. non-HA issue.

> Cannot download file via webhdfs when wildcard is enabled
> -
>
> Key: HDFS-6336
> URL: https://issues.apache.org/jira/browse/HDFS-6336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6336.001.patch, HDFS-6336.001.patch, 
> HDFS-6336.001.patch
>
>
> With wildcard enabled, issuing a webhdfs command like
> {code}
> http://yjztvm2.private:50070/webhdfs/v1/tmp?op=OPEN
> {code}
> would give
> {code}
> http://yjztvm3.private:50075/webhdfs/v1/tmp?op=OPEN&namenoderpcaddress=0.0.0.0:8020&offset=0
> {"RemoteException":{"exception":"ConnectException","javaClassName":"java.net.ConnectException","message":"Call
>  From yjztvm3.private/192.168.142.230 to 0.0.0.0:8020 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused"}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991038#comment-13991038
 ] 

Chris Nauroth commented on HDFS-6326:
-

Daryn, thank you for taking a look at this.

bq. It's probably more appropriate for WebHdfsFileSystem to detect if acls are 
supported upon the first call. It keeps the ugliness out of FsShell. It would 
be great to also push the hdfs-specific exception handling into 
DistributedFileSystem.

I agree this would look cleaner, but then I would worry about a long-lived 
client process recording that ACLs aren't supported, then a NameNode gets 
upgraded/reconfigured to support ACLs, but the client continues to think it's 
unsupported.  This isn't a problem for the shell, because the process is 
short-lived.  Maybe allow for a periodic retry to check for ACL support again?
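Something like the following sketch is what I have in mind (field names and 
the retry interval are hypothetical):

{code}
// Sketch: cache "ACLs unsupported" with an expiry rather than forever, so a
// long-lived client eventually notices an upgraded NameNode.
private volatile long aclsUnsupportedUntil = 0;
private static final long ACL_RETRY_INTERVAL_MS = 10 * 60 * 1000;  // arbitrary

private boolean shouldTryAcls() {
  return Time.now() >= aclsUnsupportedUntil;
}

private void markAclsUnsupported() {
  aclsUnsupportedUntil = Time.now() + ACL_RETRY_INTERVAL_MS;
}
{code}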

bq. Given that we now have extensible protobufs and json, is there any reason 
why FileStatus, or FSPermission, etc don't return a boolean if there's an acl 
on the file? Then an acl call may only be made when necessary.

I discussed this in my first comment on this issue.  Here are some more details.

Regarding {{FileStatus}}, the same question came up on MAPREDUCE-5809 today.  
Here is a copy-paste of my response:

We considered adding the ACLs to {{FileStatus}}, but this would have been a 
backwards-incompatible change. {{FileStatus}} implements {{Writable}} 
serialization, which is more brittle to versioning than something like 
protobuf. {{FileStatus#write}} doesn't embed any kind of version number, so 
there is no reliable way to tell at runtime if we are deserializing a pre-ACLs 
{{FileStatus}} or a post-ACLs {{FileStatus}}. This would have had a high risk 
of breaking downstream code or mixed versions that had used the {{Writable}} 
serialization. An alternative would have been to skip serializing ACLs in 
{{FileStatus#write}}, but then there would have been a risk of NPE for clients 
expecting a fully serialized object. This is discussed further in the HDFS-4685 
design doc on page 12:

https://issues.apache.org/jira/secure/attachment/12627729/HDFS-ACLs-Design-3.pdf

Regarding {{FsPermission}}, at one point the HDFS-4685 feature branch was using 
a previously unused bit in the {{FsPermission}} short as an ACL indicator.  This 
got pushback though on HDFS-5923, and the ACL bit changes were backed out 
completely.  Here is a relevant comment:

https://issues.apache.org/jira/browse/HDFS-5923?focusedCommentId=13898370&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13898370

If you look at my attached HDFS-6326.1.patch, it reintroduces the ACL bit as a 
transient field, toggled on the outbound {{FileStatus}} responses from 
NameNode, but not persisted to fsimage.  Making it transient addresses some of 
the objections in HDFS-5923, but still leaves the specific objections mentioned 
in the linked comment.

[~daryn] and [~wheat9], the two of you have been most involved in the debate on 
this, so would you please review this comment, reconsider the trade-offs, and 
post your current opinion?  At this point, considering all of the problems, I'm 
in favor of reintroducing the ACL bit (patch v1).  The one problem with this 
approach is that if a 2.4.1 shell talks to a 2.4.0 NameNode, then it wouldn't 
display the '+', because the 2.4.0 NameNode wouldn't be toggling the ACL bit.  
I think it's worth accepting this relatively minor bug for one release to get 
us past the larger issues.

> WebHdfs ACL compatibility is broken
> ---
>
> Key: HDFS-6326
> URL: https://issues.apache.org/jira/browse/HDFS-6326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HDFS-6326.1.patch, HDFS-6326.2.patch
>
>
> 2.4 ACL support is completely incompatible with <2.4 webhdfs servers.  The NN 
> throws an {{IllegalArgumentException}} exception.
> {code}
> hadoop fs -ls webhdfs://nn/
> Found 21 items
> ls: Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
> [... 20 more times...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991029#comment-13991029
 ] 

Hadoop QA commented on HDFS-6326:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643408/HDFS-6326.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.http.TestHttpServer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6837//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6837//console

This message is automatically generated.

> WebHdfs ACL compatibility is broken
> ---
>
> Key: HDFS-6326
> URL: https://issues.apache.org/jira/browse/HDFS-6326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HDFS-6326.1.patch, HDFS-6326.2.patch
>
>
> 2.4 ACL support is completely incompatible with <2.4 webhdfs servers.  The NN 
> throws an {{IllegalArgumentException}} exception.
> {code}
> hadoop fs -ls webhdfs://nn/
> Found 21 items
> ls: Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
> [... 20 more times...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6340) DN can't finalize upgrade

2014-05-06 Thread Rahul Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990999#comment-13990999
 ] 

Rahul Singhal commented on HDFS-6340:
-

One more question:

Currently, the {{FinalizeCommand}} will be sent with every block report even 
after the upgrade is finalized. Although it will have no effect, would it be 
better (& worth it) to have some kind of check to prevent this?

> DN can't finalize upgrade
> -
>
> Key: HDFS-6340
> URL: https://issues.apache.org/jira/browse/HDFS-6340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.4.0
>Reporter: Rahul Singhal
>Priority: Blocker
> Attachments: HDFS-6340-branch-2.4.0.patch, HDFS-6340.patch
>
>
> I upgraded a (NN) HA cluster from 2.2.0 to 2.4.0. After I issued the 
> '-finalizeUpgrade' command, NN was able to finalize the upgrade but DN 
> couldn't (I waited for the next block report).
> I think I have found the problem to be due to HDFS-5153. I will attach a 
> proposed fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6340) DN can't finalize upgrade

2014-05-06 Thread Rahul Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Singhal updated HDFS-6340:


Attachment: HDFS-6340.patch

Rebased to branch 'trunk' with Kihwal's suggestions.

Sorry for the delay, but I was trying to wrap my head around the 
{{nn.isStandbyState()}} issue. [~kihwal], can you please file a new issue? (I 
feel you would do more justice to its description.)

> DN can't finalize upgrade
> -
>
> Key: HDFS-6340
> URL: https://issues.apache.org/jira/browse/HDFS-6340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.4.0
>Reporter: Rahul Singhal
>Priority: Blocker
> Attachments: HDFS-6340-branch-2.4.0.patch, HDFS-6340.patch
>
>
> I upgraded a (NN) HA cluster from 2.2.0 to 2.4.0. After I issued the 
> '-finalizeUpgrade' command, NN was able to finalize the upgrade but DN 
> couldn't (I waited for the next block report).
> I think I have found the problem to be due to HDFS-5153. I will attach a 
> proposed fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990986#comment-13990986
 ] 

Hadoop QA commented on HDFS-6326:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643408/HDFS-6326.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6836//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6836//console

This message is automatically generated.

> WebHdfs ACL compatibility is broken
> ---
>
> Key: HDFS-6326
> URL: https://issues.apache.org/jira/browse/HDFS-6326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HDFS-6326.1.patch, HDFS-6326.2.patch
>
>
> 2.4 ACL support is completely incompatible with <2.4 webhdfs servers.  The NN 
> throws an {{IllegalArgumentException}} exception.
> {code}
> hadoop fs -ls webhdfs://nn/
> Found 21 items
> ls: Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
> [... 20 more times...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990958#comment-13990958
 ] 

Chris Nauroth commented on HDFS-6326:
-

I needed to trigger Jenkins manually:

https://builds.apache.org/job/PreCommit-HDFS-Build/6836/


> WebHdfs ACL compatibility is broken
> ---
>
> Key: HDFS-6326
> URL: https://issues.apache.org/jira/browse/HDFS-6326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HDFS-6326.1.patch, HDFS-6326.2.patch
>
>
> 2.4 ACL support is completely incompatible with <2.4 webhdfs servers.  The NN 
> throws an {{IllegalArgumentException}} exception.
> {code}
> hadoop fs -ls webhdfs://nn/
> Found 21 items
> ls: Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
> [... 20 more times...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990961#comment-13990961
 ] 

Hadoop QA commented on HDFS-6230:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643388/HDFS-6230.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6835//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6835//console

This message is automatically generated.

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Mit Desai
> Attachments: HDFS-6230-NoUpgradesInProgress.png, 
> HDFS-6230-UpgradeInProgress.jpg, HDFS-6230.patch
>
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5381) ExtendedBlock#hashCode should use both blockId and block pool ID

2014-05-06 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990951#comment-13990951
 ] 

Benoy Antony commented on HDFS-5381:


This makes sense.

But should we use a more standard approach, as below? (auto-generated by 
Eclipse)

{code}
  @Override // Object
  public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((block == null) ? 0 : block.hashCode());
result = prime * result + ((poolId == null) ? 0 : poolId.hashCode());
return result;
  }
{code}



> ExtendedBlock#hashCode should use both blockId and block pool ID
> 
>
> Key: HDFS-5381
> URL: https://issues.apache.org/jira/browse/HDFS-5381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 2.3.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-5381.001.patch
>
>
> {{ExtendedBlock}} contains both a block pool ID and a block ID.  The 
> {{equals}} function checks both.  However, {{hashCode}} only uses block ID.  
> Since HDFS-4645, block IDs are now allocated sequentially.  This means that 
> there will be a lot of hash collisions when federation is in use.  We should 
> use both block ID and block pool ID in {{hashCode}} to prevent this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6329) WebHdfs does not work if HA is enabled on NN but logical URI is not configured.

2014-05-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6329:
-

Attachment: HDFS-6329.v2.patch

The new patch fixes another issue in MiniDFSCluster.  One conf was being 
reused for creating NNs, which internally sets the default file system. 

> WebHdfs does not work if HA is enabled on NN but logical URI is not 
> configured.
> ---
>
> Key: HDFS-6329
> URL: https://issues.apache.org/jira/browse/HDFS-6329
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-6329.patch, HDFS-6329.patch, HDFS-6329.v2.patch
>
>
> After HDFS-6100, namenode unconditionally puts the logical name (name service 
> id) as the token service when redirecting webhdfs requests to datanodes, if 
> it detects HA.
> For HA configurations with no client-side failover proxy provider (e.g. IP 
> failover), webhdfs does not work since the clients do not use the logical name.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6305) WebHdfs response decoding may throw RuntimeExceptions

2014-05-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6305:


Hadoop Flags: Reviewed

Got it.  Thanks for the explanation.  +1 for this patch, pending Jenkins.

> WebHdfs response decoding may throw RuntimeExceptions
> -
>
> Key: HDFS-6305
> URL: https://issues.apache.org/jira/browse/HDFS-6305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-6305.patch
>
>
> WebHdfs does not guard against exceptions while decoding the response 
> payload.  The json parser will throw runtime exceptions on malformed 
> responses.  The json decoding routines do not validate that the expected 
> fields are present, which may cause NPEs.
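A minimal sketch of the kind of guard this implies (illustrative only: the 
"Path" field, the method shape, and the use of Hadoop's bundled Jackson 1.x 
{{ObjectMapper}} are my assumptions, not the patch):

{code}
// Sketch: keep json parser RuntimeExceptions from escaping, and validate that
// expected fields are present before dereferencing them.
static Map<?, ?> decodeResponse(InputStream in) throws IOException {
  try {
    Map<?, ?> json = new ObjectMapper().readValue(in, Map.class);
    if (json == null || !json.containsKey("Path")) {
      throw new IOException("Missing expected field in webhdfs response");
    }
    return json;
  } catch (RuntimeException e) {
    throw new IOException("Response decoding failure: " + e, e);
  }
}
{code}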



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6319) Various syntax and style cleanups

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990936#comment-13990936
 ] 

Hadoop QA commented on HDFS-6319:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643571/HDFS-6319.8.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
  org.apache.hadoop.hdfs.TestDistributedFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6833//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6833//console

This message is automatically generated.

> Various syntax and style cleanups
> -
>
> Key: HDFS-6319
> URL: https://issues.apache.org/jira/browse/HDFS-6319
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Charles Lamb
>Assignee: Charles Lamb
> Attachments: HDFS-6319.1.patch, HDFS-6319.2.patch, HDFS-6319.3.patch, 
> HDFS-6319.4.patch, HDFS-6319.6.patch, HDFS-6319.7.patch, HDFS-6319.8.patch, 
> HDFS-6319.8.patch
>
>
> Fix various style issues, e.g.:
> - if(, while( [i.e. lack of a space after the keyword]
> - extra whitespace and newlines
> - if (...) return ... [lack of {}'s]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6250) TestBalancerWithNodeGroup.testBalancerWithRackLocality fails

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990925#comment-13990925
 ] 

Hadoop QA commented on HDFS-6250:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643526/HDFS-6250-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6834//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6834//console

This message is automatically generated.

> TestBalancerWithNodeGroup.testBalancerWithRackLocality fails
> 
>
> Key: HDFS-6250
> URL: https://issues.apache.org/jira/browse/HDFS-6250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Chen He
> Attachments: HDFS-6250-v2.patch, HDFS-6250-v3.patch, HDFS-6250.patch, 
> test_log.txt
>
>
> It was seen in https://builds.apache.org/job/PreCommit-HDFS-Build/6669/
> {panel}
> java.lang.AssertionError: expected:<1800> but was:<1810>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
>  .testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)
> {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990894#comment-13990894
 ] 

Hadoop QA commented on HDFS-6133:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643535/HDFS-6133-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDatanodeConfig

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6832//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6832//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6832//console

This message is automatically generated.

> Make Balancer support exclude specified path
> 
>
> Key: HDFS-6133
> URL: https://issues.apache.org/jira/browse/HDFS-6133
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer, namenode
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-6133-1.patch, HDFS-6133.patch
>
>
> Currently, running the Balancer will destroy the RegionServer's data locality.
> If getBlocks could exclude blocks belonging to files with a specific path 
> prefix, like "/hbase", then we could run the Balancer without destroying 
> the RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990882#comment-13990882
 ] 

Daryn Sharp commented on HDFS-6326:
---

Catching all exceptions bothers me a bit: if the call fails for other 
transient errors, acls are assumed not to be supported.  Maybe it doesn't 
seem like a big deal today because it's "only" for one command, but it will be 
if I ever get around to adding readline support to FsShell...

It's probably more appropriate for WebHdfsFileSystem to detect whether acls are 
supported upon the first call.  It keeps the ugliness out of FsShell.  It would 
be great to also push the hdfs-specific exception handling into 
DistributedFileSystem.

Given that we now have extensible protobufs and json, is there any reason why 
FileStatus, FsPermission, etc. don't return a boolean indicating whether there's 
an acl on the file?  Then an acl call would be made only when necessary.  The 
performance impact of doubling the rpc/webhdfs calls for ls concerns me, 
especially for a recursive ls on a large directory, when acls are likely to be 
the exception rather than the common case.
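
A minimal sketch of that idea, assuming a hypothetical {{hasAcl()}} bit on 
FileStatus ({{hasAcl()}} is not in the current API; {{getAclStatus()}} is):

{code}
// Sketch only: FileStatus#hasAcl() is hypothetical; FileSystem#getAclStatus()
// exists. The extra call happens only for entries that actually carry an acl.
void listWithAcls(FileSystem fs, Path dir) throws IOException {
  for (FileStatus stat : fs.listStatus(dir)) {
    if (stat.hasAcl()) {                               // assumed new boolean
      AclStatus acl = fs.getAclStatus(stat.getPath()); // second RPC, rare case
      System.out.println(stat.getPath() + " " + acl.getEntries());
    } else {
      System.out.println(stat.getPath());              // common case: one RPC
    }
  }
}
{code}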

> WebHdfs ACL compatibility is broken
> ---
>
> Key: HDFS-6326
> URL: https://issues.apache.org/jira/browse/HDFS-6326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HDFS-6326.1.patch, HDFS-6326.2.patch
>
>
> 2.4 ACL support is completely incompatible with <2.4 webhdfs servers.  The NN 
> throws an {{IllegalArgumentException}}.
> {code}
> hadoop fs -ls webhdfs://nn/
> Found 21 items
> ls: Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
> [... 20 more times...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6336) Cannot download file via webhdfs when wildcard is enabled

2014-05-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990877#comment-13990877
 ] 

Yongjun Zhang commented on HDFS-6336:
-

BTW, the TestBalancerWithNodeGroup failure appears to be 
https://issues.apache.org/jira/browse/HDFS-6250.


> Cannot download file via webhdfs when wildcard is enabled
> -
>
> Key: HDFS-6336
> URL: https://issues.apache.org/jira/browse/HDFS-6336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6336.001.patch, HDFS-6336.001.patch, 
> HDFS-6336.001.patch
>
>
> With wildcard enabled, issuing a webhdfs command like
> {code}
> http://yjztvm2.private:50070/webhdfs/v1/tmp?op=OPEN
> {code}
> would give
> {code}
> http://yjztvm3.private:50075/webhdfs/v1/tmp?op=OPEN&namenoderpcaddress=0.0.0.0:8020&offset=0
> {"RemoteException":{"exception":"ConnectException","javaClassName":"java.net.ConnectException","message":"Call
>  From yjztvm3.private/192.168.142.230 to 0.0.0.0:8020 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused"}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2014-05-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990874#comment-13990874
 ] 

Daryn Sharp commented on HDFS-6133:
---

Upon a cursory review, I think the patch is promising.  I wonder if favored 
nodes and pinning should be combined, but it stands to reason that if you want 
specific nodes, you probably don't want the blocks to move.  Plus it's the 
least disruptive to the apis.  I haven't crawled through the favored node code, 
but I'm assuming the NN may not grant all the requested/favored nodes?  If so, 
does it make sense to pin only the blocks on the favored nodes that were 
granted?

A quick question: do you need to use a Boolean instead of a primitive 
boolean?  
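
For illustration, a hedged sketch of the pinning idea (the class and method 
names only approximate the Balancer's internals, and {{isPinned()}} is 
hypothetical):

{code}
// Hedged sketch; names approximate the Balancer's internals and isPinned()
// is hypothetical: a replica placed via a granted favored-nodes hint is
// simply never a move candidate.
private boolean isGoodBlockCandidate(Source source, BalancerDatanode target,
    BalancerBlock block) {
  if (isPinned(block, source)) {
    return false;  // leave favored-node replicas where the client asked
  }
  // ... existing placement-policy and locality checks ...
  return true;
}
{code}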

> Make Balancer support exclude specified path
> 
>
> Key: HDFS-6133
> URL: https://issues.apache.org/jira/browse/HDFS-6133
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer, namenode
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-6133-1.patch, HDFS-6133.patch
>
>
> Currently, running the Balancer will destroy the RegionServer's data locality.
> If getBlocks could exclude blocks belonging to files with a specific path 
> prefix, like "/hbase", then we could run the Balancer without destroying 
> the RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6202) Replace direct usage of ProxyUser Constants with function

2014-05-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990858#comment-13990858
 ] 

Daryn Sharp commented on HDFS-6202:
---

+1  Simple enough change

> Replace direct usage of ProxyUser Constants with function
> -
>
> Key: HDFS-6202
> URL: https://issues.apache.org/jira/browse/HDFS-6202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HDFS-6202.patch
>
>
> This is a prerequisite for HADOOP-10471 to reduce the visibility of the 
> constants of ProxyUsers.
> The _TestJspHelper_ directly uses the constants in _ProxyUsers_. 
> These can be replaced with the corresponding functions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6328) Simplify code in FSDirectory

2014-05-06 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6328:
-

Attachment: HDFS-6328.002.patch

> Simplify code in FSDirectory
> 
>
> Key: HDFS-6328
> URL: https://issues.apache.org/jira/browse/HDFS-6328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-6328.000.patch, HDFS-6328.001.patch, 
> HDFS-6328.002.patch
>
>
> This jira proposes:
> # Cleaning up dead code in FSDirectory.
> # Simplifying the control flows that IntelliJ flags as warnings.
> # Moving functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6305) WebHdfs response decoding may throw RuntimeExceptions

2014-05-06 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-6305:
--

Status: Patch Available  (was: Open)

Re-kicking precommit again.

> WebHdfs response decoding may throw RuntimeExceptions
> -
>
> Key: HDFS-6305
> URL: https://issues.apache.org/jira/browse/HDFS-6305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-6305.patch
>
>
> WebHdfs does not guard against exceptions while decoding the response 
> payload.  The json parser will throw runtime exceptions on malformed 
> responses.  The json decoding routines do not validate that the expected fields 
> are present, which may cause NPEs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6305) WebHdfs response decoding may throw RuntimeExceptions

2014-05-06 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-6305:
--

Status: Open  (was: Patch Available)

> WebHdfs response decoding may throw RuntimeExceptions
> -
>
> Key: HDFS-6305
> URL: https://issues.apache.org/jira/browse/HDFS-6305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-6305.patch
>
>
> WebHdfs does not guard against exceptions while decoding the response 
> payload.  The json parser will throw runtime exceptions on malformed 
> responses.  The json decoding routines do not validate that the expected fields 
> are present, which may cause NPEs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6305) WebHdfs response decoding may throw RuntimeExceptions

2014-05-06 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990848#comment-13990848
 ] 

Daryn Sharp commented on HDFS-6305:
---

Rather than change all of the decoding methods (yet), I went the quick route of 
converting any non-IOException during decoding to IOExceptions.

{code}
+  catch (Exception e) { // catch json parser errors
+    final IOException ioe =
+        new IOException("Response decoding failure: " + e.toString(), e);
{code}

This also handles the case where the json parser itself throws runtime 
exceptions if the json is incomplete or malformed as stressed in the test cases.
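
A self-contained sketch of the same pattern, assuming the jetty JSON utility 
that webhdfs uses (class and method names here are illustrative, not the 
patch's exact call sites):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Map;
import org.mortbay.util.ajax.JSON;

class JsonDecode {
  // Any RuntimeException from the parser (malformed/incomplete json) or a
  // bad cast surfaces as an IOException; real I/O errors pass through as-is.
  static Map<?, ?> decode(InputStream in) throws IOException {
    try {
      return (Map<?, ?>) JSON.parse(new InputStreamReader(in));
    } catch (IOException ioe) {
      throw ioe;
    } catch (Exception e) {
      throw new IOException("Response decoding failure: " + e.toString(), e);
    }
  }
}
{code}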

> WebHdfs response decoding may throw RuntimeExceptions
> -
>
> Key: HDFS-6305
> URL: https://issues.apache.org/jira/browse/HDFS-6305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-6305.patch
>
>
> WebHdfs does not guard against exceptions while decoding the response 
> payload.  The json parser will throw runtime exceptions on malformed 
> responses.  The json decoding routines do not validate that the expected fields 
> are present, which may cause NPEs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6343) fix TestNamenodeRetryCache and TestRetryCacheWithHA failures

2014-05-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6343:


Hadoop Flags: Reviewed

+1 for the patch.  Thank you, Uma.

> fix TestNamenodeRetryCache and TestRetryCacheWithHA failures
> 
>
> Key: HDFS-6343
> URL: https://issues.apache.org/jira/browse/HDFS-6343
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6343.patch
>
>
> Due to the increase of XAttr operations in DFSTestUtil.runOperations, the cache 
> set size assertion must also be increased.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6336) Cannot download file via webhdfs when wildcard is enabled

2014-05-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990790#comment-13990790
 ] 

Yongjun Zhang commented on HDFS-6336:
-

Thanks [~wheat9], I actually did test both HA and non-HA. For HA, 
rpcAddr.getAddress().isAnyLocalAddress() won't be true, so it still works 
correctly.
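
For reference, a hedged sketch of that guard (the actual patch may differ in 
detail):

{code}
// Never advertise a wildcard bind address to clients: if the configured NN
// RPC address is 0.0.0.0, substitute the host the client actually contacted.
InetSocketAddress nnRpcAddr = NameNode.getAddress(conf); // may be 0.0.0.0:8020
String host = nnRpcAddr.getAddress().isAnyLocalAddress()
    ? request.getServerName()      // hostname used to reach this NN
    : nnRpcAddr.getHostName();
String namenodeRpcAddress = host + ":" + nnRpcAddr.getPort();
{code}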


> Cannot download file via webhdfs when wildcard is enabled
> -
>
> Key: HDFS-6336
> URL: https://issues.apache.org/jira/browse/HDFS-6336
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6336.001.patch, HDFS-6336.001.patch, 
> HDFS-6336.001.patch
>
>
> With wildcard enabled, issuing a webhdfs command like
> {code}
> http://yjztvm2.private:50070/webhdfs/v1/tmp?op=OPEN
> {code}
> would give
> {code}
> http://yjztvm3.private:50075/webhdfs/v1/tmp?op=OPEN&namenoderpcaddress=0.0.0.0:8020&offset=0
> {"RemoteException":{"exception":"ConnectException","javaClassName":"java.net.ConnectException","message":"Call
>  From yjztvm3.private/192.168.142.230 to 0.0.0.0:8020 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused"}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6340) DN can't finalize upgarde

2014-05-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990780#comment-13990780
 ] 

Arpit Agarwal commented on HDFS-6340:
-

Thanks [~kihwal].

[~rahulsinghal.iitd], could you please rebase the patch to trunk?

> DN can't finalize upgarde
> -
>
> Key: HDFS-6340
> URL: https://issues.apache.org/jira/browse/HDFS-6340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.4.0
>Reporter: Rahul Singhal
>Priority: Blocker
> Attachments: HDFS-6340-branch-2.4.0.patch
>
>
> I upgraded a (NN) HA cluster from 2.2.0 to 2.4.0. After I issued the 
> '-finalizeUpgrade' command, the NN was able to finalize the upgrade but the DN 
> couldn't (I waited for the next block report).
> I think I have found the problem to be due to HDFS-5153. I will attach a 
> proposed fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6250) TestBalancerWithNodeGroup.testBalancerWithRackLocality fails

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990778#comment-13990778
 ] 

Hadoop QA commented on HDFS-6250:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643526/HDFS-6250-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6831//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6831//console

This message is automatically generated.

> TestBalancerWithNodeGroup.testBalancerWithRackLocality fails
> 
>
> Key: HDFS-6250
> URL: https://issues.apache.org/jira/browse/HDFS-6250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Chen He
> Attachments: HDFS-6250-v2.patch, HDFS-6250-v3.patch, HDFS-6250.patch, 
> test_log.txt
>
>
> It was seen in https://builds.apache.org/job/PreCommit-HDFS-Build/6669/
> {panel}
> java.lang.AssertionError: expected:<1800> but was:<1810>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
>  .testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)
> {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6342) TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if balancer.id file is huge

2014-05-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6342:
-

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Ok. Let's end the discussion here and focus on HDFS-6250. Thanks!

> TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if 
> balancer.id file is huge
> ---
>
> Key: HDFS-6342
> URL: https://issues.apache.org/jira/browse/HDFS-6342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HDFS-6342.patch
>
>
> The testBalancerWithRackLocality method tests the balancer moving data 
> blocks with rack locality taken into consideration. 
> It creates a two-node cluster. One node belongs to rack0nodeGroup0, the other 
> node belongs to rack1nodeGroup1. In this 2-datanode minicluster, the block size 
> is 10B and the total cluster capacity is 6000B (3000B on each datanode). It 
> creates 180 data blocks with replication factor 2. Then, a new datanode is 
> created (in rack1nodeGroup2) and the balancer starts balancing the cluster.
> It expects data blocks to move only within rack1. After the balancer is 
> done, it assumes the data size on both racks is the same. This will break
> if the balancer.id file is huge and there is inter-rack data block movement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-05-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990682#comment-13990682
 ] 

Kihwal Lee commented on HDFS-6230:
--

[~wheat9], you might be the best person to review this. Would you mind taking a 
look?

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Mit Desai
> Attachments: HDFS-6230-NoUpgradesInProgress.png, 
> HDFS-6230-UpgradeInProgress.jpg, HDFS-6230.patch
>
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6342) TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if balancer.id file is huge

2014-05-06 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990668#comment-13990668
 ] 

Binglin Chang commented on HDFS-6342:
-

Hi [~airbots], I agree with not changing the balancer.id file; my new patch 
doesn't change it. Please see my new comments in HDFS-6250.

> TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if 
> balancer.id file is huge
> ---
>
> Key: HDFS-6342
> URL: https://issues.apache.org/jira/browse/HDFS-6342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HDFS-6342.patch
>
>
> The testBalancerWithRackLocality method tests the balancer moving data 
> blocks with rack locality taken into consideration. 
> It creates a two-node cluster. One node belongs to rack0nodeGroup0, the other 
> node belongs to rack1nodeGroup1. In this 2-datanode minicluster, the block size 
> is 10B and the total cluster capacity is 6000B (3000B on each datanode). It 
> creates 180 data blocks with replication factor 2. Then, a new datanode is 
> created (in rack1nodeGroup2) and the balancer starts balancing the cluster.
> It expects data blocks to move only within rack1. After the balancer is 
> done, it assumes the data size on both racks is the same. This will break
> if the balancer.id file is huge and there is inter-rack data block movement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6340) DN can't finalize upgarde

2014-05-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990656#comment-13990656
 ] 

Kihwal Lee commented on HDFS-6340:
--

[~arpitagarwal]: I am fine with applying the simple fix and addressing the HA 
state check in a separate jira.

> DN can't finalize upgarde
> -
>
> Key: HDFS-6340
> URL: https://issues.apache.org/jira/browse/HDFS-6340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.4.0
>Reporter: Rahul Singhal
>Priority: Blocker
> Attachments: HDFS-6340-branch-2.4.0.patch
>
>
> I upgraded a (NN) HA cluster from 2.2.0 to 2.4.0. After I issued the 
> '-finalizeUpgrade' command, the NN was able to finalize the upgrade but the DN 
> couldn't (I waited for the next block report).
> I think I have found the problem to be due to HDFS-5153. I will attach a 
> proposed fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6342) TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if balancer.id file is huge

2014-05-06 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990655#comment-13990655
 ] 

Chen He commented on HDFS-6342:
---

Hi [~djp], I am OK about merging HDFS-6250 and HDFS-6342.

> TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if 
> balancer.id file is huge
> ---
>
> Key: HDFS-6342
> URL: https://issues.apache.org/jira/browse/HDFS-6342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HDFS-6342.patch
>
>
> The testBalancerWithRackLocality method tests the balancer moving data 
> blocks with rack locality taken into consideration. 
> It creates a two-node cluster. One node belongs to rack0nodeGroup0, the other 
> node belongs to rack1nodeGroup1. In this 2-datanode minicluster, the block size 
> is 10B and the total cluster capacity is 6000B (3000B on each datanode). It 
> creates 180 data blocks with replication factor 2. Then, a new datanode is 
> created (in rack1nodeGroup2) and the balancer starts balancing the cluster.
> It expects data blocks to move only within rack1. After the balancer is 
> done, it assumes the data size on both racks is the same. This will break
> if the balancer.id file is huge and there is inter-rack data block movement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6342) TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if balancer.id file is huge

2014-05-06 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990654#comment-13990654
 ] 

Chen He commented on HDFS-6342:
---

Hi [~decster]
1) For the null balancer.id file, I disagree with you. Sometimes, 
administrators or users want to know which node runs the balancer. The hostname 
in the balancer.id file can provide location information if the cluster becomes 
large. 
 
2) For the timeout issue: in this patch I did not change the timeout; it is 
still 40 seconds. The average execution time of this method is about 31 seconds 
(averaged over 40 runs).
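
A minimal sketch of the point in 1), under the assumption that the balancer 
records its host in the id file (not the exact Balancer code):

{code}
// The balancer records the host it runs on in the id file, so an operator
// on a large cluster can tell where it is running.
FSDataOutputStream out = fs.create(new Path("/system/balancer.id"), false);
out.writeBytes(InetAddress.getLocalHost().getHostName());
out.hflush();
{code}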


> TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if 
> balancer.id file is huge
> ---
>
> Key: HDFS-6342
> URL: https://issues.apache.org/jira/browse/HDFS-6342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HDFS-6342.patch
>
>
> The testBalancerWithRackLocality method tests the balancer moving data 
> blocks with rack locality taken into consideration. 
> It creates a two-node cluster. One node belongs to rack0nodeGroup0, the other 
> node belongs to rack1nodeGroup1. In this 2-datanode minicluster, the block size 
> is 10B and the total cluster capacity is 6000B (3000B on each datanode). It 
> creates 180 data blocks with replication factor 2. Then, a new datanode is 
> created (in rack1nodeGroup2) and the balancer starts balancing the cluster.
> It expects data blocks to move only within rack1. After the balancer is 
> done, it assumes the data size on both racks is the same. This will break
> if the balancer.id file is huge and there is inter-rack data block movement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6319) Various syntax and style cleanups

2014-05-06 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6319:
---

Attachment: HDFS-6319.8.patch

> Various syntax and style cleanups
> -
>
> Key: HDFS-6319
> URL: https://issues.apache.org/jira/browse/HDFS-6319
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Charles Lamb
>Assignee: Charles Lamb
> Attachments: HDFS-6319.1.patch, HDFS-6319.2.patch, HDFS-6319.3.patch, 
> HDFS-6319.4.patch, HDFS-6319.6.patch, HDFS-6319.7.patch, HDFS-6319.8.patch, 
> HDFS-6319.8.patch
>
>
> Fix various style issues, e.g.:
> - if(, while( [i.e. lack of a space after the keyword]
> - extra whitespace and newlines
> - if (...) return ... [lack of {}'s]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6250) TestBalancerWithNodeGroup.testBalancerWithRackLocality fails

2014-05-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990619#comment-13990619
 ] 

Junping Du commented on HDFS-6250:
--

The new patch sounds like a good direction to me. Thanks [~decster] and 
[~airbots] for the effort here. Kicking off the Jenkins test manually.

> TestBalancerWithNodeGroup.testBalancerWithRackLocality fails
> 
>
> Key: HDFS-6250
> URL: https://issues.apache.org/jira/browse/HDFS-6250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Chen He
> Attachments: HDFS-6250-v2.patch, HDFS-6250-v3.patch, HDFS-6250.patch, 
> test_log.txt
>
>
> It was seen in https://builds.apache.org/job/PreCommit-HDFS-Build/6669/
> {panel}
> java.lang.AssertionError: expected:<1800> but was:<1810>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
>  .testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)
> {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6342) TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if balancer.id file is huge

2014-05-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990613#comment-13990613
 ] 

Junping Du commented on HDFS-6342:
--

Hi [~airbots] and [~decster], can we move all discussion and patch effort to 
HDFS-6250? Fixing the test failure of TestBalancerWithNodeGroup doesn't sound 
like work that needs a sub-JIRA.

> TestBalancerWithNodeGroup.testBalancerWithRackLocality may fail if 
> balancer.id file is huge
> ---
>
> Key: HDFS-6342
> URL: https://issues.apache.org/jira/browse/HDFS-6342
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HDFS-6342.patch
>
>
> The testBalancerWithRackLocality method tests the balancer moving data 
> blocks with rack locality taken into consideration. 
> It creates a two-node cluster. One node belongs to rack0nodeGroup0, the other 
> node belongs to rack1nodeGroup1. In this 2-datanode minicluster, the block size 
> is 10B and the total cluster capacity is 6000B (3000B on each datanode). It 
> creates 180 data blocks with replication factor 2. Then, a new datanode is 
> created (in rack1nodeGroup2) and the balancer starts balancing the cluster.
> It expects data blocks to move only within rack1. After the balancer is 
> done, it assumes the data size on both racks is the same. This will break
> if the balancer.id file is huge and there is inter-rack data block movement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path

2014-05-06 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-6133:
---

Attachment: (was: HDFS-6133.patch.1)

> Make Balancer support exclude specified path
> 
>
> Key: HDFS-6133
> URL: https://issues.apache.org/jira/browse/HDFS-6133
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer, namenode
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-6133.patch
>
>
> Currently, running the Balancer will destroy the RegionServer's data locality.
> If getBlocks could exclude blocks belonging to files with a specific path 
> prefix, like "/hbase", then we could run the Balancer without destroying 
> the RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path

2014-05-06 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-6133:
---

Attachment: HDFS-6133-1.patch

> Make Balancer support exclude specified path
> 
>
> Key: HDFS-6133
> URL: https://issues.apache.org/jira/browse/HDFS-6133
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer, namenode
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-6133-1.patch, HDFS-6133.patch
>
>
> Currently, running the Balancer will destroy the RegionServer's data locality.
> If getBlocks could exclude blocks belonging to files with a specific path 
> prefix, like "/hbase", then we could run the Balancer without destroying 
> the RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path

2014-05-06 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-6133:
---

Attachment: HDFS-6133.patch.1

This patch sets the sticky bit on the block file if the DFSClient has the 
favored-nodes hint set, and the Balancer then refuses to move such blocks.
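
A hedged sketch of the corresponding pinned-replica check (assuming a POSIX 
filesystem and the JDK's "unix" attribute view; the real patch may use 
NativeIO instead):

{code}
// 01000 (octal) is the sticky bit; a block file carrying it is treated as
// pinned and skipped by the balancer.
static boolean isPinned(File blockFile) throws IOException {
  int mode = (Integer) Files.getAttribute(blockFile.toPath(), "unix:mode");
  return (mode & 01000) != 0;
}
{code}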

> Make Balancer support exclude specified path
> 
>
> Key: HDFS-6133
> URL: https://issues.apache.org/jira/browse/HDFS-6133
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer, namenode
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
> Attachments: HDFS-6133.patch
>
>
> Currently, running the Balancer will destroy the RegionServer's data locality.
> If getBlocks could exclude blocks belonging to files with a specific path 
> prefix, like "/hbase", then we could run the Balancer without destroying 
> the RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6250) TestBalancerWithNodeGroup.testBalancerWithRackLocality fails

2014-05-06 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990491#comment-13990491
 ] 

Binglin Chang commented on HDFS-6250:
-

I made a patch to address this jira and HDFS-6159, along with a minor fix in 
the balancer.id related doc. Changes:
1. Make CAPACITY 5000 rather than 6000, so it keeps the same ratio to block 
size as before, making DEFAULT_BLOCK_SIZE useful.
2. Change the validate method in testBalancerWithRackLocality so it doesn't 
depend on the balancer.id file.
3. Fix a doc error about balancer.id.
With these changes the test now runs in only about 7 seconds, rather than 20+ 
seconds.

> TestBalancerWithNodeGroup.testBalancerWithRackLocality fails
> 
>
> Key: HDFS-6250
> URL: https://issues.apache.org/jira/browse/HDFS-6250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Chen He
> Attachments: HDFS-6250-v2.patch, HDFS-6250-v3.patch, HDFS-6250.patch, 
> test_log.txt
>
>
> It was seen in https://builds.apache.org/job/PreCommit-HDFS-Build/6669/
> {panel}
> java.lang.AssertionError: expected:<1800> but was:<1810>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
>  .testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)
> {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6250) TestBalancerWithNodeGroup.testBalancerWithRackLocality fails

2014-05-06 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-6250:


Attachment: HDFS-6250-v3.patch

> TestBalancerWithNodeGroup.testBalancerWithRackLocality fails
> 
>
> Key: HDFS-6250
> URL: https://issues.apache.org/jira/browse/HDFS-6250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Chen He
> Attachments: HDFS-6250-v2.patch, HDFS-6250-v3.patch, HDFS-6250.patch, 
> test_log.txt
>
>
> It was seen in https://builds.apache.org/job/PreCommit-HDFS-Build/6669/
> {panel}
> java.lang.AssertionError: expected:<1800> but was:<1810>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
>  .testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)
> {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6298) XML based End-to-End test for getfattr and setfattr commands

2014-05-06 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990471#comment-13990471
 ] 

Uma Maheswara Rao G commented on HDFS-6298:
---

Thanks Yi for the patch update.
+1 Patch looks good to me.

> XML based End-to-End test for getfattr and setfattr commands
> 
>
> Key: HDFS-6298
> URL: https://issues.apache.org/jira/browse/HDFS-6298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6298.1.patch, HDFS-6298.2.patch, HDFS-6298.3.patch, 
> HDFS-6298.patch
>
>
> This JIRA is to add test cases with the CLI



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6298) XML based End-to-End test for getfattr and setfattr commands

2014-05-06 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6298.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

I have just committed this to branch!

> XML based End-to-End test for getfattr and setfattr commands
> 
>
> Key: HDFS-6298
> URL: https://issues.apache.org/jira/browse/HDFS-6298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6298.1.patch, HDFS-6298.2.patch, HDFS-6298.3.patch, 
> HDFS-6298.patch
>
>
> This JIRA is to add test cases with the CLI



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6343) fix TestNamenodeRetryCache and TestRetryCacheWithHA failures

2014-05-06 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-6343:
--

Attachment: HDFS-6343.patch

Updated the simple patch to increase the number in the assertion.

> fix TestNamenodeRetryCache and TestRetryCacheWithHA failures
> 
>
> Key: HDFS-6343
> URL: https://issues.apache.org/jira/browse/HDFS-6343
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6343.patch
>
>
> Due to the increase of XAttr operations in DFSTestUtil.runOperations, the cache 
> set size assertion must also be increased.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6343) fix TestNamenodeRetryCache and TestRetryCacheWithHA failures

2014-05-06 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-6343:
-

 Summary: fix TestNamenodeRetryCache and TestRetryCacheWithHA 
failures
 Key: HDFS-6343
 URL: https://issues.apache.org/jira/browse/HDFS-6343
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Fix For: HDFS XAttrs (HDFS-2006)


Due to the increase of XAttr operations in DFSTestUtil.runOperations, the cache 
set size assertion must also be increased.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6250) TestBalancerWithNodeGroup.testBalancerWithRackLocality fails

2014-05-06 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990440#comment-13990440
 ] 

Binglin Chang commented on HDFS-6250:
-

Hi [~airbots], please see my comments about HDFS-6159
https://issues.apache.org/jira/browse/HDFS-6159?focusedCommentId=13990433&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13990433
The fixes in HDFS-6159 and this jira seem to be suboptimal; we may need to 
reconsider the approach.

> TestBalancerWithNodeGroup.testBalancerWithRackLocality fails
> 
>
> Key: HDFS-6250
> URL: https://issues.apache.org/jira/browse/HDFS-6250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Chen He
> Attachments: HDFS-6250-v2.patch, HDFS-6250.patch, test_log.txt
>
>
> It was seen in https://builds.apache.org/job/PreCommit-HDFS-Build/6669/
> {panel}
> java.lang.AssertionError: expected:<1800> but was:<1810>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
>  .testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)
> {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6159) TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success

2014-05-06 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990433#comment-13990433
 ] 

Binglin Chang commented on HDFS-6159:
-

The fix in the patch has an issue:
bq. I propose to increase datanode capacity up to 6000B and data block size to 
100B.
{code}
  static final int DEFAULT_BLOCK_SIZE = 100;
{code}
This variable is not used anywhere, so changing it does not change the block 
size. The capacity is changed to 6000 while the block size actually remains 
10 bytes, which leads to more blocks needing to be moved, increasing the total 
balancer running time and making a timeout more likely.
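
A sketch of the assumed intent, i.e. actually pushing the constant into the 
test conf so MiniDFSCluster picks it up:

{code}
static final int DEFAULT_BLOCK_SIZE = 100;

void initConf(Configuration conf) {
  // Without these two lines the constant is inert: MiniDFSCluster keeps
  // its previous block size no matter what the field says.
  conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, DEFAULT_BLOCK_SIZE);
  conf.setInt(DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, DEFAULT_BLOCK_SIZE);
}
{code}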


> TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block 
> missing after balancer success
> --
>
> Key: HDFS-6159
> URL: https://issues.apache.org/jira/browse/HDFS-6159
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Chen He
>Assignee: Chen He
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-6159-v2.patch, HDFS-6159-v2.patch, HDFS-6159.patch, 
> logs.txt
>
>
> TestBalancerWithNodeGroup.testBalancerWithNodeGroup will report a false 
> negative failure if data block(s) are lost after the balancer 
> successfully finishes. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6313) WebHdfs may use the wrong NN when configured for multiple HA NNs

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990393#comment-13990393
 ] 

Hadoop QA commented on HDFS-6313:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643424/HDFS-6313.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6830//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6830//console

This message is automatically generated.

> WebHdfs may use the wrong NN when configured for multiple HA NNs
> 
>
> Key: HDFS-6313
> URL: https://issues.apache.org/jira/browse/HDFS-6313
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Assignee: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-6313.branch-2.4.patch, HDFS-6313.patch
>
>
> WebHdfs resolveNNAddr will return a union of addresses for all HA configured 
> NNs.  The client may access the wrong NN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6314) Test cases for XAttrs

2014-05-06 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990394#comment-13990394
 ] 

Yi Liu commented on HDFS-6314:
--

Thanks Chris for the review. I will add them :)

> Test cases for XAttrs
> -
>
> Key: HDFS-6314
> URL: https://issues.apache.org/jira/browse/HDFS-6314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6314.1.patch, HDFS-6314.patch
>
>
> Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving 
> new checkpoint.
> Tests XAttr for Snapshot, symlinks.
> Tests XAttr for HA failover.
> And more...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6298) XML based End-to-End test for getfattr and setfattr commands

2014-05-06 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6298:
-

Attachment: HDFS-6298.3.patch

Thanks Uma. OK, let's add the test cases for dir/file permissions for xattrs in 
HDFS-6314.

The new patch includes 3 new end-to-end tests (a programmatic sketch of the 
same checks follows below):

setfattr : Add an xattr of trusted namespace
setfattr : Add an xattr of system namespace
setfattr : Add an xattr of security namespace
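
For reference, a hedged programmatic sketch of the same namespace checks 
(assuming the HDFS-2006 branch FileSystem xattr API, 
{{setXAttr(Path, String, byte[])}}):

{code}
// user namespace should succeed; trusted/system/security should be
// rejected (or restricted to the superuser).
fs.setXAttr(path, "user.a1", "1234".getBytes());
try {
  fs.setXAttr(path, "security.a1", "1234".getBytes());
  Assert.fail("expected the security namespace to be rejected");
} catch (IOException expected) {
  // restricted namespace
}
{code}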

> XML based End-to-End test for getfattr and setfattr commands
> 
>
> Key: HDFS-6298
> URL: https://issues.apache.org/jira/browse/HDFS-6298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6298.1.patch, HDFS-6298.2.patch, HDFS-6298.3.patch, 
> HDFS-6298.patch
>
>
> This JIRA is to add test cases with the CLI



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6328) Simplify code in FSDirectory

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990382#comment-13990382
 ] 

Hadoop QA commented on HDFS-6328:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643430/HDFS-6328.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6829//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6829//console

This message is automatically generated.

> Simplify code in FSDirectory
> 
>
> Key: HDFS-6328
> URL: https://issues.apache.org/jira/browse/HDFS-6328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-6328.000.patch, HDFS-6328.001.patch
>
>
> This jira proposes:
> # Cleaning up dead code in FSDirectory.
> # Simplifying the control flows that IntelliJ flags as warnings.
> # Moving functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)