[jira] [Commented] (HDFS-5594) FileSystem API for ACLs.

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841076#comment-13841076
 ] 

Hadoop QA commented on HDFS-5594:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617349/HDFS-5594.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.TestHarFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5658//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5658//console

This message is automatically generated.

> FileSystem API for ACLs.
> 
>
> Key: HDFS-5594
> URL: https://issues.apache.org/jira/browse/HDFS-5594
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS ACLs (HDFS-4685)
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-5594.1.patch
>
>
> Add new methods to {{FileSystem}} for manipulating ACLs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5637) try to refeatchToken while local read InvalidToken occurred

2013-12-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841071#comment-13841071
 ] 

Andrew Purtell commented on HDFS-5637:
--

+1

Looks like an update needed for Hadoop 2.x that was missed.

> try to refeatchToken while local read InvalidToken occurred
> ---
>
> Key: HDFS-5637
> URL: https://issues.apache.org/jira/browse/HDFS-5637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, security
>Affects Versions: 2.0.5-alpha, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5637.txt
>
>
> we observed several warning logs like below from region server nodes:
> 2013-12-05,13:22:26,042 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /10.2.201.110:11402 for block, add to deadNodes and continue. 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(Remot

[jira] [Updated] (HDFS-5594) FileSystem API for ACLs.

2013-12-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5594:


Attachment: HDFS-5594.1.patch

This patch provides the {{FileSystem}} API described in the design doc for 
reading and writing ACLs.  Here are a few additional notes on this patch:
# The design doc mentioned an {{AclSpec}} class for use in the modification 
operations.  It turns out that we can simplify this to a {{Set}}, so 
I didn't code a separate {{AclSpec}} class.  I'll plan to update the design doc 
accordingly.
# It might look odd that we have separate enums for read flags and write flags, 
both of which just contain a {{RECURSIVE}} option.  It's possible that these 
flags will diverge over time, so I'd like to keep the enums separate.  For 
example, Linux getfacl has various additional filtering flags that wouldn't 
make sense in the context of a write operation.
# The new objects are following patterns that we've started to use recently on 
things like the cache management APIs.  I made the objects immutable and 
provided builders to avoid an explosion of multiple constructors.
# I'm reusing {{FsAction}} in the ACL model.  This class is perfect for 
representing the permissions portion of an ACL entry, and it has convenience 
methods for computing union and intersection of permissions, which will help 
later.  I've expanded visibility of the class from 
{{LimitedPrivate(\{"HDFS"\})}}/{{Unstable}} to {{Public}}/{{Stable}}.  I 
checked revision history, and this class actually has been quite stable.  The 
last code change was 5 years ago, and that was an internal implementation 
change that didn't alter the interface.  I see very little risk in expanding 
the visibility of this class.

The branch hasn't deviated from trunk yet, so I'm going to submit this for a 
Jenkins run.
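
For illustration, here is a minimal, self-contained sketch of the immutable-object-plus-builder pattern described in item 3, reusing {{FsAction}} for the permission bits as in item 4. The class name and fields below are hypothetical and are not the patch's actual code:

{code}
// Hedged sketch: illustrates the builder pattern only; not the HDFS-5594 patch.
import org.apache.hadoop.fs.permission.FsAction;

public final class AclEntrySketch {
  private final String name;
  private final FsAction permission;

  private AclEntrySketch(Builder b) {
    this.name = b.name;
    this.permission = b.permission;
  }

  public String getName() { return name; }
  public FsAction getPermission() { return permission; }

  public static final class Builder {
    private String name;
    private FsAction permission;

    public Builder setName(String name) { this.name = name; return this; }
    public Builder setPermission(FsAction p) { this.permission = p; return this; }
    public AclEntrySketch build() { return new AclEntrySketch(this); }
  }

  public static void main(String[] args) {
    // FsAction already models the rwx bits and has union/intersection helpers.
    AclEntrySketch entry = new AclEntrySketch.Builder()
        .setName("analyst")
        .setPermission(FsAction.READ_EXECUTE)
        .build();
    System.out.println(entry.getName() + ": " + entry.getPermission());
  }
}
{code}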


> FileSystem API for ACLs.
> 
>
> Key: HDFS-5594
> URL: https://issues.apache.org/jira/browse/HDFS-5594
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS ACLs (HDFS-4685)
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-5594.1.patch
>
>
> Add new methods to {{FileSystem}} for manipulating ACLs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5594) FileSystem API for ACLs.

2013-12-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5594:


Status: Patch Available  (was: Open)

> FileSystem API for ACLs.
> 
>
> Key: HDFS-5594
> URL: https://issues.apache.org/jira/browse/HDFS-5594
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS ACLs (HDFS-4685)
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-5594.1.patch
>
>
> Add new methods to {{FileSystem}} for manipulating ACLs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5594) FileSystem API for ACLs.

2013-12-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5594:


Description: Add new methods to {{FileSystem}} for manipulating ACLs.  
(was: Add new methods to {{FileSystem}} and {{FileContext}} for manipulating 
ACLs.)

I've updated the description to state that this issue covers just 
{{FileSystem}}.  For adding the API to {{FileContext}} and 
{{AbstractFileSystem}}, I created HDFS-5638.

> FileSystem API for ACLs.
> 
>
> Key: HDFS-5594
> URL: https://issues.apache.org/jira/browse/HDFS-5594
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS ACLs (HDFS-4685)
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>
> Add new methods to {{FileSystem}} for manipulating ACLs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5638) FileContext API for ACLs.

2013-12-05 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5638:
---

 Summary: FileContext API for ACLs.
 Key: HDFS-5638
 URL: https://issues.apache.org/jira/browse/HDFS-5638
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth


Add new methods to {{AbstractFileSystem}} and {{FileContext}} for manipulating 
ACLs.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5638) FileContext API for ACLs.

2013-12-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5638:


Component/s: (was: security)
 (was: hdfs-client)
 (was: namenode)

> FileContext API for ACLs.
> -
>
> Key: HDFS-5638
> URL: https://issues.apache.org/jira/browse/HDFS-5638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS ACLs (HDFS-4685)
>Reporter: Chris Nauroth
>
> Add new methods to {{AbstractFileSystem}} and {{FileContext}} for 
> manipulating ACLs.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5312) Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured http policy

2013-12-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5312:
-

Attachment: HDFS-5312.008.patch

Minor clean up.

> Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured 
> http policy
> 
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch, HDFS-5312.006.patch, HDFS-5312.007.patch, 
> HDFS-5312.008.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve HTTPS only, using the 
> HTTPS_ONLY policy.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.
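
As a rough, hypothetical sketch of the two points in that description (not the actual HDFS-5312 patch), a helper that picks the scheme from the configured policy and returns a full URI might look like this:

{code}
// Hedged sketch only: names and policy enum are illustrative, not the patch code.
import java.net.URI;
import java.net.URISyntaxException;

public class InfoServerUriSketch {

  enum HttpPolicy { HTTP_ONLY, HTTPS_ONLY, HTTP_AND_HTTPS }

  // Hypothetical helper: callers get a full URI and never guess the scheme.
  static URI getInfoServerUri(HttpPolicy policy, String httpAddr,
      String httpsAddr) throws URISyntaxException {
    boolean useHttps = (policy == HttpPolicy.HTTPS_ONLY);
    String scheme = useHttps ? "https" : "http";
    String authority = useHttps ? httpsAddr : httpAddr;
    return new URI(scheme + "://" + authority);
  }

  public static void main(String[] args) throws URISyntaxException {
    System.out.println(
        getInfoServerUri(HttpPolicy.HTTPS_ONLY, "nn1:50070", "nn1:50470"));
    // prints https://nn1:50470
  }
}
{code}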



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5634) allow BlockReaderLocal to switch between checksumming and not

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841025#comment-13841025
 ] 

Hadoop QA commented on HDFS-5634:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617331/HDFS-5634.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5655//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5655//console

This message is automatically generated.

> allow BlockReaderLocal to switch between checksumming and not
> -
>
> Key: HDFS-5634
> URL: https://issues.apache.org/jira/browse/HDFS-5634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5634.001.patch, HDFS-5634.002.patch
>
>
> BlockReaderLocal should be able to switch between checksumming and 
> non-checksumming, so that when we get notifications that something is mlocked 
> (see HDFS-5182), we can avoid checksumming when reading from that block.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5637) try to refeatchToken while local read InvalidToken occurred

2013-12-05 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841020#comment-13841020
 ] 

Liang Xie commented on HDFS-5637:
-

In particular, [~apurtell], would you mind taking a look at this as well? Thanks. 
I know you have deep HBase security expertise; perhaps you have seen this before 
on region server nodes?

> try to refeatchToken while local read InvalidToken occurred
> ---
>
> Key: HDFS-5637
> URL: https://issues.apache.org/jira/browse/HDFS-5637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, security
>Affects Versions: 2.0.5-alpha, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5637.txt
>
>
> we observed several warning logs like below from region server nodes:
> 2013-12-05,13:22:26,042 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /10.2.201.110:11402 for block, add to deadNodes and continue. 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Construct

[jira] [Updated] (HDFS-5637) try to refeatchToken while local read InvalidToken occurred

2013-12-05 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-5637:


Attachment: HDFS-5637.txt

> try to refeatchToken while local read InvalidToken occurred
> ---
>
> Key: HDFS-5637
> URL: https://issues.apache.org/jira/browse/HDFS-5637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, security
>Affects Versions: 2.0.5-alpha, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5637.txt
>
>
> we observed several warning logs like below from region server nodes:
> 2013-12-05,13:22:26,042 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /10.2.201.110:11402 for block, add to deadNodes and continue. 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(Remot

[jira] [Updated] (HDFS-5637) try to refeatchToken while local read InvalidToken occurred

2013-12-05 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-5637:


Status: Patch Available  (was: Open)

> try to refeatchToken while local read InvalidToken occurred
> ---
>
> Key: HDFS-5637
> URL: https://issues.apache.org/jira/browse/HDFS-5637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, security
>Affects Versions: 2.2.0, 2.0.5-alpha
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5637.txt
>
>
> we observed several warning logs like below from region server nodes:
> 2013-12-05,13:22:26,042 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /10.2.201.110:11402 for block, add to deadNodes and continue. 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteExce

[jira] [Commented] (HDFS-5637) try to refeatchToken while local read InvalidToken occurred

2013-12-05 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841017#comment-13841017
 ] 

Liang Xie commented on HDFS-5637:
-

Our Hadoop version is based on a modified 2.0.0.
The log shows that addToDeadNodes() is triggered when a short-circuit read fails 
because the token expired; we should try to refetch the token first, just as we 
do when InvalidBlockTokenException occurs.
The attached patch is a quick, simple fix that applies the same handling used for 
InvalidBlockTokenException.
Another possible fix would be to unify the short-circuit and normal 
(non-short-circuit) read paths onto one exception.
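
For illustration only, here is a minimal sketch of the retry pattern described above, with hypothetical names rather than the code in HDFS-5637.txt: on an InvalidToken failure from a local (short-circuit) read, refetch the token and retry once instead of adding the datanode to deadNodes.

{code}
// Hedged sketch: hypothetical interfaces, not the actual DFSClient code.
import java.io.IOException;

public class RefetchTokenSketch {

  static class InvalidTokenException extends IOException {}

  interface BlockReaderFactory {
    Object newLocalReader() throws IOException;  // may throw InvalidTokenException
    void refetchToken() throws IOException;      // fetch a fresh block token
  }

  static Object readWithRetry(BlockReaderFactory factory) throws IOException {
    try {
      return factory.newLocalReader();
    } catch (InvalidTokenException e) {
      // Token expired: refetch once and retry instead of marking the node dead.
      factory.refetchToken();
      return factory.newLocalReader();
    }
  }
}
{code}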

> try to refeatchToken while local read InvalidToken occurred
> ---
>
> Key: HDFS-5637
> URL: https://issues.apache.org/jira/browse/HDFS-5637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, security
>Affects Versions: 2.0.5-alpha, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5637.txt
>
>
> we observed several warning logs like below from region server nodes:
> 2013-12-05,13:22:26,042 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /10.2.201.110:11402 for block, add to deadNodes and continue. 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> s

[jira] [Created] (HDFS-5637) try to refeatchToken while local read InvalidToken occurred

2013-12-05 Thread Liang Xie (JIRA)
Liang Xie created HDFS-5637:
---

 Summary: try to refeatchToken while local read InvalidToken 
occurred
 Key: HDFS-5637
 URL: https://issues.apache.org/jira/browse/HDFS-5637
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, security
Affects Versions: 2.2.0, 2.0.5-alpha
Reporter: Liang Xie
Assignee: Liang Xie


we observed several warning logs like below from region server nodes:

2013-12-05,13:22:26,042 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
connect to /10.2.201.110:11402 for block, add to deadNodes and continue. 
org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
blockId=-190217754078101701, access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
at 
org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
at 
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)

org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
blockId=-190217754078101701, access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
at 
org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
at 
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
at 
org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:771)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
 

[jira] [Updated] (HDFS-5629) Support HTTPS in JournalNode

2013-12-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5629:
-

Attachment: HDFS-5629.000.patch

> Support HTTPS in JournalNode
> 
>
> Key: HDFS-5629
> URL: https://issues.apache.org/jira/browse/HDFS-5629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5629.000.patch
>
>
> Currently JournalNode has HTTP support only. This jira tracks the effort to 
> add HTTPS support to JournalNode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5590) Block ID and generation stamp may be reused when persistBlocks is set to false

2013-12-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840988#comment-13840988
 ] 

Suresh Srinivas commented on HDFS-5590:
---

bq.  Do I understand correctly that removal of this parameter will be affecting 
downstream tools like CM and Ambari, but we don't care?

As indicated in the comments above, this is an undocumented configuration. It was 
only used within the HDFS code as an optimization, which we now know causes data 
loss. 

This should not cause issues for downstream tools. 

> Block ID and generation stamp may be reused when persistBlocks is set to false
> --
>
> Key: HDFS-5590
> URL: https://issues.apache.org/jira/browse/HDFS-5590
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5590.000.patch, HDFS-5590.001.patch
>
>
> In a cluster with a non-HA setup and dfs.persist.blocks set to false, we may 
> have data loss in the following case:
> # client creates file1, requests a block from the NN, and gets blk_id1_gs1
> # client writes blk_id1_gs1 to a DN
> # the NN is restarted and, because persistBlocks is false, blk_id1_gs1 may not 
> be persisted to disk
> # another client creates file2 and the NN allocates a new block using the 
> same block id blk_id1_gs1, since block ID and generation stamp are both 
> increased sequentially.
> Now we may have two versions (file1 and file2) of blk_id1_gs1 (same id, 
> same gs) in the system. This will cause data loss.
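
As a toy illustration of that scenario (this is not NameNode code), a sequential allocator whose counter is not persisted across a restart hands out the same ID twice:

{code}
// Hedged sketch: toy allocator only, to show why unpersisted sequential IDs collide.
public class BlockIdReuseSketch {
  static class Allocator {
    long lastId;
    Allocator(long persisted) { this.lastId = persisted; }
    long allocate() { return ++lastId; }
  }

  public static void main(String[] args) {
    Allocator nn = new Allocator(1000);
    long blkForFile1 = nn.allocate();              // 1001, written to a DN

    // Restart with persistBlocks=false: the allocation was never persisted,
    // so the counter comes back at its old value.
    Allocator restartedNn = new Allocator(1000);
    long blkForFile2 = restartedNn.allocate();     // 1001 again -> collision

    System.out.println(blkForFile1 == blkForFile2); // true
  }
}
{code}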



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5312) Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured http policy

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840986#comment-13840986
 ] 

Hadoop QA commented on HDFS-5312:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617312/HDFS-5312.007.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5654//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5654//console

This message is automatically generated.

> Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured 
> http policy
> 
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch, HDFS-5312.006.patch, HDFS-5312.007.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve HTTPS only, using the 
> HTTPS_ONLY policy.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5431) support cachepool-based quota management in path-based caching

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840982#comment-13840982
 ] 

Hadoop QA commented on HDFS-5431:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617310/hdfs-5431-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5653//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5653//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5653//console

This message is automatically generated.

> support cachepool-based quota management in path-based caching
> --
>
> Key: HDFS-5431
> URL: https://issues.apache.org/jira/browse/HDFS-5431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
> Attachments: hdfs-5431-1.patch
>
>
> We should support cachepool-based quota management in path-based caching.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5590) Block ID and generation stamp may be reused when persistBlocks is set to false

2013-12-05 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840967#comment-13840967
 ] 

Konstantin Boudnik commented on HDFS-5590:
--

Do I understand correctly that removal of this parameter will be affecting 
downstream tools like CM and Ambari, but we don't care?

> Block ID and generation stamp may be reused when persistBlocks is set to false
> --
>
> Key: HDFS-5590
> URL: https://issues.apache.org/jira/browse/HDFS-5590
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5590.000.patch, HDFS-5590.001.patch
>
>
> In a cluster with a non-HA setup and dfs.persist.blocks set to false, we may 
> have data loss in the following case:
> # client creates file1, requests a block from the NN, and gets blk_id1_gs1
> # client writes blk_id1_gs1 to a DN
> # the NN is restarted and, because persistBlocks is false, blk_id1_gs1 may not 
> be persisted to disk
> # another client creates file2 and the NN allocates a new block using the 
> same block id blk_id1_gs1, since block ID and generation stamp are both 
> increased sequentially.
> Now we may have two versions (file1 and file2) of blk_id1_gs1 (same id, 
> same gs) in the system. This will cause data loss.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5634) allow BlockReaderLocal to switch between checksumming and not

2013-12-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5634:
---

Attachment: HDFS-5634.002.patch

* fix findbugs warnings

* fix bug in {{BlockReaderLocal#skip}}

* fix bug causing mmaps to be granted even without checksum-skipping in effect

> allow BlockReaderLocal to switch between checksumming and not
> -
>
> Key: HDFS-5634
> URL: https://issues.apache.org/jira/browse/HDFS-5634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5634.001.patch, HDFS-5634.002.patch
>
>
> BlockReaderLocal should be able to switch between checksumming and 
> non-checksumming, so that when we get notifications that something is mlocked 
> (see HDFS-5182), we can avoid checksumming when reading from that block.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4201) NPE in BPServiceActor#sendHeartBeat

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840950#comment-13840950
 ] 

Hadoop QA commented on HDFS-4201:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617278/trunk-4201.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestMetaSave
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
  org.apache.hadoop.fs.TestFcHdfsCreateMkdir
  org.apache.hadoop.hdfs.TestSetTimes
  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
  
org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
  org.apache.hadoop.hdfs.TestDFSStartupVersions
  org.apache.hadoop.fs.TestUrlStreamHandler
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
  org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
  org.apache.hadoop.fs.TestSymlinkHdfsDisable
  org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
  org.apache.hadoop.fs.TestGlobPaths
  
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReplication
  
org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner
  org.apache.hadoop.hdfs.server.namenode.TestINodeFile
  
org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
  org.apache.hadoop.hdfs.TestDFSShellGenericOptions
  org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
  org.apache.hadoop.hdfs.TestDatanodeDeath
  org.apache.hadoop.hdfs.TestFileAppend2
  
org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
  org.apache.hadoop.hdfs.TestDFSPermission
  org.apache.hadoop.hdfs.TestFileAppendRestart
  org.apache.hadoop.hdfs.TestDataTransferProtocol
  org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
  org.apache.hadoop.hdfs.server.namenode.TestFileLimit
  org.apache.hadoop.fs.TestSymlinkHdfsFileContext
  org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
  org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
  org.apache.hadoop.fs.TestEnhancedByteBufferAccess
  org.apache.hadoop.hdfs.TestDFSUpgrade
  org.apache.hadoop.hdfs.TestIsMethodSupported
  org.apache.hadoop.fs.TestHDFSFileContextMainOperations
  org.apache.hadoop.hdfs.server.datanode.TestBlockReport
  org.apache.hadoop.hdfs.TestParallelUnixDomainRead
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
  org.apache.hadoop.hdfs.server.namenode.TestParallelImageWrite
  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
  org.apache.hadoop.hdfs.web.TestHttpsFileSystem
  org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
  org.apache.hadoop.hdfs.TestClientReportBadBlock
  org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean
  org.apache.hadoop.hdfs.server.namenode.TestCreateEditsLog
  org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
  
org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
  
org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork
  o

[jira] [Created] (HDFS-5636) Enforce a max TTL per cache pool

2013-12-05 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-5636:
-

 Summary: Enforce a max TTL per cache pool
 Key: HDFS-5636
 URL: https://issues.apache.org/jira/browse/HDFS-5636
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching, namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang


It'd be nice for administrators to be able to specify a maximum TTL for 
directives in a cache pool. This forces all directives to eventually age out.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840860#comment-13840860
 ] 

Andrew Wang commented on HDFS-4983:
---

Looks good Yongjun, +1 once this little nit is addressed:

{code}
+  static {
+  setUserPattern(DFS_WEBHDFS_USER_PATTERN_DEFAULT);
+  }
{code}

Indentation here should be 2 spaces, not 4.

I'll wait a day or two before committing to give Jing and Haohui time to review.

> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch, HDFS-4983.004.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" seems to fail for some reason 
> (tried on insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}
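
To make the mismatch concrete, here is a small sketch checking the pattern above against a numeric username; the relaxed pattern shown is only an example, not necessarily what the patch adopts:

{code}
// Hedged sketch: demonstrates the regex behavior described above.
import java.util.regex.Pattern;

public class UserPatternSketch {
  public static void main(String[] args) {
    Pattern strict = Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");
    Pattern relaxed = Pattern.compile("^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");

    // The default pattern requires a leading letter or underscore, so a
    // purely numeric username is rejected.
    System.out.println(strict.matcher("123").matches());   // false
    System.out.println(relaxed.matcher("123").matches());  // true
  }
}
{code}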



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5312) Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured http policy

2013-12-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5312:
-

Attachment: HDFS-5312.007.patch

> Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured 
> http policy
> 
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch, HDFS-5312.006.patch, HDFS-5312.007.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve HTTPS only, using the 
> HTTPS_ONLY policy.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5431) support cachepool-based quota management in path-based caching

2013-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5431:
--

Attachment: hdfs-5431-1.patch

Patch attached. This implements quotas, and a few other things:

* Quotas are based on the demand, not the current amount of cached data. 
They're enforced at scan time (a rough sketch of the check is below).
* Introduce two new CachePool fields, quota and reservation. Only quota is used 
for now; reservation and weight are kept in the fsimage and edit log so we can 
later implement them without a metadata upgrade, but they are not present in the 
proto format, so clients can't actually set them.
* Made the CacheManager success/fail log messages for pools consistent with 
directives, so we now always see an NN log message, which is nice for 
debugging.
* Some misc CacheAdmin cleanups on top of removing weight and adding quota
* Some TestCacheDirective cleanups, basically unifying the setup/teardown so we 
aren't always restarting a new cluster. Might help to use "git diff -w" to view 
this.
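For illustration, a rough, hypothetical sketch of the scan-time quota check 
described above; the names are illustrative, not the actual CacheManager internals:

{code}
// Hypothetical sketch only: a pool's quota caps demand (the bytes its
// directives want cached), not the bytes cached so far; checked during the scan.
static boolean fitsInQuota(long poolDemandBytes, long directiveBytes,
    long poolQuotaBytes) {
  // In this sketch a non-positive quota means "unlimited".
  if (poolQuotaBytes <= 0) {
    return true;
  }
  return poolDemandBytes + directiveBytes <= poolQuotaBytes;
}
{code}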

> support cachepool-based quota management in path-based caching
> --
>
> Key: HDFS-5431
> URL: https://issues.apache.org/jira/browse/HDFS-5431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
> Attachments: hdfs-5431-1.patch
>
>
> We should support cachepool-based quota management in path-based caching.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5431) support cachepool-based quota management in path-based caching

2013-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5431:
--

Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

I'll also note in advance that the OEV test will fail, but I ran it 
successfully after fixing up the input files:

{noformat}
---
 T E S T S
---
Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.551 sec - in 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

Results :

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 32.629s
[INFO] Finished at: Thu Dec 05 18:17:14 PST 2013
[INFO] Final Memory: 30M/511M
[INFO] 
{noformat}

> support cachepool-based quota management in path-based caching
> --
>
> Key: HDFS-5431
> URL: https://issues.apache.org/jira/browse/HDFS-5431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
> Attachments: hdfs-5431-1.patch
>
>
> We should support cachepool-based quota management in path-based caching.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5590) Block ID and generation stamp may be reused when persistBlocks is set to false

2013-12-05 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5590:


  Resolution: Fixed
   Fix Version/s: 2.3.0
Target Version/s:   (was: 2.4.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2 and branch-2.3. Thanks for the review 
Suresh, Arpit, and Konstantin.

> Block ID and generation stamp may be reused when persistBlocks is set to false
> --
>
> Key: HDFS-5590
> URL: https://issues.apache.org/jira/browse/HDFS-5590
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5590.000.patch, HDFS-5590.001.patch
>
>
> In a cluster with non-HA setup and dfs.persist.blocks set to false, we may 
> have data loss in the following case:
> # client creates file1 and requests a block from NN and get blk_id1_gs1
> # client writes blk_id1_gs1 to DN
> # NN is restarted and because persistBlocks is false, blk_id1_gs1 may not be 
> persisted on disk
> # another client creates file2 and NN will allocate a new block using the 
> same block id blk_id1_gs1 since block ID and generation stamp are both 
> increased sequentially.
> Now we may have two versions (file1 and file2) of the blk_id1_gs1 (same id, 
> same gs) in the system. It will cause data loss.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5590) Block ID and generation stamp may be reused when persistBlocks is set to false

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840826#comment-13840826
 ] 

Hudson commented on HDFS-5590:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4844 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4844/])
HDFS-5590. Block ID and generation stamp may be reused when persistBlocks is 
set to false. Contributed by Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548368)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPersistBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java


> Block ID and generation stamp may be reused when persistBlocks is set to false
> --
>
> Key: HDFS-5590
> URL: https://issues.apache.org/jira/browse/HDFS-5590
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5590.000.patch, HDFS-5590.001.patch
>
>
> In a cluster with non-HA setup and dfs.persist.blocks set to false, we may 
> have data loss in the following case:
> # client creates file1 and requests a block from NN and get blk_id1_gs1
> # client writes blk_id1_gs1 to DN
> # NN is restarted and because persistBlocks is false, blk_id1_gs1 may not be 
> persisted on disk
> # another client creates file2 and NN will allocate a new block using the 
> same block id blk_id1_gs1 since block ID and generation stamp are both 
> increased sequentially.
> Now we may have two versions (file1 and file2) of the blk_id1_gs1 (same id, 
> same gs) in the system. It will cause data loss.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5339) WebHDFS URI does not accept logical nameservices when security is enabled

2013-12-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840813#comment-13840813
 ] 

Haohui Mai commented on HDFS-5339:
--

What happens is that the token selector tries to resolve the logical name when 
calling SecurityUtil#buildTokenService(). A temporary workaround is to set 
hadoop.security.token.service.use_ip to false.
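
For reference, a minimal client-side illustration of that workaround (the property 
can also be set in core-site.xml; Configuration here is 
org.apache.hadoop.conf.Configuration):

{code}
// Disable IP-based token service names so the logical nameservice "ns1"
// does not need to resolve to an IP when building the token service.
Configuration conf = new Configuration();
conf.setBoolean("hadoop.security.token.service.use_ip", false);
FileSystem fs = FileSystem.get(URI.create("webhdfs://ns1/"), conf);
{code}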

> WebHDFS URI does not accept logical nameservices when security is enabled
> -
>
> Key: HDFS-5339
> URL: https://issues.apache.org/jira/browse/HDFS-5339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Haohui Mai
>
> On an insecure, HA setup, we see that this works:
> {code}
> [jenkins@hdfs-cdh5-ha-1 ~]$ hdfs dfs -ls webhdfs://ns1/
> 13/09/27 15:23:52 INFO web.WebHdfsFileSystem: Retrying connect to namenode: 
> hdfs-cdh5-ha-1.ent.cloudera.com/10.20.190.104:20101. Already tried 0 time(s); 
> retry policy is 
> org.apache.hadoop.io.retry.RetryPolicies$FailoverOnNetworkExceptionRetry@5ebc404e,
>  delay 0ms.
> Found 5 items
> drwxr-xr-x   - hbase hbase   0 2013-09-23 09:04 
> webhdfs://ns1/hbase
> drwxrwxr-x   - solr  solr0 2013-09-18 12:07 webhdfs://ns1/solr
> drwxr-xr-x   - hdfs  supergroup  0 2013-09-19 11:09 
> webhdfs://ns1/system
> drwxrwxrwt   - hdfs  supergroup  0 2013-09-18 16:25 webhdfs://ns1/tmp
> drwxr-xr-x   - hdfs  supergroup  0 2013-09-18 15:53 webhdfs://ns1/user
> [jenkins@hdfs-cdh5-ha-1 ~]$
> {code}
> However, when security is enabled, we get the following error:
> {code}
> [jenkins@hdfs-cdh5-ha-secure-1 ~]$ hdfs dfs -ls webhdfs://ns1/
> -ls: java.net.UnknownHostException: ns1
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> [jenkins@hdfs-cdh5-ha-secure-1 ~]$
> {code}
> I verified that we can use the hdfs://ns1/ URI on the cluster where I see the 
> problem.
> Also, I verified on a secure, non-HA cluster that we can use the webhdfs uri 
> in secure mode:
> {code}
> [jenkins@hdfs-cdh5-secure-1 ~]$ hdfs dfs -ls 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/
> drwxr-xr-x   - hbase hbase   0 2013-09-25 10:33 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/hbase
> drwxrwxr-x   - solr  solr0 2013-09-25 10:34 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/solr
> drwxrwxrwt   - hdfs  supergroup  0 2013-09-25 10:39 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/tmp
> drwxr-xr-x   - hdfs  supergroup  0 2013-09-25 11:00 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/user
> [jenkins@hdfs-cdh5-secure-1 ~]$
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5339) WebHDFS URI does not accept logical nameservices when security is enabled

2013-12-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai reassigned HDFS-5339:


Assignee: Haohui Mai

> WebHDFS URI does not accept logical nameservices when security is enabled
> -
>
> Key: HDFS-5339
> URL: https://issues.apache.org/jira/browse/HDFS-5339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Haohui Mai
>
> On an insecure, HA setup, we see that this works:
> {code}
> [jenkins@hdfs-cdh5-ha-1 ~]$ hdfs dfs -ls webhdfs://ns1/
> 13/09/27 15:23:52 INFO web.WebHdfsFileSystem: Retrying connect to namenode: 
> hdfs-cdh5-ha-1.ent.cloudera.com/10.20.190.104:20101. Already tried 0 time(s); 
> retry policy is 
> org.apache.hadoop.io.retry.RetryPolicies$FailoverOnNetworkExceptionRetry@5ebc404e,
>  delay 0ms.
> Found 5 items
> drwxr-xr-x   - hbase hbase   0 2013-09-23 09:04 
> webhdfs://ns1/hbase
> drwxrwxr-x   - solr  solr0 2013-09-18 12:07 webhdfs://ns1/solr
> drwxr-xr-x   - hdfs  supergroup  0 2013-09-19 11:09 
> webhdfs://ns1/system
> drwxrwxrwt   - hdfs  supergroup  0 2013-09-18 16:25 webhdfs://ns1/tmp
> drwxr-xr-x   - hdfs  supergroup  0 2013-09-18 15:53 webhdfs://ns1/user
> [jenkins@hdfs-cdh5-ha-1 ~]$
> {code}
> However, when security is enabled, we get the following error:
> {code}
> [jenkins@hdfs-cdh5-ha-secure-1 ~]$ hdfs dfs -ls webhdfs://ns1/
> -ls: java.net.UnknownHostException: ns1
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> [jenkins@hdfs-cdh5-ha-secure-1 ~]$
> {code}
> I verified that we can use the hdfs://ns1/ URI on the cluster where I see the 
> problem.
> Also, I verified on a secure, non-HA cluster that we can use the webhdfs uri 
> in secure mode:
> {code}
> [jenkins@hdfs-cdh5-secure-1 ~]$ hdfs dfs -ls 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/
> drwxr-xr-x   - hbase hbase   0 2013-09-25 10:33 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/hbase
> drwxrwxr-x   - solr  solr0 2013-09-25 10:34 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/solr
> drwxrwxrwt   - hdfs  supergroup  0 2013-09-25 10:39 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/tmp
> drwxr-xr-x   - hdfs  supergroup  0 2013-09-25 11:00 
> webhdfs://hdfs-cdh5-secure-1.ent.cloudera.com:20101/user
> [jenkins@hdfs-cdh5-secure-1 ~]$
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840806#comment-13840806
 ] 

Hudson commented on HDFS-5633:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4842 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4842/])
HDFS-5633. Improve OfflineImageViewer to use less memory. Contributed by Jing 
Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548359)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java


> Improve OfflineImageViewer to use less memory
> -
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: HDFS-5633.000.patch
>
>
> Currently after we rename a file/dir which is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from FSImage, we use a temporary 
> map to record whether we have visited this inode before.
> However, in OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5312) Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured http policy

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840803#comment-13840803
 ] 

Hadoop QA commented on HDFS-5312:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617255/HDFS-5312.006.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5651//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5651//console

This message is automatically generated.

> Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured 
> http policy
> 
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch, HDFS-5312.006.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve only HTTPS when the 
> HTTPS_ONLY policy is configured.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-05 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5633:


   Resolution: Fixed
Fix Version/s: 2.4.0
   Status: Resolved  (was: Patch Available)

Thanks for the review, Nicholas! I've committed this to trunk and branch-2.

> Improve OfflineImageViewer to use less memory
> -
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: HDFS-5633.000.patch
>
>
> Currently after we rename a file/dir which is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from FSImage, we use a temporary 
> map to record whether we have visited this inode before.
> However, in OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840784#comment-13840784
 ] 

Hadoop QA commented on HDFS-5023:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617252/HDFS-5023.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5650//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5650//console

This message is automatically generated.

> TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2
> ---
>
> Key: HDFS-5023
> URL: https://issues.apache.org/jira/browse/HDFS-5023
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots, test
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Mit Desai
>  Labels: test
> Attachments: HDFS-5023.patch, 
> TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
> org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt
>
>
> The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840756#comment-13840756
 ] 

Hadoop QA commented on HDFS-4983:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617248/HDFS-4983.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5649//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5649//console

This message is automatically generated.

> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch, HDFS-4983.004.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" seems to fail for some reason 
> (tried on insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-4201) NPE in BPServiceActor#sendHeartBeat

2013-12-05 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HDFS-4201:
--

Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> NPE in BPServiceActor#sendHeartBeat
> ---
>
> Key: HDFS-4201
> URL: https://issues.apache.org/jira/browse/HDFS-4201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Eli Collins
>Assignee: Jimmy Xiang
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: trunk-4201.patch
>
>
> Saw the following NPE in a log.
> Think this is likely due to {{dn}} or {{dn.getFSDataset()}} being null (not 
> {{bpRegistration}}), due to a configuration or local directory failure.
> {code}
> 2012-09-25 04:33:20,782 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> For namenode svsrs00127/11.164.162.226:8020 using DELETEREPORT_INTERVAL of 
> 30 msec  BLOCKREPORT_INTERVAL of 2160msec Initial delay: 0msec; 
> heartBeatInterval=3000
> 2012-09-25 04:33:20,782 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
> for Block pool BP-1678908700-11.164.162.226-1342785481826 (storage id 
> DS-1031100678-11.164.162.251-5010-1341933415989) service to 
> svsrs00127/11.164.162.226:8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:434)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:520)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
> at java.lang.Thread.run(Thread.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-4201) NPE in BPServiceActor#sendHeartBeat

2013-12-05 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HDFS-4201:
--

Attachment: trunk-4201.patch

Attached a patch that fixes DataNode initBlockPool error handling. With this 
fix, if BPOfferService fails to connect to one NN, it can still connect to the 
other NNs, if any, without throwing an NPE.
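
A hypothetical sketch of the kind of guard this implies (not the actual contents 
of trunk-4201.patch; DataNode is the existing 
org.apache.hadoop.hdfs.server.datanode.DataNode):

{code}
// Hypothetical sketch only; not the code from trunk-4201.patch.
// If initBlockPool failed for this namenode, the DataNode's dataset may be
// null, so the actor should bail out instead of hitting an NPE in
// sendHeartBeat() and losing service to the other namenodes.
static boolean readyToHeartbeat(DataNode dn) {
  return dn != null && dn.getFSDataset() != null;
}
{code}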

> NPE in BPServiceActor#sendHeartBeat
> ---
>
> Key: HDFS-4201
> URL: https://issues.apache.org/jira/browse/HDFS-4201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Eli Collins
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: trunk-4201.patch
>
>
> Saw the following NPE in a log.
> Think this is likely due to {{dn}} or {{dn.getFSDataset()}} being null (not 
> {{bpRegistration}}), due to a configuration or local directory failure.
> {code}
> 2012-09-25 04:33:20,782 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> For namenode svsrs00127/11.164.162.226:8020 using DELETEREPORT_INTERVAL of 
> 30 msec  BLOCKREPORT_INTERVAL of 2160msec Initial delay: 0msec; 
> heartBeatInterval=3000
> 2012-09-25 04:33:20,782 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
> for Block pool BP-1678908700-11.164.162.226-1342785481826 (storage id 
> DS-1031100678-11.164.162.251-5010-1341933415989) service to 
> svsrs00127/11.164.162.226:8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:434)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:520)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
> at java.lang.Thread.run(Thread.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-05 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5633:
-

Hadoop Flags: Reviewed

+1

> Improve OfflineImageViewer to use less memory
> -
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-5633.000.patch
>
>
> Currently after we rename a file/dir which is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from FSImage, we use a temporary 
> map to record whether we have visited this inode before.
> However, in OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4201) NPE in BPServiceActor#sendHeartBeat

2013-12-05 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840679#comment-13840679
 ] 

Jimmy Xiang commented on HDFS-4201:
---

HDFS-4442 is the same as this one. The root cause is that dn.getFSDataset() 
is null. I will fix this issue and add a test case.

> NPE in BPServiceActor#sendHeartBeat
> ---
>
> Key: HDFS-4201
> URL: https://issues.apache.org/jira/browse/HDFS-4201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Eli Collins
>Assignee: Jimmy Xiang
>Priority: Critical
>
> Saw the following NPE in a log.
> Think this is likely due to {{dn}} or {{dn.getFSDataset()}} being null (not 
> {{bpRegistration}}), due to a configuration or local directory failure.
> {code}
> 2012-09-25 04:33:20,782 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> For namenode svsrs00127/11.164.162.226:8020 using DELETEREPORT_INTERVAL of 
> 30 msec  BLOCKREPORT_INTERVAL of 2160msec Initial delay: 0msec; 
> heartBeatInterval=3000
> 2012-09-25 04:33:20,782 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
> for Block pool BP-1678908700-11.164.162.226-1342785481826 (storage id 
> DS-1031100678-11.164.162.251-5010-1341933415989) service to 
> svsrs00127/11.164.162.226:8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:434)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:520)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
> at java.lang.Thread.run(Thread.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5312) Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured http policy

2013-12-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5312:
-

Summary: Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the 
configured http policy  (was: Refactor DFSUtil#getInfoServer to return an URI)

> Generate HTTP / HTTPS URL in DFSUtil#getInfoServer() based on the configured 
> http policy
> 
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch, HDFS-5312.006.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve only HTTPS when the 
> HTTPS_ONLY policy is configured.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5628) Some namenode servlets should not be internal.

2013-12-05 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840676#comment-13840676
 ] 

Kihwal Lee commented on HDFS-5628:
--

The main things are FsckServlet and the token servlets. I do not care much about 
the others, but I was going to do them for completeness. We are experimenting 
with a change in the Kerberos auth handler, which will require no change in HDFS. 
I think we will move this jira to the common project and update the title 
eventually.

> Some namenode servlets should not be internal.
> --
>
> Key: HDFS-5628
> URL: https://issues.apache.org/jira/browse/HDFS-5628
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Kihwal Lee
>
> This is the list of internal servlets added by namenode.
> | Name | Auth | Need to be accessible by end users |
> | StartupProgressServlet | none | no |
> | GetDelegationTokenServlet | internal SPNEGO | yes |
> | RenewDelegationTokenServlet | internal SPNEGO | yes |
> |  CancelDelegationTokenServlet | internal SPNEGO | yes |
> |  FsckServlet | internal SPNEGO | yes |
> |  GetImageServlet | internal SPNEGO | no |
> |  ListPathsServlet | token in query | yes |
> |  FileDataServlet | token in query | yes |
> |  FileChecksumServlets | token in query | yes |
> | ContentSummaryServlet | token in query | yes |
> GetDelegationTokenServlet, RenewDelegationTokenServlet, 
> CancelDelegationTokenServlet and FsckServlet are accessed by end users, but 
> hard-coded to use the internal SPNEGO filter.
> If a name node HTTP server binds to multiple external IP addresses, the 
> internal SPNEGO service principal name may not work with an address to which 
> end users are connecting.  The current SPNEGO implementation in Hadoop is 
> limited to using a single service principal per filter.
> If the underlying hadoop kerberos authentication handler cannot easily be 
> modified, we can at least create a separate auth filter for the end-user 
> facing servlets so that their service principals can be independently 
> configured. If not defined, it should fall back to the current behavior.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-4442) Initialization failed for block (...) Invalid volume failure config value: 1

2013-12-05 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang resolved HDFS-4442.
---

Resolution: Duplicate
  Assignee: Jimmy Xiang

Yes, that's a duplicate of HDFS-4201. Let me close this one and work on 
HDFS-4201.

> Initialization failed for block (...) Invalid volume failure  config value: 1
> -
>
> Key: HDFS-4442
> URL: https://issues.apache.org/jira/browse/HDFS-4442
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: Amazon Linux (Centos 6), Cloudera nightly RPMs
>Reporter: Alexandre Fouché
>Assignee: Jimmy Xiang
>  Labels: datanode, hdfs
>
> (Note: some of the messages are similar to those in HDFS-4201.)
> Just after I created a new HDFS cluster, this time using the Cloudera nightly 
> RPM hadoop-hdfs-datanode-2.0.0+898-1.cdh4.2.0.p0.939.el6.x86_64, the HDFS 
> datanodes were unable to initialize or store anything. They stay alive, but 
> keep logging exceptions every few seconds.
> It was "Initialization failed for block pool Block pool (...)" 
> "org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid volume 
> failure  config value: 1" and then repeatedly "Exception in BPOfferService 
> for Block pool (...)"
> My config was :
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <value>file:///opt/hadoop/dn1/data</value>
> </property>
> After a bit of tweaking, it worked once I added a second EBS volume to the 
> node. Yet that does not explain the initial error. A bug?
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <value>file:///opt/hadoop/dn1/data,file:///opt/hadoop/dn2/data</value>
> </property>
> Original exceptions:
> {code}
> (...)
> 2013-01-25 15:04:28,573 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-1342054845-10.118.50.25-1359125000145 directory 
> /opt/hadoop/dn1/data/current/BP-1342054845-10.118.50.25-1359125000145/current
> 2013-01-25 15:04:28,581 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Setting up storage: 
> nsid=1786416716;bpid=BP-1342054845-10.118.50.25-1359125000145;lv=-40;nsInfo=lv=-40;cid=CID-3c2cfe5f-da56-4115-90db-81e06c14bc50;nsid=1786416716;c=0;bpid=BP-1342054845-10.118.50.25-1359125000145
> 2013-01-25 15:04:28,601 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool BP-1342054845-10.118.50.25-1359125000145 (storage id 
> DS-404982471-10.194.189.193-50010-1359126268221) service to 
> namenode2.somedomain.com/10.2.118.169:8020 beginning handshake with NN
> 2013-01-25 15:04:28,605 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-1342054845-10.118.50.25-1359125000145 (storage id 
> DS-404982471-10.194.189.193-50010-1359126268221) service to 
> namenode1.somedomain.com/10.118.50.25:8020
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid volume failure 
>  config value: 1
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.(FsDatasetImpl.java:182)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:910)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:872)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:308)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:218)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:660)
> at java.lang.Thread.run(Unknown Source)
> 2013-01-25 15:04:28,702 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool Block pool BP-1342054845-10.118.50.25-1359125000145 (storage id 
> DS-404982471-10.194.189.193-50010-1359126268221) service to 
> namenode2.somedomain.com/10.2.118.169:8020 successfully registered with NN
> 2013-01-25 15:04:28,863 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> For namenode namenode2.somedomain.com/10.2.118.169:8020 using 
> DELETEREPORT_INTERVAL of 30 msec  BLOCKREPORT_INTERVAL of 2160msec 
> Initial delay: 0msec; heartBeatInterval=3000
> 2013-01-25 15:04:28,864 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
> for Block pool BP-1342054845-10.118.50.25-1359125000145 (storage id 
> DS-404982471-10.194.189.193-50010-1359126268221) service to 
> namenode2.somedomain.com/10.2.118.169:8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:435)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BP

[jira] [Commented] (HDFS-5634) allow BlockReaderLocal to switch between checksumming and not

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840667#comment-13840667
 ] 

Hadoop QA commented on HDFS-5634:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617232/HDFS-5634.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestEnhancedByteBufferAccess

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5648//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5648//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5648//console

This message is automatically generated.

> allow BlockReaderLocal to switch between checksumming and not
> -
>
> Key: HDFS-5634
> URL: https://issues.apache.org/jira/browse/HDFS-5634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5634.001.patch
>
>
> BlockReaderLocal should be able to switch between checksumming and 
> non-checksumming, so that when we get notifications that something is mlocked 
> (see HDFS-5182), we can avoid checksumming when reading from that block.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (HDFS-5570) Deprecate hftp / hsftp and replace them with webhdfs / swebhdfs

2013-12-05 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840665#comment-13840665
 ] 

Kihwal Lee edited comment on HDFS-5570 at 12/5/13 11:06 PM:


GetDelegationTokenServlet, RenewDelegationTokenServlet and 
CancelDelegationTokenServlet are also used by the {{hdfs fetchdt}} command. 


was (Author: kihwal):
GetDelegationTokenServlet is also used by {{hdfs fetchdt}} command. 

> Deprecate hftp / hsftp and replace them with webhdfs / swebhdfs
> ---
>
> Key: HDFS-5570
> URL: https://issues.apache.org/jira/browse/HDFS-5570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5570.000.patch
>
>
> Currently hftp / hsftp only provide a strict subset of functionality that 
> webhdfs / swebhdfs offer. Notably, hftp / hsftp do not support writes and HA 
> namenodes. Maintaining two pieces of code with similar functionality introduces 
> unnecessary work.
> Webhdfs has been around since Hadoop 1.0, so moving forward with 
> webhdfs does not seem to cause any significant migration issues.
> This jira proposes to deprecate hftp / hsftp in branch-2 and remove them in 
> trunk.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5570) Deprecate hftp / hsftp and replace them with webhdfs / swebhdfs

2013-12-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840674#comment-13840674
 ] 

Haohui Mai commented on HDFS-5570:
--

This patch also changes fetchdt to go through webhdfs, as webhdfs is now 
enabled by default.

> Deprecate hftp / hsftp and replace them with webhdfs / swebhdfs
> ---
>
> Key: HDFS-5570
> URL: https://issues.apache.org/jira/browse/HDFS-5570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5570.000.patch
>
>
> Currently hftp / hsftp only provide a strict subset of functionality that 
> webhdfs / swebhdfs offer. Notably, hftp / hsftp do not support writes and HA 
> namenodes. Maintaining two pieces of code with similar functionality introduces 
> unnecessary work.
> Webhdfs has been around since Hadoop 1.0, so moving forward with 
> webhdfs does not seem to cause any significant migration issues.
> This jira proposes to deprecate hftp / hsftp in branch-2 and remove them in 
> trunk.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5312) Refactor DFSUtil#getInfoServer to return an URI

2013-12-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840671#comment-13840671
 ] 

Haohui Mai commented on HDFS-5312:
--

The V5 patch should fix HDFS-5627. The V6 patch addresses Jing's comments.

> Refactor DFSUtil#getInfoServer to return an URI
> ---
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch, HDFS-5312.006.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve only HTTPS when the 
> HTTPS_ONLY policy is configured.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5312) Refactor DFSUtil#getInfoServer to return an URI

2013-12-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5312:
-

Attachment: HDFS-5312.006.patch

> Refactor DFSUtil#getInfoServer to return an URI
> ---
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch, HDFS-5312.006.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve only HTTPS when the 
> HTTPS_ONLY policy is configured.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5570) Deprecate hftp / hsftp and replace them with webhdfs / swebhdfs

2013-12-05 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840665#comment-13840665
 ] 

Kihwal Lee commented on HDFS-5570:
--

GetDelegationTokenServlet is also used by the {{hdfs fetchdt}} command. 

> Deprecate hftp / hsftp and replace them with webhdfs / swebhdfs
> ---
>
> Key: HDFS-5570
> URL: https://issues.apache.org/jira/browse/HDFS-5570
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5570.000.patch
>
>
> Currently hftp / hsftp only provide a strict subset of functionality that 
> webhdfs / swebhdfs offer. Notably, hftp / hsftp do not support writes and HA 
> namenodes. Maintaining two pieces of code with similar functionality introduces 
> unnecessary work.
> Webhdfs has been around since Hadoop 1.0, so moving forward with 
> webhdfs does not seem to cause any significant migration issues.
> This jira proposes to deprecate hftp / hsftp in branch-2 and remove them in 
> trunk.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2

2013-12-05 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HDFS-5023:


Status: Patch Available  (was: Open)

> TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2
> ---
>
> Key: HDFS-5023
> URL: https://issues.apache.org/jira/browse/HDFS-5023
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots, test
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Mit Desai
>  Labels: test
> Attachments: HDFS-5023.patch, 
> TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
> org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt
>
>
> The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2

2013-12-05 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HDFS-5023:


Attachment: HDFS-5023.patch

Attaching patch for Trunk/Branch-2

> TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2
> ---
>
> Key: HDFS-5023
> URL: https://issues.apache.org/jira/browse/HDFS-5023
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots, test
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Mit Desai
>  Labels: test
> Attachments: HDFS-5023.patch, 
> TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
> org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt
>
>
> The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-4983:


Attachment: HDFS-4983.004.patch

> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch, HDFS-4983.004.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" seems to fail for some reason 
> (tried on insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840653#comment-13840653
 ] 

Yongjun Zhang commented on HDFS-4983:
-

Many thanks to all. I just uploaded a new version to address the comments.


> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch, HDFS-4983.004.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" seems to fail for some reason 
> (tried on insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5312) Refactor DFSUtil#getInfoServer to return an URI

2013-12-05 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840647#comment-13840647
 ] 

Jing Zhao commented on HDFS-5312:
-

The patch looks good overall. Some comments:
# In DFSUtil#getInfoServer, we may want to throw an IOException/RuntimeException 
  instead of an IllegalArgumentException in the following code (or use 
  URI.create() there; a small sketch follows after these comments):
{code}
+} catch (URISyntaxException e) {
+  throw new IllegalArgumentException(e);
 }
{code}
# Since you touched GetImageServlet.java, you can also clean up the following 
  code, where we do not need to call getServletContext() again.
{code}
  final Configuration conf = 
(Configuration)getServletContext().getAttribute(JspHelper.CURRENT_CONF);
{code}
# In SecondaryNameNode#getInfoServer, the following change will change the 
original behavior:
{code}
-  private String getInfoServer() throws IOException {
+  private URL getInfoServer() throws IOException {
 URI fsName = FileSystem.getDefaultUri(conf);
 if (!HdfsConstants.HDFS_URI_SCHEME.equalsIgnoreCase(fsName.getScheme())) {
   throw new IOException("This is not a DFS");
 }
+InetSocketAddress nnAddr = new InetSocketAddress(fsName.getHost(),
+fsName.getPort());
 
-String configuredAddress = DFSUtil.getInfoServer(null, conf, false);
-String address = DFSUtil.substituteForWildcardAddress(configuredAddress,
-fsName.getHost());
-LOG.debug("Will connect to NameNode at HTTP address: " + address);
+URL address = DFSUtil.getInfoServer(nnAddr, conf,
+DFSUtil.getHttpClientScheme(conf)).toURL();
{code}
The original code reads the http/https address from the configuration first, 
and uses the filesystem default URI as a fallback in case of a wildcard address. 
With the change, the filesystem default URI becomes the first choice.
# Similarly we may want to use the original logic for 
BootstrapStandby#parseConfAndFindOtherNN, and 
StandbyCheckpointer#getHttpAddress(Configuration).
# Please update the jira title since the current patch does more than just 
refactor the code.
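
On point 1, a small illustrative sketch of the two alternatives (not the actual 
DFSUtil code; assumes java.net.URI, java.net.URISyntaxException and 
java.io.IOException):

{code}
// Hypothetical sketch only; names are illustrative.
// Option A: surface a bad URI as a checked IOException.
static URI toInfoServerUri(String scheme, String host, int port)
    throws IOException {
  try {
    return new URI(scheme, null, host, port, null, null, null);
  } catch (URISyntaxException e) {
    throw new IOException("Failed to construct info server URI", e);
  }
}

// Option B: URI.create() already throws an unchecked IllegalArgumentException
// on malformed input, so no explicit catch is needed at this level.
static URI toInfoServerUriUnchecked(String scheme, String host, int port) {
  return URI.create(scheme + "://" + host + ":" + port);
}
{code}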

> Refactor DFSUtil#getInfoServer to return an URI
> ---
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow the NN / DN / JN to serve only HTTPS when the 
> HTTPS_ONLY policy is configured.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-4201) NPE in BPServiceActor#sendHeartBeat

2013-12-05 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang reassigned HDFS-4201:
-

Assignee: Jimmy Xiang

> NPE in BPServiceActor#sendHeartBeat
> ---
>
> Key: HDFS-4201
> URL: https://issues.apache.org/jira/browse/HDFS-4201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Eli Collins
>Assignee: Jimmy Xiang
>Priority: Critical
>
> Saw the following NPE in a log.
> Think this is likely due to {{dn}} or {{dn.getFSDataset()}} being null (not 
> {{bpRegistration}}), due to a configuration or local directory failure.
> {code}
> 2012-09-25 04:33:20,782 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> For namenode svsrs00127/11.164.162.226:8020 using DELETEREPORT_INTERVAL of 
> 30 msec  BLOCKREPORT_INTERVAL of 2160msec Initial delay: 0msec; 
> heartBeatInterval=3000
> 2012-09-25 04:33:20,782 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
> for Block pool BP-1678908700-11.164.162.226-1342785481826 (storage id 
> DS-1031100678-11.164.162.251-5010-1341933415989) service to 
> svsrs00127/11.164.162.226:8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:434)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:520)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
> at java.lang.Thread.run(Thread.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5312) Refactor DFSUtil#getInfoServer to return an URI

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840626#comment-13840626
 ] 

Hadoop QA commented on HDFS-5312:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617221/HDFS-5312.005.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5647//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5647//console

This message is automatically generated.

> Refactor DFSUtil#getInfoServer to return an URI
> ---
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow NN / DN / JN to serve only HTTPS when using the 
> HTTPS_ONLY policy.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5635) Inconsistency in report DFS used %

2013-12-05 Thread Tim Thorpe (JIRA)
Tim Thorpe created HDFS-5635:


 Summary: Inconsistency in report DFS used %
 Key: HDFS-5635
 URL: https://issues.apache.org/jira/browse/HDFS-5635
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tim Thorpe
Priority: Minor


In a single-node cluster, I get a different DFS Used% for the NameNode summary 
and the DataNode section.  Also, the DataNode section doesn't report the 
present capacity.

[biadmin@bdvm317 IHC]$ bin/hadoop dfsadmin -report
Configured Capacity: 23757870694 (22.13 GB)
Present Capacity: 14723592192 (13.71 GB)
DFS Used: 125276160 (119.47 MB)
DFS Used%: 0.85%

-
Datanodes available: 1 (1 total, 0 dead)

Live datanodes:
Configured Capacity: 23757870694 (22.13 GB)
DFS Used: 125276160 (119.47 MB)
DFS Used%: 0.53%
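
For what it's worth, the two percentages look like they are computed against 
different denominators: 125276160 / 14723592192 is roughly 0.85% (the present 
capacity), while 125276160 / 23757870694 is roughly 0.53% (the configured 
capacity). Assuming that reading is right, the summary and the per-datanode 
sections simply divide DFS Used by different bases.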



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840597#comment-13840597
 ] 

Haohui Mai commented on HDFS-4983:
--

[~andrew.wang], this sounds good to me. Let's move forward.

> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" seems to fail for some reason 
> (tried on insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5369) Support negative caching of user-group mapping

2013-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5369:
--

Assignee: (was: Andrew Wang)

> Support negative caching of user-group mapping
> --
>
> Key: HDFS-5369
> URL: https://issues.apache.org/jira/browse/HDFS-5369
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Andrew Wang
>
> We've seen a situation at a couple of our customers where interactions from 
> an unknown user lead to a high rate of group mapping calls. In one case, 
> this was happening at a rate of 450 calls per second with the shell-based 
> group mapping, enough to severely impact overall namenode performance and 
> also leading to large amounts of log spam (prints a stack trace each time).
> Let's consider negative caching of group mapping, as well as quashing the 
> rate of this log message.
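
To sketch the idea (the class, method names, and the 30-second expiry below 
are placeholders, not the actual Hadoop group-mapping API):
{code}
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

/** Rough sketch only; the wiring into the real group mapping is omitted. */
class NegativeCachingGroupMapping {
  // Remember users whose lookup returned nothing, so repeated requests from an
  // unknown user answer cheaply instead of hitting the (expensive) mapping.
  private final Cache<String, Boolean> negativeCache =
      CacheBuilder.newBuilder().expireAfterWrite(30, TimeUnit.SECONDS).build();

  List<String> getGroups(String user) throws IOException {
    if (negativeCache.getIfPresent(user) != null) {
      return Collections.emptyList();          // known-bad user: skip the lookup
    }
    List<String> groups = doExpensiveLookup(user);
    if (groups.isEmpty()) {
      negativeCache.put(user, Boolean.TRUE);   // remember the miss for a while
    }
    return groups;
  }

  private List<String> doExpensiveLookup(String user) throws IOException {
    return Collections.emptyList();            // placeholder for the real mapping call
  }
}
{code}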



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840596#comment-13840596
 ] 

Jing Zhao commented on HDFS-4983:
-

bq. How about Yongjun revs his patch again based on your review feedback, then 
we file another JIRA where we discuss changing the default regex for 
HttpFs/WebHDFS to be more accepting?

Sounds good to me. Let's keep moving and fix the issue first.

> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" seems to fail for some reason 
> (tried on insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840589#comment-13840589
 ] 

Hadoop QA commented on HDFS-5633:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617211/HDFS-5633.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5646//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5646//console

This message is automatically generated.

> Improve OfflineImageViewer to use less memory
> -
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-5633.000.patch
>
>
> Currently after we rename a file/dir which is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from FSImage, we use a temporary 
> map to record whether we have visited this inode before.
> However, in OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840587#comment-13840587
 ] 

Andrew Wang commented on HDFS-4983:
---

Jing, Haohui, thanks for your comments thus far. Is it that big a deal to make 
this configurable though? Personally, I will always choose a configurable value 
with a good default over a hardcoded constant, simply because the kinds of 
issues Harsh is talking about can crop up in production, and tweaking a conf 
option is far better than having to ship a custom build.

How about Yongjun revs his patch again based on your review feedback, then we 
file another JIRA where we discuss changing the default regex for 
HttpFs/WebHDFS to be more accepting? This seems like a good compromise to me.
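
For reference, a small standalone check of why "123" is rejected by the current 
pattern, plus one possible relaxed pattern; the relaxed value is only an 
illustration, not a proposal for the default:
{code}
import java.util.regex.Pattern;

public class UserNamePatternCheck {
  public static void main(String[] args) {
    // Current default: the first character must be a letter or underscore.
    Pattern strict = Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");
    System.out.println(strict.matcher("123").matches());   // false

    // A relaxed variant that also accepts leading digits (all-numeric names).
    Pattern relaxed = Pattern.compile("^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");
    System.out.println(relaxed.matcher("123").matches());  // true
  }
}
{code}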

> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" seems to fail for some reason 
> (tried on insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840588#comment-13840588
 ] 

Hudson commented on HDFS-5630:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4839 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4839/])
HDFS-5630. Hook up cache directive and pool usage statistics. (wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548309)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> Hook up cache directive and pool usage statistics
> -
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hdfs-5630-1.patch, hdfs-5630-2.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5554) Add Snapshot Feature to INodeFile

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840573#comment-13840573
 ] 

Hadoop QA commented on HDFS-5554:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617209/HDFS-5554.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5645//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5645//console

This message is automatically generated.

> Add Snapshot Feature to INodeFile
> -
>
> Key: HDFS-5554
> URL: https://issues.apache.org/jira/browse/HDFS-5554
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5554.001.patch, HDFS-5554.002.patch, 
> HDFS-5554.003.patch
>
>
> Similar with HDFS-5285, we can add a FileWithSnapshot feature to INodeFile 
> and use it to replace the current INodeFileWithSnapshot.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5630:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Colin, committed to trunk.

> Hook up cache directive and pool usage statistics
> -
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hdfs-5630-1.patch, hdfs-5630-2.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5551) Rename "path.based" caching configuration options

2013-12-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-5551.


Resolution: Won't Fix

I discussed this with Andrew, and we have no plans to rename these options at 
the moment.  We can reopen this if someone thinks of a better thing to call them.

> Rename "path.based" caching configuration options
> -
>
> Key: HDFS-5551
> URL: https://issues.apache.org/jira/browse/HDFS-5551
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
>
> Some configuration options still have the "path.based" moniker, missed during 
> the big rename removing this naming convention.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5521) skip checksums when reading a cached block via non-local reads

2013-12-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-5521.


Resolution: Won't Fix

Andrew is right about this one... we still want to detect network errors, so 
let's not skip those checksums in this case.

> skip checksums when reading a cached block via non-local reads
> --
>
> Key: HDFS-5521
> URL: https://issues.apache.org/jira/browse/HDFS-5521
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> The DataNode needs to skip checksumming when reading a cached block via 
> non-local reads.  This is like HDFS-5182, but for non-short-circuit.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5634) allow BlockReaderLocal to switch between checksumming and not

2013-12-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5634:
---

Status: Patch Available  (was: Open)

> allow BlockReaderLocal to switch between checksumming and not
> -
>
> Key: HDFS-5634
> URL: https://issues.apache.org/jira/browse/HDFS-5634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5634.001.patch
>
>
> BlockReaderLocal should be able to switch between checksumming and 
> non-checksumming, so that when we get notifications that something is mlocked 
> (see HDFS-5182), we can avoid checksumming when reading from that block.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5634) allow BlockReaderLocal to switch between checksumming and not

2013-12-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5634:
---

Attachment: HDFS-5634.001.patch

This change allows the BlockReaderLocal to switch back and forth between 
mlocked and non-mlocked states.  (Later, we will hook up callbacks from code 
added by HDFS-5182 to do this).

A few other changes:
* honor the readahead parameter, so that skipping checksums doesn't mean 
skipping all buffering.  See HDFS-4710 for the discussion of why we want this.
* for reads to a direct ByteBuffer, add a "fast lane" that copies directly into 
the user-supplied ByteBuffer.  We only do this if the read is longer than our 
configured readahead.  This avoids a copy.  (A rough sketch of this decision 
is below, after the list.)
* use pread everywhere instead of read.  This means that if a client opens a 
file multiple times, they only need one set of file descriptors rather than 
multiple.  This will become more important with HDFS-5182, since that change 
will add a notification system per set of FDs.  We don't want to track too many 
of those.
* move reading of the meta file header out of the {{BlockReaderLocal}} 
constructor.  This will allow us to implement HDFS-4960 (only read version 
once).  This is mainly a win in the no-checksum case.
* avoid using a skip buffer in BlockReaderLocal#skip (implements HDFS-5574)
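
As a rough sketch of the fast-lane decision above (field and method names here 
are illustrative, not the actual BlockReaderLocal members):
{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative sketch only. */
class FastLaneSketch {
  private final FileChannel dataIn;           // block data file
  private final boolean verifyChecksum;
  private final int readaheadBytes;
  private final AtomicBoolean mlocked = new AtomicBoolean(false);
  private long filePos;

  FastLaneSketch(FileChannel dataIn, boolean verifyChecksum, int readaheadBytes) {
    this.dataIn = dataIn;
    this.verifyChecksum = verifyChecksum;
    this.readaheadBytes = readaheadBytes;
  }

  int read(ByteBuffer buf) throws IOException {
    boolean canSkipChecksum = !verifyChecksum || mlocked.get();
    if (canSkipChecksum && buf.isDirect() && buf.remaining() >= readaheadBytes) {
      // Fast lane: pread straight into the caller's direct buffer, no extra copy.
      int n = dataIn.read(buf, filePos);
      if (n > 0) {
        filePos += n;
      }
      return n;
    }
    // Slow lane: fill the internal readahead/checksum buffer first, then copy
    // into the caller's buffer (omitted here).
    return readViaInternalBuffer(buf);
  }

  private int readViaInternalBuffer(ByteBuffer buf) throws IOException {
    return -1;                                  // placeholder
  }
}
{code}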

> allow BlockReaderLocal to switch between checksumming and not
> -
>
> Key: HDFS-5634
> URL: https://issues.apache.org/jira/browse/HDFS-5634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5634.001.patch
>
>
> BlockReaderLocal should be able to switch between checksumming and 
> non-checksumming, so that when we get notifications that something is mlocked 
> (see HDFS-5182), we can avoid checksumming when reading from that block.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5634) allow BlockReaderLocal to switch between checksumming and not

2013-12-05 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5634:
--

 Summary: allow BlockReaderLocal to switch between checksumming and 
not
 Key: HDFS-5634
 URL: https://issues.apache.org/jira/browse/HDFS-5634
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


BlockReaderLocal should be able to switch between checksumming and 
non-checksumming, so that when we get notifications that something is mlocked 
(see HDFS-5182), we can avoid checksumming when reading from that block.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840527#comment-13840527
 ] 

Colin Patrick McCabe commented on HDFS-5630:


+1.  thanks, Andrew.

> Hook up cache directive and pool usage statistics
> -
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-5630-1.patch, hdfs-5630-2.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840479#comment-13840479
 ] 

Hadoop QA commented on HDFS-5630:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617191/hdfs-5630-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5644//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5644//console

This message is automatically generated.

> Hook up cache directive and pool usage statistics
> -
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-5630-1.patch, hdfs-5630-2.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5312) Refactor DFSUtil#getInfoServer to return an URI

2013-12-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5312:
-

Attachment: HDFS-5312.005.patch

> Refactor DFSUtil#getInfoServer to return an URI
> ---
>
> Key: HDFS-5312
> URL: https://issues.apache.org/jira/browse/HDFS-5312
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5312.000.patch, HDFS-5312.001.patch, 
> HDFS-5312.002.patch, HDFS-5312.003.patch, HDFS-5312.004.patch, 
> HDFS-5312.005.patch
>
>
> DFSUtil#getInfoServer() returns only the authority (i.e., host+port) when 
> searching for the http / https server. This is insufficient because HDFS-5536 
> and related jiras allow NN / DN / JN to serve only HTTPS when using the 
> HTTPS_ONLY policy.
> This JIRA addresses two issues. First, DFSUtil#getInfoServer() should return 
> a URI instead of a string, so that the scheme is an inherent part of the 
> return value, which eliminates the task of figuring out the scheme by design. 
> Second, it introduces a new function to choose whether http or https should 
> be used to connect to the remote server based on dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-05 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5633:


Status: Patch Available  (was: Open)

> Improve OfflineImageViewer to use less memory
> -
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-5633.000.patch
>
>
> Currently after we rename a file/dir which is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from FSImage, we use a temporary 
> map to record whether we have visited this inode before.
> However, in OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-05 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5633:


Attachment: HDFS-5633.000.patch

> Improve OfflineImageViewer to use less memory
> -
>
> Key: HDFS-5633
> URL: https://issues.apache.org/jira/browse/HDFS-5633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-5633.000.patch
>
>
> Currently after we rename a file/dir which is included in a snapshot, the 
> file/dir can be linked with two different reference INodes. To avoid 
> saving/loading the inode multiple times in/from FSImage, we use a temporary 
> map to record whether we have visited this inode before.
> However, in OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
> current implementation simply records all the directory inodes. This can take 
> a lot of memory when the fsimage is big. We should only record an inode in 
> the temp map when it is referenced by an INodeReference, just like what we do 
> in FSImageFormat.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5633) Improve OfflineImageViewer to use less memory

2013-12-05 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-5633:
---

 Summary: Improve OfflineImageViewer to use less memory
 Key: HDFS-5633
 URL: https://issues.apache.org/jira/browse/HDFS-5633
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor


Currently after we rename a file/dir which is included in a snapshot, the 
file/dir can be linked with two different reference INodes. To avoid 
saving/loading the inode multiple times in/from FSImage, we use a temporary map 
to record whether we have visited this inode before.

However, in OfflineImageViewer (specifically, in ImageLoaderCurrent), the 
current implementation simply records all the directory inodes. This can take a 
lot of memory when the fsimage is big. We should only record an inode in the 
temp map when it is referenced by an INodeReference, just like what we do in 
FSImageFormat.
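
Roughly, the idea is something like the following (names are illustrative, not 
ImageLoaderCurrent's actual fields):
{code}
import java.util.HashMap;
import java.util.Map;

/** Rough sketch of the proposed bookkeeping only. */
class ReferenceAwareVisitTracker {
  // Only inodes reachable through an INodeReference can be encountered twice,
  // so only those need to be remembered between visits.
  private final Map<Long, Boolean> referredInodes = new HashMap<Long, Boolean>();

  /** Returns true if the inode should be processed now, false if already seen. */
  boolean shouldProcess(long inodeId, boolean reachedViaReference) {
    if (!reachedViaReference) {
      return true;                     // ordinary inode: process, remember nothing
    }
    if (referredInodes.containsKey(inodeId)) {
      return false;                    // already processed via another reference
    }
    referredInodes.put(inodeId, Boolean.TRUE);
    return true;                       // first visit through a reference
  }
}
{code}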



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5632) Add Snapshot feature to INodeDirectory

2013-12-05 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-5632:
---

 Summary: Add Snapshot feature to INodeDirectory
 Key: HDFS-5632
 URL: https://issues.apache.org/jira/browse/HDFS-5632
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao


We will add snapshot feature to INodeDirectory and remove 
INodeDirectoryWithSnapshot in this jira.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5554) Add Snapshot Feature to INodeFile

2013-12-05 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5554:


Attachment: HDFS-5554.003.patch

Thanks for the review, Nicholas! Update the patch to address your comments.

> Add Snapshot Feature to INodeFile
> -
>
> Key: HDFS-5554
> URL: https://issues.apache.org/jira/browse/HDFS-5554
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5554.001.patch, HDFS-5554.002.patch, 
> HDFS-5554.003.patch
>
>
> Similar with HDFS-5285, we can add a FileWithSnapshot feature to INodeFile 
> and use it to replace the current INodeFileWithSnapshot.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5630:
--

Attachment: hdfs-5630-2.patch

Thanks Colin, good catches. I also missed printing filesCached in toString, so 
I put that there too.

> Hook up cache directive and pool usage statistics
> -
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-5630-1.patch, hdfs-5630-2.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5630) Hook up cache directive and pool usage statistics

2013-12-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840211#comment-13840211
 ] 

Colin Patrick McCabe commented on HDFS-5630:


{code}
+  append(", filesAffected:").append(filesNeeded).
{code}
Should be {{filesNeeded}} in the string, to match the value being appended.

{code}
 if (LOG.isTraceEnabled()) {
-  LOG.debug("Directive " + pce.getId() + " is caching " +
-  file.getFullPathName() + ": " + cachedTotal + "/" + neededTotal);
+  LOG.debug("Directive " + directive.getId() + " is caching " +
+  file.getFullPathName() + ": " + cachedTotal + "/" + neededTotal +
+  " bytes");
{code}

You're checking for trace, but printing as debug.  Pick one (probably trace).  
This is a pre-existing bug, I know.

{code}
   CachePoolInfo info = entry.getInfo();
-  String[] row = new String[5];
+  String[] row = new String[numColumns];
   if (name == null || info.getPoolName().equals(name)) {
 row[0] = info.getPoolName();
   ...
{code}

It would be nicer to use a LinkedList here with toArray at the end, I think.
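
Roughly (the {{TableListing}} call and surrounding names are taken from context 
and only illustrative):
{code}
// Build the row incrementally and convert once at the end, instead of sizing
// a String[] up front.
List<String> row = new LinkedList<String>();
if (name == null || info.getPoolName().equals(name)) {
  row.add(info.getPoolName());
  // ... add the remaining (possibly optional) columns ...
  listing.addRow(row.toArray(new String[row.size()]));
}
{code}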

{code}
+  LOG.info("XXX: while start");
{code}

Looks like a leftover.  There are some others.

> Hook up cache directive and pool usage statistics
> -
>
> Key: HDFS-5630
> URL: https://issues.apache.org/jira/browse/HDFS-5630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-5630-1.patch
>
>
> Right now we have stubs for bytes/files statistics for cache pools, but we 
> need to hook them up so they're actually being tracked.
> This is a pre-requisite for enforcing per-pool quotas.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5242) Reduce contention on DatanodeInfo instances

2013-12-05 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840205#comment-13840205
 ] 

Daryn Sharp commented on HDFS-5242:
---

[~sureshms] Are you ok with this patch?

> Reduce contention on DatanodeInfo instances
> ---
>
> Key: HDFS-5242
> URL: https://issues.apache.org/jira/browse/HDFS-5242
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-5242.patch
>
>
> Synchronization in {{DatanodeInfo}} instances causes unnecessary contention 
> between call handlers.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5514) FSNamesystem's fsLock should allow custom implementation

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840193#comment-13840193
 ] 

Hudson commented on HDFS-5514:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4834 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4834/])
Neglected to add new file in HDFS-5514 (daryn) (daryn: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548167)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java


> FSNamesystem's fsLock should allow custom implementation
> 
>
> Key: HDFS-5514
> URL: https://issues.apache.org/jira/browse/HDFS-5514
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HDFS-5514.patch, HDFS-5514.patch
>
>
> Changing {{fsLock}} from a {{ReentrantReadWriteLock}} to an API compatible 
> class that encapsulates the rwLock will allow for more sophisticated locking 
> implementations such as fine grain locking.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5514) FSNamesystem's fsLock should allow custom implementation

2013-12-05 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-5514:
--

   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Chris, I've committed to trunk and branch-2.  Note that I originally 
missed the svn add of the new class but caught it a minute later...

> FSNamesystem's fsLock should allow custom implementation
> 
>
> Key: HDFS-5514
> URL: https://issues.apache.org/jira/browse/HDFS-5514
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HDFS-5514.patch, HDFS-5514.patch
>
>
> Changing {{fsLock}} from a {{ReentrantReadWriteLock}} to an API compatible 
> class that encapsulates the rwLock will allow for more sophisticated locking 
> implementations such as fine grain locking.
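
A minimal sketch of the shape of such a wrapper (the method names here are 
illustrative, not necessarily the committed API):
{code}
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Same surface FSNamesystem already uses, but swappable for a finer-grained lock. */
class FSNamesystemLockSketch {
  private final ReentrantReadWriteLock coarseLock = new ReentrantReadWriteLock(true);

  Lock readLock()  { return coarseLock.readLock(); }
  Lock writeLock() { return coarseLock.writeLock(); }
  boolean hasWriteLock() { return coarseLock.isWriteLockedByCurrentThread(); }
  int getReadHoldCount() { return coarseLock.getReadHoldCount(); }
  // A fine-grained implementation could later replace this class without
  // touching FSNamesystem's call sites.
}
{code}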



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5514) FSNamesystem's fsLock should allow custom implementation

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840177#comment-13840177
 ] 

Hudson commented on HDFS-5514:
--

FAILURE: Integrated in Hadoop-trunk-Commit #4833 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4833/])
HDFS-5514. FSNamesystem's fsLock should allow custom implementation (daryn) 
(daryn: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548161)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java


> FSNamesystem's fsLock should allow custom implementation
> 
>
> Key: HDFS-5514
> URL: https://issues.apache.org/jira/browse/HDFS-5514
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-5514.patch, HDFS-5514.patch
>
>
> Changing {{fsLock}} from a {{ReentrantReadWriteLock}} to an API compatible 
> class that encapsulates the rwLock will allow for more sophisticated locking 
> implementations such as fine grain locking.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5536) Implement HTTP policy for Namenode and DataNode

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840117#comment-13840117
 ] 

Hudson commented on HDFS-5536:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1629 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1629/])
HDFS-5536. Implement HTTP policy for Namenode and DataNode. Contributed by 
Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547925)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java


> Implement HTTP policy for Namenode and DataNode
> ---
>
> Key: HDFS-5536
> URL: https://issues.apache.org/jira/browse/HDFS-5536
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5536.000.patch, HDFS-5536.001.patch, 
> HDFS-5536.002.patch, HDFS-5536.003.patch, HDFS-5536.004.patch, 
> HDFS-5536.005.patch, HDFS-5536.006.patch, HDFS-5536.007.patch, 
> HDFS-5536.008.patch, HDFS-5536.009.patch, HDFS-5536.010.patch
>
>
> this jira implements the http and https policy in the namenode and the 
> datanode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840120#comment-13840120
 ] 

Hudson commented on HDFS-4983:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1629 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1629/])
Revert HDFS-4983 (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547970)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
HDFS-4983. Numeric usernames do not work with WebHDFS FS. Contributed by 
Yongjun Zhang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547935)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java


> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, using a username such as "123" seems to fail for some reason 
> (tried on insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5626) dfsadmin -report shows incorrect cache values

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840123#comment-13840123
 ] 

Hudson commented on HDFS-5626:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1629 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1629/])
HDFS-5626. dfsadmin report shows incorrect values (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548000)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> dfsadmin -report shows incorrect cache values
> -
>
> Key: HDFS-5626
> URL: https://issues.apache.org/jira/browse/HDFS-5626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5626.001.patch, HDFS-5626.002.patch
>
>
>  I have a single node hadoop-trunk cluster that has caching enabled and 
> datanode max locked memory set to 536870912B.
> When I run _dfsadmin -report_, I see the following:
> {code}
> [root@hdfs-c5-nfs hadoop_tar_confs]# hdfs dfsadmin -report
> Configured Capacity: 50779643904 (47.29 GB)
> Present Capacity: 43480281088 (40.49 GB)
> DFS Remaining: 43480227840 (40.49 GB)
> DFS Used: 53248 (52 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Datanodes available: 1 (1 total, 0 dead)
> Live datanodes:
> Name: 10.20.217.45:50010 (hdfs-c5-nfs.ent.cloudera.com)
> Hostname: hdfs-c5-nfs.ent.cloudera.com
> Decommission Status : Normal
> Configured Capacity: 50779643904 (47.29 GB)
> DFS Used: 53248 (52 KB)
> Non DFS Used: 7299362816 (6.80 GB)
> DFS Remaining: 43480227840 (40.49 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 85.63%
> Configured Cache Capacity: 50779643904 (0 B)
> Cache Used: 0 (52 KB)
> Cache Remaining: 0 (40.49 GB)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Last contact: Tue Dec 03 14:59:31 PST 2013
> {code}
> The values seem to be wrong. In configured cache capacity, we have listed 
> 50779643904 but in parentheses that is translated to (0 B). The non-cache 
> related values with parentheses have correct translations.
> It says that I've used 100% of the cache, but the system does not have any 
> pools or directives.
> Also, we see that we have 0 Cache remaining, but that is translated to 
> (40.49GB).
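
For what it's worth, the parenthesized strings line up with the DFS fields just 
above (52 KB = 53248 B is the DFS Used value, and 40.49 GB = 43480227840 B is 
the DFS Remaining value) rather than with the raw cache numbers they sit next 
to, so the raw value and the human-readable value on each cache line appear to 
come from different fields.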



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5587) add debug information when NFS fails to start with duplicate user or group names

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840121#comment-13840121
 ] 

Hudson commented on HDFS-5587:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1629 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1629/])
HDFS-5587. add debug information when NFS fails to start with duplicate user or 
group names. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548028)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/IdUserGroup.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/nfs3/TestIdUserGroup.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> add debug information when NFS fails to start with duplicate user or group 
> names
> 
>
> Key: HDFS-5587
> URL: https://issues.apache.org/jira/browse/HDFS-5587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 2.2.1
>
> Attachments: HDFS-5587.001.patch, HDFS-5587.002.patch, 
> HDFS-5587.003.patch
>
>
> When the host provides duplicate user or group names, NFS will not start and 
> print errors like the following:
> {noformat}
> ... ... 
> 13/11/25 18:11:52 INFO nfs3.Nfs3Base: registered UNIX signal handlers for 
> [TERM, HUP, INT]
> Exception in thread "main" java.lang.IllegalArgumentException: value already 
> present: s-iss
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
> at 
> com.google.common.collect.AbstractBiMap.putInBothMaps(AbstractBiMap.java:112)
> at com.google.common.collect.AbstractBiMap.put(AbstractBiMap.java:96)
> at com.google.common.collect.HashBiMap.put(HashBiMap.java:85)
> at 
> org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMapInternal(IdUserGroup.java:85)
> at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMaps(IdUserGroup.java:110)
> at org.apache.hadoop.nfs.nfs3.IdUserGroup.(IdUserGroup.java:54)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.(RpcProgramNfs3.java:172)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.(RpcProgramNfs3.java:164)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.(Nfs3.java:41)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:52)
> 13/11/25 18:11:54 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
> ... ...
> {noformat}
> The reason NFS should not start is that HDFS (in a non-Kerberos cluster) uses 
> the name as the only way to identify a user. Some Linux boxes can have two 
> users with the same name but different user IDs. Linux might be able to work 
> fine with that most of the time. However, when the NFS gateway talks to HDFS, 
> HDFS accepts only the user name. That is, from HDFS' point of view, these two 
> different users are the same user even though they are different on the Linux 
> box.
> The duplicate names on Linux systems are sometimes caused by legacy system 
> configurations or combined name services.
> Regardless, the NFS gateway should print some help information so the user 
> can understand the error and remove the duplicated names before restarting 
> NFS.
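
To make the failure mode concrete, a tiny standalone reproduction of the Guava 
behavior seen in the stack trace above (the user IDs are made up):
{code}
import com.google.common.collect.BiMap;
import com.google.common.collect.HashBiMap;

public class DuplicateNameRepro {
  public static void main(String[] args) {
    BiMap<Integer, String> uidToName = HashBiMap.create();
    uidToName.put(1001, "s-iss");
    // A second UID mapped to the same name violates the bimap's uniqueness
    // constraint and throws IllegalArgumentException: value already present: s-iss
    uidToName.put(1002, "s-iss");
  }
}
{code}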



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5555) CacheAdmin commands fail when first listed NameNode is in Standby

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840119#comment-13840119
 ] 

Hudson commented on HDFS-:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1629 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1629/])
HDFS-. CacheAdmin commands fail when first listed NameNode is in Standby 
(jxiang via cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547895)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> CacheAdmin commands fail when first listed NameNode is in Standby
> -
>
> Key: HDFS-5555
> URL: https://issues.apache.org/jira/browse/HDFS-5555
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Jimmy Xiang
> Fix For: 3.0.0
>
> Attachments: trunk-.patch, trunk-_v2.patch
>
>
> I am on an HA-enabled cluster. The NameNodes are on host-1 and host-2.
> In the configuration, we list the host-1 NN first and the host-2 NN second in 
> the _dfs.ha.namenodes.ns1_ property (where _ns1_ is the name of the 
> nameservice).
> If the host-1 NN is Standby and the host-2 NN is Active, some CacheAdmin 
> commands fail, complaining that the operation is not supported in standby 
> state.
> e.g.
> {code}
> bash-4.1$ hdfs cacheadmin -removeDirectives -path /user/hdfs2
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1501)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1082)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCacheDirectives(FSNamesystem.java:6892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1249)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCacheDirectives(ClientNamenodeProtocolServerSideTranslatorPB.java:1087)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1499)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Cl
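The gist of the fix referenced above is that the client-side cache iterators should go through the configured failover logic instead of pinning to the first listed NameNode. As a rough, self-contained illustration of that retry pattern only — the Endpoint interface and exception below are hypothetical stand-ins, not the real HDFS client classes:

{code}
import java.util.Arrays;
import java.util.List;

// Rough sketch of "try each configured NameNode, skip the standby" failover.
// Endpoint, StandbyUnavailableException and listDirectives() are hypothetical
// stand-ins for the real HDFS client plumbing.
public class FailoverSketch {
  static class StandbyUnavailableException extends Exception {}

  interface Endpoint {
    List<String> listDirectives() throws StandbyUnavailableException;
  }

  static List<String> listWithFailover(List<Endpoint> namenodes) {
    for (Endpoint nn : namenodes) {
      try {
        return nn.listDirectives();          // only the active NN answers
      } catch (StandbyUnavailableException e) {
        // the standby rejected the READ; fall through to the next configured NN
      }
    }
    throw new IllegalStateException("no active NameNode reachable");
  }

  public static void main(String[] args) {
    Endpoint standby = () -> { throw new StandbyUnavailableException(); };
    Endpoint active = () -> Arrays.asList("/user/hdfs2");
    System.out.println(listWithFailover(Arrays.asList(standby, active)));
  }
}
{code}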

[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840081#comment-13840081
 ] 

Hudson commented on HDFS-4983:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1603 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1603/])
Revert HDFS-4983 (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547970)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
HDFS-4983. Numeric usernames do not work with WebHDFS FS. Contributed by 
Yongjun Zhang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547935)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java


> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, a username such as "123" is rejected (tried on an insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
> {code}
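A tiny standalone check makes the behaviour easy to reproduce and shows the kind of relaxed pattern a fix could use; the RELAXED pattern below is only illustrative and is not necessarily the one the attached patches adopt.

{code}
import java.util.regex.Pattern;

// Sketch of the WebHDFS user-name validation discussed above. STRICT is the
// pattern quoted from UserParam; RELAXED is one illustrative alternative that
// also accepts a leading digit.
public class UserNamePatternCheck {
  static final Pattern STRICT =
      Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");
  static final Pattern RELAXED =
      Pattern.compile("^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");

  public static void main(String[] args) {
    for (String user : new String[] {"123", "hdfs", "svc-account$"}) {
      System.out.printf("%-12s strict=%-5b relaxed=%b%n", user,
          STRICT.matcher(user).matches(), RELAXED.matcher(user).matches());
    }
  }
}
{code}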



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5587) add debug information when NFS fails to start with duplicate user or group names

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840082#comment-13840082
 ] 

Hudson commented on HDFS-5587:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1603 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1603/])
HDFS-5587. add debug information when NFS fails to start with duplicate user or 
group names. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548028)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/IdUserGroup.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/nfs3/TestIdUserGroup.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> add debug information when NFS fails to start with duplicate user or group 
> names
> 
>
> Key: HDFS-5587
> URL: https://issues.apache.org/jira/browse/HDFS-5587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 2.2.1
>
> Attachments: HDFS-5587.001.patch, HDFS-5587.002.patch, 
> HDFS-5587.003.patch
>
>
> When the host provides duplicate user or group names, NFS will not start and 
> print errors like the following:
> {noformat}
> ... ... 
> 13/11/25 18:11:52 INFO nfs3.Nfs3Base: registered UNIX signal handlers for 
> [TERM, HUP, INT]
> Exception in thread "main" java.lang.IllegalArgumentException: value already 
> present: s-iss
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
> at 
> com.google.common.collect.AbstractBiMap.putInBothMaps(AbstractBiMap.java:112)
> at com.google.common.collect.AbstractBiMap.put(AbstractBiMap.java:96)
> at com.google.common.collect.HashBiMap.put(HashBiMap.java:85)
> at 
> org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMapInternal(IdUserGroup.java:85)
> at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMaps(IdUserGroup.java:110)
> at org.apache.hadoop.nfs.nfs3.IdUserGroup.<init>(IdUserGroup.java:54)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:172)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:164)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:41)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:52)
> 13/11/25 18:11:54 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
> ... ...
> {noformat}
> The reason NFS should not start is that HDFS (in a non-Kerberos cluster) uses 
> the name as the only way to identify a user. A Linux box can have two users 
> with the same name but different user IDs, and Linux might work fine with that 
> most of the time. However, when the NFS gateway talks to HDFS, HDFS accepts 
> only the user name. That is, from HDFS' point of view, these two different 
> users are the same user even though they are different on the Linux box.
> Duplicate names on Linux systems are sometimes the result of legacy system 
> configurations or combined name services.
> Regardless, the NFS gateway should print some help information so the user can 
> understand the error and remove the duplicate names before restarting NFS.
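Until the gateway reports this more clearly, the offending entries are easy to find up front. A small pre-flight sketch over getent passwd style lines (the sample lines are invented) might look like:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Pre-flight sketch: scan "name:x:uid:gid:..." lines (as printed by
// `getent passwd`) and report any user name that maps to more than one UID.
// The sample lines are invented; in practice they come from the name service.
public class DuplicateUserCheck {
  public static void main(String[] args) {
    List<String> passwd = Arrays.asList(
        "hdfs:x:496:496::/var/lib/hadoop-hdfs:/bin/bash",
        "s-iss:x:1001:1001::/home/s-iss:/bin/bash",
        "s-iss:x:2001:2001::/home/s-iss:/bin/bash");

    Map<String, List<String>> uidsByName = new HashMap<String, List<String>>();
    for (String line : passwd) {
      String[] fields = line.split(":");
      if (!uidsByName.containsKey(fields[0])) {
        uidsByName.put(fields[0], new ArrayList<String>());
      }
      uidsByName.get(fields[0]).add(fields[2]);
    }
    for (Map.Entry<String, List<String>> e : uidsByName.entrySet()) {
      if (e.getValue().size() > 1) {
        System.err.println("Duplicate user name '" + e.getKey() + "' with UIDs "
            + e.getValue() + "; remove one entry before starting the NFS gateway.");
      }
    }
  }
}
{code}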



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5555) CacheAdmin commands fail when first listed NameNode is in Standby

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840080#comment-13840080
 ] 

Hudson commented on HDFS-5555:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1603 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1603/])
HDFS-5555. CacheAdmin commands fail when first listed NameNode is in Standby 
(jxiang via cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547895)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> CacheAdmin commands fail when first listed NameNode is in Standby
> -
>
> Key: HDFS-5555
> URL: https://issues.apache.org/jira/browse/HDFS-5555
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Jimmy Xiang
> Fix For: 3.0.0
>
> Attachments: trunk-.patch, trunk-_v2.patch
>
>
> I am on an HA-enabled cluster. The NameNodes are on host-1 and host-2.
> In the configuration, we list the host-1 NN first and the host-2 NN second in 
> the _dfs.ha.namenodes.ns1_ property (where _ns1_ is the name of the 
> nameservice).
> If the host-1 NN is Standby and the host-2 NN is Active, some CacheAdmin 
> commands fail, complaining that the operation is not supported in standby 
> state.
> e.g.
> {code}
> bash-4.1$ hdfs cacheadmin -removeDirectives -path /user/hdfs2
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1501)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1082)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCacheDirectives(FSNamesystem.java:6892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1249)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCacheDirectives(ClientNamenodeProtocolServerSideTranslatorPB.java:1087)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1499)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:

[jira] [Commented] (HDFS-5536) Implement HTTP policy for Namenode and DataNode

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840078#comment-13840078
 ] 

Hudson commented on HDFS-5536:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1603 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1603/])
HDFS-5536. Implement HTTP policy for Namenode and DataNode. Contributed by 
Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547925)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java


> Implement HTTP policy for Namenode and DataNode
> ---
>
> Key: HDFS-5536
> URL: https://issues.apache.org/jira/browse/HDFS-5536
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5536.000.patch, HDFS-5536.001.patch, 
> HDFS-5536.002.patch, HDFS-5536.003.patch, HDFS-5536.004.patch, 
> HDFS-5536.005.patch, HDFS-5536.006.patch, HDFS-5536.007.patch, 
> HDFS-5536.008.patch, HDFS-5536.009.patch, HDFS-5536.010.patch
>
>
> This jira implements the HTTP and HTTPS policy in the NameNode and the 
> DataNode.
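As a rough sketch of what a tri-state policy of this kind looks like; the constant names mirror HTTP_ONLY / HTTPS_ONLY / HTTP_AND_HTTPS, while the configuration key mentioned in the comment is an assumption for illustration, not quoted from the patch:

{code}
// Toy sketch of a tri-state HTTP policy like the one this jira introduces.
// The constant names mirror HTTP_ONLY / HTTPS_ONLY / HTTP_AND_HTTPS; reading it
// from a key such as dfs.http.policy is an assumption for illustration only.
public class HttpPolicySketch {
  enum Policy {
    HTTP_ONLY, HTTPS_ONLY, HTTP_AND_HTTPS;

    boolean isHttpEnabled()  { return this != HTTPS_ONLY; }
    boolean isHttpsEnabled() { return this != HTTP_ONLY; }
  }

  static Policy parse(String value) {
    try {
      return Policy.valueOf(value.trim().toUpperCase());
    } catch (IllegalArgumentException e) {
      return Policy.HTTP_ONLY;   // conservative fallback for unknown values
    }
  }

  public static void main(String[] args) {
    Policy p = parse("HTTP_AND_HTTPS");
    System.out.println("http=" + p.isHttpEnabled() + " https=" + p.isHttpsEnabled());
  }
}
{code}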



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5626) dfsadmin -report shows incorrect cache values

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840084#comment-13840084
 ] 

Hudson commented on HDFS-5626:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1603 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1603/])
HDFS-5626. dfsadmin report shows incorrect values (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548000)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> dfsadmin -report shows incorrect cache values
> -
>
> Key: HDFS-5626
> URL: https://issues.apache.org/jira/browse/HDFS-5626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5626.001.patch, HDFS-5626.002.patch
>
>
>  I have a single node hadoop-trunk cluster that has caching enabled and 
> datanode max locked memory set to 536870912B.
> When I run _dfsadmin -report_, I see the following:
> {code}
> [root@hdfs-c5-nfs hadoop_tar_confs]# hdfs dfsadmin -report
> Configured Capacity: 50779643904 (47.29 GB)
> Present Capacity: 43480281088 (40.49 GB)
> DFS Remaining: 43480227840 (40.49 GB)
> DFS Used: 53248 (52 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Datanodes available: 1 (1 total, 0 dead)
> Live datanodes:
> Name: 10.20.217.45:50010 (hdfs-c5-nfs.ent.cloudera.com)
> Hostname: hdfs-c5-nfs.ent.cloudera.com
> Decommission Status : Normal
> Configured Capacity: 50779643904 (47.29 GB)
> DFS Used: 53248 (52 KB)
> Non DFS Used: 7299362816 (6.80 GB)
> DFS Remaining: 43480227840 (40.49 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 85.63%
> Configured Cache Capacity: 50779643904 (0 B)
> Cache Used: 0 (52 KB)
> Cache Remaining: 0 (40.49 GB)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Last contact: Tue Dec 03 14:59:31 PST 2013
> {code}
> The values seem to be wrong. For Configured Cache Capacity, we list 
> 50779643904, but in parentheses that is translated to (0 B). The non-cache 
> values with parentheses have correct translations.
> It says that I've used 100% of the cache, but the system does not have any 
> pools or directives.
> Also, we see that we have 0 Cache Remaining, but that is translated to 
> (40.49 GB).
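For comparison, here is what consistent pairing of those raw cache numbers with their human-readable forms would look like; byteDesc() below is a simplified stand-in for Hadoop's formatter, so the rounding may differ slightly.

{code}
// Sketch showing how the raw cache numbers above should pair with their
// human-readable forms. byteDesc() is a simplified stand-in for Hadoop's
// formatter, not the exact StringUtils implementation.
public class CacheReportSketch {
  static String byteDesc(long bytes) {
    String[] units = {"B", "KB", "MB", "GB", "TB"};
    double v = bytes;
    int i = 0;
    while (v >= 1024 && i < units.length - 1) { v /= 1024; i++; }
    return String.format("%.2f %s", v, units[i]);
  }

  public static void main(String[] args) {
    long capacity = 50779643904L;   // raw value from the report above
    long used = 0L;
    long remaining = capacity - used;
    System.out.printf("Configured Cache Capacity: %d (%s)%n", capacity, byteDesc(capacity));
    System.out.printf("Cache Used: %d (%s)%n", used, byteDesc(used));
    System.out.printf("Cache Remaining: %d (%s)%n", remaining, byteDesc(remaining));
    System.out.printf("Cache Used%%: %.2f%%%n",
        capacity == 0 ? 0.0 : 100.0 * used / capacity);
  }
}
{code}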



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840019#comment-13840019
 ] 

Hudson commented on HDFS-4983:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #412 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/412/])
Revert HDFS-4983 (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547970)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
HDFS-4983. Numeric usernames do not work with WebHDFS FS. Contributed by 
Yongjun Zhang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547935)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java


> Numeric usernames do not work with WebHDFS FS
> -
>
> Key: HDFS-4983
> URL: https://issues.apache.org/jira/browse/HDFS-4983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Yongjun Zhang
>  Labels: patch
> Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
> HDFS-4983.003.patch
>
>
> Per the file 
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
>  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
> Given this, a username such as "123" is rejected (tried on an insecure setup):
> {code}
> [123@host-1 ~]$ whoami
> 123
> [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
> -ls: Invalid value: "123" does not belong to the domain 
> ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5626) dfsadmin -report shows incorrect cache values

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840023#comment-13840023
 ] 

Hudson commented on HDFS-5626:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #412 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/412/])
HDFS-5626. dfsadmin report shows incorrect values (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548000)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> dfsadmin -report shows incorrect cache values
> -
>
> Key: HDFS-5626
> URL: https://issues.apache.org/jira/browse/HDFS-5626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5626.001.patch, HDFS-5626.002.patch
>
>
>  I have a single node hadoop-trunk cluster that has caching enabled and 
> datanode max locked memory set to 536870912B.
> When I run _dfsadmin -report_, I see the following:
> {code}
> [root@hdfs-c5-nfs hadoop_tar_confs]# hdfs dfsadmin -report
> Configured Capacity: 50779643904 (47.29 GB)
> Present Capacity: 43480281088 (40.49 GB)
> DFS Remaining: 43480227840 (40.49 GB)
> DFS Used: 53248 (52 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Datanodes available: 1 (1 total, 0 dead)
> Live datanodes:
> Name: 10.20.217.45:50010 (hdfs-c5-nfs.ent.cloudera.com)
> Hostname: hdfs-c5-nfs.ent.cloudera.com
> Decommission Status : Normal
> Configured Capacity: 50779643904 (47.29 GB)
> DFS Used: 53248 (52 KB)
> Non DFS Used: 7299362816 (6.80 GB)
> DFS Remaining: 43480227840 (40.49 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 85.63%
> Configured Cache Capacity: 50779643904 (0 B)
> Cache Used: 0 (52 KB)
> Cache Remaining: 0 (40.49 GB)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Last contact: Tue Dec 03 14:59:31 PST 2013
> {code}
> The values seem to be wrong. For Configured Cache Capacity, we list 
> 50779643904, but in parentheses that is translated to (0 B). The non-cache 
> values with parentheses have correct translations.
> It says that I've used 100% of the cache, but the system does not have any 
> pools or directives.
> Also, we see that we have 0 Cache Remaining, but that is translated to 
> (40.49 GB).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5587) add debug information when NFS fails to start with duplicate user or group names

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840020#comment-13840020
 ] 

Hudson commented on HDFS-5587:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #412 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/412/])
HDFS-5587. add debug information when NFS fails to start with duplicate user or 
group names. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548028)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/IdUserGroup.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/nfs3/TestIdUserGroup.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> add debug information when NFS fails to start with duplicate user or group 
> names
> 
>
> Key: HDFS-5587
> URL: https://issues.apache.org/jira/browse/HDFS-5587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 2.2.1
>
> Attachments: HDFS-5587.001.patch, HDFS-5587.002.patch, 
> HDFS-5587.003.patch
>
>
> When the host provides duplicate user or group names, NFS will not start and 
> print errors like the following:
> {noformat}
> ... ... 
> 13/11/25 18:11:52 INFO nfs3.Nfs3Base: registered UNIX signal handlers for 
> [TERM, HUP, INT]
> Exception in thread "main" java.lang.IllegalArgumentException: value already 
> present: s-iss
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
> at 
> com.google.common.collect.AbstractBiMap.putInBothMaps(AbstractBiMap.java:112)
> at com.google.common.collect.AbstractBiMap.put(AbstractBiMap.java:96)
> at com.google.common.collect.HashBiMap.put(HashBiMap.java:85)
> at 
> org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMapInternal(IdUserGroup.java:85)
> at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMaps(IdUserGroup.java:110)
> at org.apache.hadoop.nfs.nfs3.IdUserGroup.<init>(IdUserGroup.java:54)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:172)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:164)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:41)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:52)
> 13/11/25 18:11:54 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
> ... ...
> {noformat}
> The reason NFS should not start is that HDFS (in a non-Kerberos cluster) uses 
> the name as the only way to identify a user. A Linux box can have two users 
> with the same name but different user IDs, and Linux might work fine with that 
> most of the time. However, when the NFS gateway talks to HDFS, HDFS accepts 
> only the user name. That is, from HDFS' point of view, these two different 
> users are the same user even though they are different on the Linux box.
> Duplicate names on Linux systems are sometimes the result of legacy system 
> configurations or combined name services.
> Regardless, the NFS gateway should print some help information so the user can 
> understand the error and remove the duplicate names before restarting NFS.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5555) CacheAdmin commands fail when first listed NameNode is in Standby

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840018#comment-13840018
 ] 

Hudson commented on HDFS-5555:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #412 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/412/])
HDFS-5555. CacheAdmin commands fail when first listed NameNode is in Standby 
(jxiang via cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547895)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolIterator.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> CacheAdmin commands fail when first listed NameNode is in Standby
> -
>
> Key: HDFS-5555
> URL: https://issues.apache.org/jira/browse/HDFS-5555
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Jimmy Xiang
> Fix For: 3.0.0
>
> Attachments: trunk-.patch, trunk-_v2.patch
>
>
> I am on an HA-enabled cluster. The NameNodes are on host-1 and host-2.
> In the configuration, we list the host-1 NN first and the host-2 NN second in 
> the _dfs.ha.namenodes.ns1_ property (where _ns1_ is the name of the 
> nameservice).
> If the host-1 NN is Standby and the host-2 NN is Active, some CacheAdmin 
> commands fail, complaining that the operation is not supported in standby 
> state.
> e.g.
> {code}
> bash-4.1$ hdfs cacheadmin -removeDirectives -path /user/hdfs2
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1501)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1082)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCacheDirectives(FSNamesystem.java:6892)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1249)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCacheDirectives(ClientNamenodeProtocolServerSideTranslatorPB.java:1087)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1499)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:13

[jira] [Commented] (HDFS-5536) Implement HTTP policy for Namenode and DataNode

2013-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840016#comment-13840016
 ] 

Hudson commented on HDFS-5536:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #412 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/412/])
HDFS-5536. Implement HTTP policy for Namenode and DataNode. Contributed by 
Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547925)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java


> Implement HTTP policy for Namenode and DataNode
> ---
>
> Key: HDFS-5536
> URL: https://issues.apache.org/jira/browse/HDFS-5536
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5536.000.patch, HDFS-5536.001.patch, 
> HDFS-5536.002.patch, HDFS-5536.003.patch, HDFS-5536.004.patch, 
> HDFS-5536.005.patch, HDFS-5536.006.patch, HDFS-5536.007.patch, 
> HDFS-5536.008.patch, HDFS-5536.009.patch, HDFS-5536.010.patch
>
>
> This jira implements the HTTP and HTTPS policy in the NameNode and the 
> DataNode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4114) Deprecate the BackupNode and CheckpointNode in 2.0

2013-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13839962#comment-13839962
 ] 

Hadoop QA commented on HDFS-4114:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617121/HDFS-4114.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5643//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5643//console

This message is automatically generated.

> Deprecate the BackupNode and CheckpointNode in 2.0
> --
>
> Key: HDFS-4114
> URL: https://issues.apache.org/jira/browse/HDFS-4114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eli Collins
>Assignee: Suresh Srinivas
> Attachments: HDFS-4114.patch
>
>
> Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the 
> BackupNode and CheckpointNode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-3405) Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged fsimages

2013-12-05 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13839947#comment-13839947
 ] 

Vinay commented on HDFS-3405:
-

Test failure is unrelated. The test passes locally.

> Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged 
> fsimages
> 
>
> Key: HDFS-3405
> URL: https://issues.apache.org/jira/browse/HDFS-3405
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 3.0.0, 2.0.5-alpha
>Reporter: Aaron T. Myers
>Assignee: Vinay
> Attachments: HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 
> HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 
> HDFS-3405.patch
>
>
> As Todd points out in [this 
> comment|https://issues.apache.org/jira/browse/HDFS-3404?focusedCommentId=13272986&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13272986],
>  the current scheme for a checkpointing daemon to upload a merged fsimage 
> file to an NN is to issue an HTTP get request to tell the target NN to issue 
> another GET request back to the checkpointing daemon to retrieve the merged 
> fsimage file. There's no fundamental reason the checkpointing daemon can't 
> just use an HTTP POST or PUT to send back the merged fsimage file, rather 
> than the double-GET scheme.
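A bare-bones illustration of the single-request alternative, in which the checkpointing daemon streams the merged fsimage to the NameNode with one HTTP PUT; the "/imagetransfer" path, port, and query parameter below are placeholders for the sketch, not the servlet interface the patch actually defines.

{code}
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch only: push a merged fsimage to the NameNode in a single HTTP PUT.
// The "/imagetransfer" path and the txid query parameter are hypothetical
// placeholders, not the endpoint defined by the patch.
public class PutFsImageSketch {
  public static void main(String[] args) throws Exception {
    Path fsimage = Paths.get(args[0]);                   // local merged fsimage
    URL url = new URL("http://nn-host:50070/imagetransfer?txid=" + args[1]);

    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setFixedLengthStreamingMode(Files.size(fsimage)); // stream, don't buffer
    conn.setRequestProperty("Content-Type", "application/octet-stream");

    try (InputStream in = Files.newInputStream(fsimage);
         OutputStream out = conn.getOutputStream()) {
      byte[] buf = new byte[64 * 1024];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
    }
    System.out.println("NameNode responded: " + conn.getResponseCode());
  }
}
{code}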



--
This message was sent by Atlassian JIRA
(v6.1#6144)

