[jira] [Created] (HDFS-6536) FileSystem.Cache.closeAll() threw an exception due to authentication failure at the end of a webhdfs client session

2014-06-15 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-6536:
---

 Summary: FileSystem.Cache.closeAll() threw an exception due to 
authentication failure at the end of a webhdfs client session
 Key: HDFS-6536
 URL: https://issues.apache.org/jira/browse/HDFS-6536
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang


With the small client program below, when running as user root, an exception is 
thrown at the end of the client run. The cluster is HA with security enabled, and 
the client config has the following setting:
{code}
  <property>
    <name>fs.defaultFS</name>
    <value>webhdfs://ns1</value>
  </property>
{code}

The client program:

{code}
public class kclient1 {
  public static void main(String[] args) throws IOException {
    final Configuration conf = new Configuration();
    // a non-root user
    final UserGroupInformation ugi =
        UserGroupInformation.getUGIFromTicketCache("/tmp/krb5cc_496", "h...@xyz.com");

    System.out.println("Starting");
    ugi.doAs(new PrivilegedAction<Object>() {
      @Override
      public Object run() {
        try {
          FileSystem fs = FileSystem.get(conf);
          String renewer = "abcdefg";
          fs.addDelegationTokens(renewer, ugi.getCredentials());
          // Just to prove that we connected with right credentials.
          fs.getFileStatus(new Path("/"));
          return fs.getDelegationToken(renewer);
        } catch (Exception e) {
          e.printStackTrace();
          return null;
        }
      }
    });
    System.out.println("THE END");
  }
}
{code}

Output:
{code}
[root@yjzc5w-1 tmp2]# hadoop --config /tmp2/conf jar kclient1.jar 
kclient1.kclient1
Starting
14/06/14 20:38:51 WARN ssl.FileBasedKeyStoresFactory: The property 
'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
14/06/14 20:38:52 INFO web.WebHdfsFileSystem: Retrying connect to namenode: 
yjzc5w-2.xyz.com/172.26.3.87:20101. Already tried 0 time(s); retry policy is 
org.apache.hadoop.io.retry.RetryPolicies$FailoverOnNetworkExceptionRetry@1a92210,
 delay 0ms.
To prove that connection with right credentials
to get file status updated updated 7
THE END
14/06/14 20:38:53 WARN ssl.FileBasedKeyStoresFactory: The property 
'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
14/06/14 20:38:53 WARN security.UserGroupInformation: 
PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
14/06/14 20:38:53 INFO fs.FileSystem: FileSystem.Cache.closeAll() threw an 
exception:
java.io.IOException: Authentication failed, 
url=http://yjzc5w-2.xyz.com:20101/webhdfs/v1/?op=CANCELDELEGATIONTOKEN&user.name=root&token=HAAEaGRmcwRoZGZzAIoBRp2bNByKAUbBp7gcbBQUD6vWmRYJRv03XZj7Jajf8PU8CB8SV0VCSERGUyBkZWxlZ2F0aW9uC2hhLWhkZnM6bnMx
[root@yjzc5w-1 tmp2]# 
{code}

We can see that the exception is thrown at the end of the client run.

I found that the problem is that at the end of the client run, much like a C++ 
destructor being called at the end of an object's scope, the tokens stored in the 
filesystem cache get cancelled with the following call:

{code}
final class TokenAspect<T extends FileSystem & Renewable> {
  @InterfaceAudience.Private
  public static class TokenManager extends TokenRenewer {
    @Override
    public void cancel(Token<?> token, Configuration conf) throws IOException {
      getInstance(token, conf).cancelDelegationToken(token); <==
    }
{code}
where getInstance(token, conf) creates a FileSystem as user root, then calls 
cancelDelegationToken against the server side. However, the server doesn't have a 
root credential, so it throws this exception.

When I run the same program as user hdfs, then it's fine.

I think if we run the call to cancelDelegationToken as the user who created the 
token initially (hdfs in this case), then it should work fine.
However, the information about which user created the token is not available at 
that point. 
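
To make this concrete, a purely hypothetical sketch of what cancelling as the token's original owner could look like; the lookupOwner() helper is invented for illustration, since as noted above TokenManager has no way to obtain that UGI today:

{code}
// Hypothetical only: cancel the delegation token as its original owner instead
// of the current (root) user. lookupOwner() does not exist in TokenManager; it
// stands in for whatever mechanism would supply the owning UGI.
@Override
public void cancel(final Token<?> token, final Configuration conf) throws IOException {
  final UserGroupInformation ownerUgi = lookupOwner(token);  // hypothetical helper
  try {
    ownerUgi.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws IOException {
        getInstance(token, conf).cancelDelegationToken(token);
        return null;
      }
    });
  } catch (InterruptedException e) {
    throw new IOException(e);
  }
}
{code}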

Hi [~daryn], I wonder if you could give a quick comment, really appreciate it!




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6536) FileSystem.Cache.closeAll() throws authentication exception at the end of a webhdfs client

2014-06-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6536:


Summary: FileSystem.Cache.closeAll() throws authentication exception at the 
end of a webhdfs client  (was: FileSystem.Cache.closeAll() threw an exception 
due to authentication failure at the end of a webhdfs client session)

 FileSystem.Cache.closeAll() throws authentication exception at the end of a 
 webhdfs client
 --

 Key: HDFS-6536
 URL: https://issues.apache.org/jira/browse/HDFS-6536
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang

 With the small client program below, when running as user root, an exception is 
 thrown at the end of the client run. The cluster is HA with security enabled, and 
 the client config has the following setting:
 {code}
   <property>
     <name>fs.defaultFS</name>
     <value>webhdfs://ns1</value>
   </property>
 {code}
 The client program:
 {code}
 public class kclient1 {
   public static void main(String[] args) throws IOException {
     final Configuration conf = new Configuration();
     // a non-root user
     final UserGroupInformation ugi =
         UserGroupInformation.getUGIFromTicketCache("/tmp/krb5cc_496", "h...@xyz.com");
     System.out.println("Starting");
     ugi.doAs(new PrivilegedAction<Object>() {
       @Override
       public Object run() {
         try {
           FileSystem fs = FileSystem.get(conf);
           String renewer = "abcdefg";
           fs.addDelegationTokens(renewer, ugi.getCredentials());
           // Just to prove that we connected with right credentials.
           fs.getFileStatus(new Path("/"));
           return fs.getDelegationToken(renewer);
         } catch (Exception e) {
           e.printStackTrace();
           return null;
         }
       }
     });
     System.out.println("THE END");
   }
 }
 {code}
 Output:
 {code}
 [root@yjzc5w-1 tmp2]# hadoop --config /tmp2/conf jar kclient1.jar 
 kclient1.kclient1
 Starting
 14/06/14 20:38:51 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 14/06/14 20:38:52 INFO web.WebHdfsFileSystem: Retrying connect to namenode: 
 yjzc5w-2.xyz.com/172.26.3.87:20101. Already tried 0 time(s); retry policy is 
 org.apache.hadoop.io.retry.RetryPolicies$FailoverOnNetworkExceptionRetry@1a92210,
  delay 0ms.
 To prove that connection with right credentials
 to get file status updated updated 7
 THE END
 14/06/14 20:38:53 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 14/06/14 20:38:53 WARN security.UserGroupInformation: 
 PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)
 14/06/14 20:38:53 INFO fs.FileSystem: FileSystem.Cache.closeAll() threw an 
 exception:
 java.io.IOException: Authentication failed, 
 url=http://yjzc5w-2.xyz.com:20101/webhdfs/v1/?op=CANCELDELEGATIONTOKEN&user.name=root&token=HAAEaGRmcwRoZGZzAIoBRp2bNByKAUbBp7gcbBQUD6vWmRYJRv03XZj7Jajf8PU8CB8SV0VCSERGUyBkZWxlZ2F0aW9uC2hhLWhkZnM6bnMx
 [root@yjzc5w-1 tmp2]# 
 {code}
 We can see that the exception is thrown at the end of the client run.
 I found that the problem is that at the end of the client run, much like a C++ 
 destructor being called at the end of an object's scope, the tokens stored in the 
 filesystem cache get cancelled with the following call:
 {code}
 final class TokenAspect<T extends FileSystem & Renewable> {
   @InterfaceAudience.Private
   public static class TokenManager extends TokenRenewer {
     @Override
     public void cancel(Token<?> token, Configuration conf) throws IOException {
       getInstance(token, conf).cancelDelegationToken(token); <==
     }
 {code}
 where getInstance(token, conf) creates a FileSystem as user root, then calls 
 cancelDelegationToken against the server side. However, the server doesn't have a 
 root credential, so it throws this exception.
 When I run the same program as user hdfs, then it's fine.
 I think if we run the call to cancelDelegationToken as the user who created 
 the token initially (hdfs in this case), then it should work fine.
 However, the information about which user created the token is not available 
 at that point. 
 Hi [~daryn], I wonder if you could give a quick comment, really appreciate it!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

[jira] [Updated] (HDFS-6536) FileSystem.Cache.closeAll() throws authentication exception at the end of a webhdfs client

2014-06-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6536:


Description: 
With the small client program below, when running as user root, an exception is 
thrown at the end of the client run. The cluster is HA with security enabled, and 
the client config has the following setting:
{code}
  <property>
    <name>fs.defaultFS</name>
    <value>webhdfs://ns1</value>
  </property>
{code}

The client program:

{code}
public class kclient1 {
  public static void main(String[] args) throws IOException {
    final Configuration conf = new Configuration();
    // a non-root user
    final UserGroupInformation ugi =
        UserGroupInformation.getUGIFromTicketCache("/tmp/krb5cc_496", "h...@xyz.com");

    System.out.println("Starting");
    ugi.doAs(new PrivilegedAction<Object>() {
      @Override
      public Object run() {
        try {
          FileSystem fs = FileSystem.get(conf);
          String renewer = "abcdefg";
          fs.addDelegationTokens(renewer, ugi.getCredentials());
          // Just to prove that we connected with right credentials.
          fs.getFileStatus(new Path("/"));
          return fs.getDelegationToken(renewer);
        } catch (Exception e) {
          e.printStackTrace();
          return null;
        }
      }
    });
    System.out.println("THE END");
  }
}
{code}

Output:
{code}
[root@yjzc5w-1 tmp2]# hadoop --config /tmp2/conf jar kclient1.jar 
kclient1.kclient1
Starting
14/06/14 20:38:51 WARN ssl.FileBasedKeyStoresFactory: The property 
'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
14/06/14 20:38:52 INFO web.WebHdfsFileSystem: Retrying connect to namenode: 
yjzc5w-2.xyz.com/172.26.3.87:20101. Already tried 0 time(s); retry policy is 
org.apache.hadoop.io.retry.RetryPolicies$FailoverOnNetworkExceptionRetry@1a92210,
 delay 0ms.
To prove that connection with right credentials
to get file status updated updated 7
THE END
14/06/14 20:38:53 WARN ssl.FileBasedKeyStoresFactory: The property 
'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
14/06/14 20:38:53 WARN security.UserGroupInformation: 
PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
14/06/14 20:38:53 INFO fs.FileSystem: FileSystem.Cache.closeAll() threw an 
exception:
java.io.IOException: Authentication failed, 
url=http://yjzc5w-2.xyz.com:20101/webhdfs/v1/?op=CANCELDELEGATIONTOKEN&user.name=root&token=HAAEaGRmcwRoZGZzAIoBRp2bNByKAUbBp7gcbBQUD6vWmRYJRv03XZj7Jajf8PU8CB8SV0VCSERGUyBkZWxlZ2F0aW9uC2hhLWhkZnM6bnMx
[root@yjzc5w-1 tmp2]# 
{code}

We can see that the exception is thrown at the end of the client run.

I found that the problem is that at the end of the client run, much like a C++ 
destructor being called at the end of an object's scope, the tokens stored in the 
filesystem cache get cancelled with the following call:

{code}
final class TokenAspect<T extends FileSystem & Renewable> {
  @InterfaceAudience.Private
  public static class TokenManager extends TokenRenewer {
    @Override
    public void cancel(Token<?> token, Configuration conf) throws IOException {
      getInstance(token, conf).cancelDelegationToken(token); <==
    }
{code}
where getInstance(token, conf) creates a FileSystem as user root, then calls 
cancelDelegationToken against the server side. However, the server doesn't have a 
root credential, so it throws this exception.

When I run the same program as user hdfs, then it's fine.

I think if we run the call to cancelDelegationToken as the user who created the 
token initially (hdfs in this case), then it should work fine. However, the 
information about which user created the token is not available at that point. 

Hi [~daryn], I wonder if you could give a quick comment, really appreciate it!



[jira] [Commented] (HDFS-6536) FileSystem.Cache.closeAll() throws authentication exception at the end of a webhdfs client

2014-06-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14031798#comment-14031798
 ] 

Yongjun Zhang commented on HDFS-6536:
-

I guess if we create a credential for user root at the namenode, then the problem 
will go away. But since the Linux root user can run any client program like the one 
I described, the issue I described is still valid. Any comments are welcome.


 FileSystem.Cache.closeAll() throws authentication exception at the end of a 
 webhdfs client
 --

 Key: HDFS-6536
 URL: https://issues.apache.org/jira/browse/HDFS-6536
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang

 With the small client program below, when running as user root, an exception is 
 thrown at the end of the client run. The cluster is HA with security enabled, and 
 the client config has the following setting:
 {code}
   <property>
     <name>fs.defaultFS</name>
     <value>webhdfs://ns1</value>
   </property>
 {code}
 The client program:
 {code}
 public class kclient1 {
   public static void main(String[] args) throws IOException {
     final Configuration conf = new Configuration();
     // a non-root user
     final UserGroupInformation ugi =
         UserGroupInformation.getUGIFromTicketCache("/tmp/krb5cc_496", "h...@xyz.com");
     System.out.println("Starting");
     ugi.doAs(new PrivilegedAction<Object>() {
       @Override
       public Object run() {
         try {
           FileSystem fs = FileSystem.get(conf);
           String renewer = "abcdefg";
           fs.addDelegationTokens(renewer, ugi.getCredentials());
           // Just to prove that we connected with right credentials.
           fs.getFileStatus(new Path("/"));
           return fs.getDelegationToken(renewer);
         } catch (Exception e) {
           e.printStackTrace();
           return null;
         }
       }
     });
     System.out.println("THE END");
   }
 }
 {code}
 Output:
 {code}
 [root@yjzc5w-1 tmp2]# hadoop --config /tmp2/conf jar kclient1.jar 
 kclient1.kclient1
 Starting
 14/06/14 20:38:51 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 14/06/14 20:38:52 INFO web.WebHdfsFileSystem: Retrying connect to namenode: 
 yjzc5w-2.xyz.com/172.26.3.87:20101. Already tried 0 time(s); retry policy is 
 org.apache.hadoop.io.retry.RetryPolicies$FailoverOnNetworkExceptionRetry@1a92210,
  delay 0ms.
 To prove that connection with right credentials
 to get file status updated updated 7
 THE END
 14/06/14 20:38:53 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 14/06/14 20:38:53 WARN security.UserGroupInformation: 
 PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)
 14/06/14 20:38:53 INFO fs.FileSystem: FileSystem.Cache.closeAll() threw an 
 exception:
 java.io.IOException: Authentication failed, 
 url=http://yjzc5w-2.xyz.com:20101/webhdfs/v1/?op=CANCELDELEGATIONTOKEN&user.name=root&token=HAAEaGRmcwRoZGZzAIoBRp2bNByKAUbBp7gcbBQUD6vWmRYJRv03XZj7Jajf8PU8CB8SV0VCSERGUyBkZWxlZ2F0aW9uC2hhLWhkZnM6bnMx
 [root@yjzc5w-1 tmp2]# 
 {code}
 We can see that the exception is thrown at the end of the client run.
 I found that the problem is that at the end of the client run, much like a C++ 
 destructor being called at the end of an object's scope, the tokens stored in the 
 filesystem cache get cancelled with the following call:
 {code}
 final class TokenAspect<T extends FileSystem & Renewable> {
   @InterfaceAudience.Private
   public static class TokenManager extends TokenRenewer {
     @Override
     public void cancel(Token<?> token, Configuration conf) throws IOException {
       getInstance(token, conf).cancelDelegationToken(token); <==
     }
 {code}
 where getInstance(token, conf) creates a FileSystem as user root, then calls 
 cancelDelegationToken against the server side. However, the server doesn't have a 
 root credential, so it throws this exception.
 When I run the same program as user hdfs, then it's fine.
 I think if we run the call to cancelDelegationToken as the user who created 
 the token initially (hdfs in this case), then it should work fine. However, 
 the information about which user created the token is not available at that 
 point. 
 Hi [~daryn], I wonder if you could give a quick comment, really appreciate it!

[jira] [Created] (HDFS-6537) Tests for Crypto filesystem decorating HDFS

2014-06-15 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6537:


 Summary: Tests for Crypto filesystem decorating HDFS
 Key: HDFS-6537
 URL: https://issues.apache.org/jira/browse/HDFS-6537
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)


{{CryptoFileSystem}} targets other filesystems. But currently other built-in 
Hadoop filesystems don't have XAttr support, so this JIRA uses HDFS for testing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6537) Tests for Crypto filesystem decorating HDFS

2014-06-15 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6537:
-

Attachment: HDFS-6537.patch

This test case extends {{CryptoFileSystemTestBase}}.

 Tests for Crypto filesystem decorating HDFS
 ---

 Key: HDFS-6537
 URL: https://issues.apache.org/jira/browse/HDFS-6537
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HDFS-6537.patch


 {{CryptoFileSystem}} targets other filesystems. But currently other built-in 
 Hadoop filesystems don't have XAttr support, so this JIRA uses HDFS for testing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6430) HTTPFS - Implement XAttr support

2014-06-15 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6430:
-

Attachment: HDFS-6430.3.patch

Update the patch for the comments.

{quote}
Looks like there are some whitespace issues going on
{quote}
OK, remove some whitespace

{quote}
HttpFSFileSystem.java
lines 1129 and 1146 - a foreach() style loop might be marginally more clear
Please add javadoc for new functions
{quote}
OK, use foreach() and add javadoc for the two internal methods.

{quote}
FSOperations.java
line 259 - could probably also be replaced by foreach()
{quote}
OK, use foreach()

{quote}
HttpFSParametersProvider.java
line 487 - should this be a constant somewhere?
{quote}
Now, it’s constant.

{quote}
HttpFSServer.java
lines 341, 349, 568, 577 - don't you need some note about the operation that's 
being performed in the audit message?
{quote}
Add simple notes.

{quote}
BaseTestHttpFSWith.java
I like the tests.
Would be nice if you could also test different formats for values other than hex
Some negative tests - like invalid param/value would be nice (I guess I should 
have done this for the acl tests...)
{quote}
Added some negative tests using an invalid name.
The value is byte[], so there are no different formats.

{quote}
EnumSetParam.java
parse() - isn't the code in the for loop just a split(",")? If not, how not?
toString() - wouldn't a foreach() be easier here, too?
{quote}
OK, used split(",").
An iterator is simple there too.
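
To illustrate the two points above (the split(",") in parse() and a simple loop for toString()), a minimal standalone sketch of the idea; this is not the actual EnumSetParam code and the class/method names are made up:

{code}
import java.util.EnumSet;

// Simplified illustration of parsing/printing an EnumSet as a comma-separated
// string; names and structure are hypothetical, not HttpFS's EnumSetParam.
class EnumSetFormat {
  static <E extends Enum<E>> EnumSet<E> parse(String str, Class<E> type) {
    final EnumSet<E> result = EnumSet.noneOf(type);
    if (!str.isEmpty()) {
      for (String name : str.split(",")) {        // the suggested split(",")
        result.add(Enum.valueOf(type, name.trim().toUpperCase()));
      }
    }
    return result;
  }

  static <E extends Enum<E>> String format(EnumSet<E> set) {
    final StringBuilder sb = new StringBuilder();
    for (E e : set) {                             // simple loop for toString()
      if (sb.length() > 0) {
        sb.append(',');
      }
      sb.append(e.toString());
    }
    return sb.toString();
  }
}
{code}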

 HTTPFS - Implement XAttr support
 

 Key: HDFS-6430
 URL: https://issues.apache.org/jira/browse/HDFS-6430
 Project: Hadoop HDFS
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-6430.1.patch, HDFS-6430.2.patch, HDFS-6430.3.patch, 
 HDFS-6430.patch


 Add xattr support to HttpFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6375) Listing extended attributes with the search permission

2014-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14031905#comment-14031905
 ] 

Hudson commented on HDFS-6375:
--

FAILURE: Integrated in Hadoop-trunk-Commit #5707 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5707/])
Moved CHANGES.txt entries of MAPREDUCE-5898, MAPREDUCE-5920, HDFS-6464, 
HDFS-6375 from trunk to 2.5 section on merging HDFS-2006 to branch-2 
(umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1602699)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt


 Listing extended attributes with the search permission
 --

 Key: HDFS-6375
 URL: https://issues.apache.org/jira/browse/HDFS-6375
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Charles Lamb
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6375.1.patch, HDFS-6375.10.patch, 
 HDFS-6375.11.patch, HDFS-6375.13.patch, HDFS-6375.2.patch, HDFS-6375.3.patch, 
 HDFS-6375.4.patch, HDFS-6375.5.patch, HDFS-6375.6.patch, HDFS-6375.7.patch, 
 HDFS-6375.8.patch, HDFS-6375.9.patch


 From the attr(5) manpage:
 {noformat}
Users with search access to a file or directory may retrieve a list  of
attribute names defined for that file or directory.
 {noformat}
 This is like doing {{getfattr}} without the {{-d}} flag, which we currently 
 don't support.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-2006) ability to support storing extended attributes per file

2014-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14031902#comment-14031902
 ] 

Hudson commented on HDFS-2006:
--

FAILURE: Integrated in Hadoop-trunk-Commit #5707 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5707/])
Moved CHANGES.txt entries of MAPREDUCE-5898, MAPREDUCE-5920, HDFS-6464, 
HDFS-6375 from trunk to 2.5 section on merging HDFS-2006 to branch-2 
(umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1602699)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt


 ability to support storing extended attributes per file
 ---

 Key: HDFS-2006
 URL: https://issues.apache.org/jira/browse/HDFS-2006
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: dhruba borthakur
Assignee: Yi Liu
 Fix For: 3.0.0, 2.5.0

 Attachments: ExtendedAttributes.html, HDFS-2006-Branch-2-Merge.patch, 
 HDFS-2006-Merge-1.patch, HDFS-2006-Merge-2.patch, HDFS-XAttrs-Design-1.pdf, 
 HDFS-XAttrs-Design-2.pdf, HDFS-XAttrs-Design-3.pdf, 
 Test-Plan-for-Extended-Attributes-1.pdf, xattrs.1.patch, xattrs.patch


 It would be nice if HDFS provides a feature to store extended attributes for 
 files, similar to the one described here: 
 http://en.wikipedia.org/wiki/Extended_file_attributes. 
 The challenge is that it has to be done in such a way that a site not using 
 this feature does not waste precious memory resources in the namenode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6464) Support multiple xattr.name parameters for WebHDFS getXAttrs.

2014-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14031904#comment-14031904
 ] 

Hudson commented on HDFS-6464:
--

FAILURE: Integrated in Hadoop-trunk-Commit #5707 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5707/])
Moved CHANGES.txt entries of MAPREDUCE-5898, MAPREDUCE-5920, HDFS-6464, 
HDFS-6375 from trunk to 2.5 section on merging HDFS-2006 to branch-2 
(umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1602699)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt


 Support multiple xattr.name parameters for WebHDFS getXAttrs.
 -

 Key: HDFS-6464
 URL: https://issues.apache.org/jira/browse/HDFS-6464
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6464.1.patch, HDFS-6464.patch


 For WebHDFS getXAttrs through names, right now the entire list is passed to 
 the client side and then filtered, which is not the best choice since it's 
 inefficient and precludes us from doing server-side smarts on par with the 
 Java APIs. 
 Furthermore, if some xattrs don't exist, the server side should return an error.
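 For illustration, a request of the kind this change targets passes each requested 
 name as its own xattr.name parameter, for example (the parameter names here follow 
 the WebHDFS xattr conventions and are shown only as an assumed example):
 {code}
 http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETXATTRS&xattr.name=user.a1&xattr.name=user.a2&encoding=hex
 {code}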



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6430) HTTPFS - Implement XAttr support

2014-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14031987#comment-14031987
 ] 

Hadoop QA commented on HDFS-6430:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12650486/HDFS-6430.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7123//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7123//console

This message is automatically generated.

 HTTPFS - Implement XAttr support
 

 Key: HDFS-6430
 URL: https://issues.apache.org/jira/browse/HDFS-6430
 Project: Hadoop HDFS
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-6430.1.patch, HDFS-6430.2.patch, HDFS-6430.3.patch, 
 HDFS-6430.patch


 Add xattr support to HttpFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6527) Edit log corruption due to defered INode removal

2014-06-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14032063#comment-14032063
 ] 

Jing Zhao commented on HDFS-6527:
-

The v3 may not work when the file is contained in a snapshot. The new unit test 
can fail if we create a snapshot on root after the file creation:
{code}
  FSDataOutputStream out = null;
  out = fs.create(filePath);
  SnapshotTestHelper.createSnapshot(fs, new Path("/"), "s1");
  Thread deleteThread = new DeleteThread(fs, filePath, true);
{code}

Instead of the changes made in v3 patch, I guess the v2 patch may work with the 
following change:
{code}
@@ -3018,6 +3036,13 @@ private INodeFile checkLease(String src, String holder, INode inode,
              + (lease != null ? lease.toString()
                  : "Holder " + holder + " does not have any open files."));
     }
+    // If parent is not there or we mark the file as deleted in its snapshot
+    // feature, it must have been deleted.
+    if (file.getParent() == null
+        || (file.isWithSnapshot() && file.getFileWithSnapshotFeature()
+            .isCurrentFileDeleted())) {
+      throw new FileNotFoundException(src);
+    }
     String clientName = file.getFileUnderConstructionFeature().getClientName();
     if (holder != null && !clientName.equals(holder)) {
       throw new LeaseExpiredException("Lease mismatch on " + ident +
{code}

 Edit log corruption due to defered INode removal
 

 Key: HDFS-6527
 URL: https://issues.apache.org/jira/browse/HDFS-6527
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Blocker
 Attachments: HDFS-6527.branch-2.4.patch, HDFS-6527.trunk.patch, 
 HDFS-6527.v2.patch, HDFS-6527.v3.patch


 We have seen a SBN crashing with the following error:
 {panel}
 \[Edit log tailer\] ERROR namenode.FSEditLogLoader:
 Encountered exception on operation AddBlockOp
 [path=/xxx,
 penultimateBlock=NULL, lastBlock=blk_111_111, RpcClientId=,
 RpcCallId=-2]
 java.io.FileNotFoundException: File does not exist: /xxx
 {panel}
 This was caused by the deferred removal of deleted inodes from the inode map. 
 Since getAdditionalBlock() acquires FSN read lock and then write lock, a 
 deletion can happen in between. Because of deferred inode removal outside FSN 
 write lock, getAdditionalBlock() can get the deleted inode from the inode map 
 with FSN write lock held. This allows addition of a block to a deleted file.
 As a result, the edit log will contain OP_ADD, OP_DELETE, followed by
  OP_ADD_BLOCK.  This cannot be replayed by NN, so NN doesn't start up or SBN 
 crashes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6527) Edit log corruption due to defered INode removal

2014-06-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14032066#comment-14032066
 ] 

Jing Zhao commented on HDFS-6527:
-

Besides, is it possible that we can hide the sleepForTesting part inside a 
customized BlockPlacementPolicy? The customized BlockPlacementPolicy 
implementation would use the same policy as BlockPlacementPolicyDefault, but make 
the thread sleep 1s before returning. In this way we would not need to inject 
testing code into FSNamesystem.

 Edit log corruption due to defered INode removal
 

 Key: HDFS-6527
 URL: https://issues.apache.org/jira/browse/HDFS-6527
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Blocker
 Attachments: HDFS-6527.branch-2.4.patch, HDFS-6527.trunk.patch, 
 HDFS-6527.v2.patch, HDFS-6527.v3.patch


 We have seen a SBN crashing with the following error:
 {panel}
 \[Edit log tailer\] ERROR namenode.FSEditLogLoader:
 Encountered exception on operation AddBlockOp
 [path=/xxx,
 penultimateBlock=NULL, lastBlock=blk_111_111, RpcClientId=,
 RpcCallId=-2]
 java.io.FileNotFoundException: File does not exist: /xxx
 {panel}
 This was caused by the deferred removal of deleted inodes from the inode map. 
 Since getAdditionalBlock() acquires FSN read lock and then write lock, a 
 deletion can happen in between. Because of deferred inode removal outside FSN 
 write lock, getAdditionalBlock() can get the deleted inode from the inode map 
 with FSN write lock held. This allows addition of a block to a deleted file.
 As a result, the edit log will contain OP_ADD, OP_DELETE, followed by
  OP_ADD_BLOCK.  This cannot be replayed by NN, so NN doesn't start up or SBN 
 crashes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException

2014-06-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14032067#comment-14032067
 ] 

Jing Zhao commented on HDFS-6475:
-

Thanks for the fix, [~yzhangal]! The patch looks good to me. My only concern is 
whether we can put the getTrueCause method into some Util class, so that we do 
not need to call a method defined in ipc.Server from a class only used by 
webhdfs. I guess we can even move it into the InvalidToken class?
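
For context, a rough sketch of the kind of static helper being discussed, assuming it simply walks the cause chain; the real getTrueCause in ipc.Server may do more than this, so treat it purely as an illustration of why it could live in a small util class or on InvalidToken itself:

{code}
// Hypothetical stand-in for the helper under discussion: unwrap nested causes
// (e.g. a StandbyException buried inside an InvalidToken/SecurityException).
static Throwable getTrueCause(Throwable e) {
  Throwable cause = e;
  while (cause.getCause() != null) {
    cause = cause.getCause();
  }
  return cause;
}
{code}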

 WebHdfs clients fail without retry because incorrect handling of 
 StandbyException
 -

 Key: HDFS-6475
 URL: https://issues.apache.org/jira/browse/HDFS-6475
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, 
 HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch


 With WebHdfs clients connected to a HA HDFS service, the delegation token is 
 previously initialized with the active NN.
 When clients try to issue a request, the NN it contacts is stored in a map 
 returned by DFSUtil.getNNServiceRpcAddresses(conf). And the client contacts 
 the NNs based on the order, so likely the first one it runs into is the StandbyNN. 
 If the StandbyNN doesn't have the updated client credential, it will throw a 
 SecurityException that wraps StandbyException.
 The client is expected to retry another NN, but due to the insufficient 
 handling of SecurityException mentioned above, it fails.
 Example message:
 {code}
 {RemoteException={message=Failed to obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: 
 StandbyException, javaCl
 assName=java.lang.SecurityException, exception=SecurityException}}
 org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to 
 obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696)
 at kclient1.kclient$1.run(kclient.java:64)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:356)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
 at kclient1.kclient.main(kclient.java:58)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6538) Element comment format error in org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry

2014-06-15 Thread debugging (JIRA)
debugging created HDFS-6538:
---

 Summary: Element comment format error in 
org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry
 Key: HDFS-6538
 URL: https://issues.apache.org/jira/browse/HDFS-6538
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: debugging
Priority: Trivial


The element comment for javadoc should be started by /**, but it starts with 
only /* for class ShortCircuitRegistry.
So I think there is a * Omitted. 
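
For illustration, the difference in question (assuming the class-level comment on ShortCircuitRegistry is meant to be Javadoc):

{code}
/* A plain block comment: the javadoc tool ignores it. */
class NotDocumented {}

/** A Javadoc comment: the javadoc tool picks it up. */
class Documented {}
{code}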



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6538) Element comment format error in org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry

2014-06-15 Thread debugging (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

debugging updated HDFS-6538:


Description: 
The element comment for javadoc should be started by /*/*, but it starts with 
only /* for class ShortCircuitRegistry.
So I think there is a * Omitted. 

  was:
The element comment for javadoc should be started by /**, but it starts with 
only /* for class ShortCircuitRegistry.
So I think there is a * Omitted. 


 Element comment format error in 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry
 ---

 Key: HDFS-6538
 URL: https://issues.apache.org/jira/browse/HDFS-6538
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: debugging
Priority: Trivial
  Labels: documentation
   Original Estimate: 1h
  Remaining Estimate: 1h

 The element comment for javadoc should be started by /*/*, but it starts with 
 only /* for class ShortCircuitRegistry.
 So I think there is a * Omitted. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6538) Element comment format error in org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry

2014-06-15 Thread debugging (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

debugging updated HDFS-6538:


Description: 
The element comment for javadoc should be started by ///*/*, but it starts with 
only /* for class ShortCircuitRegistry.
So I think there is a * Omitted. 

  was:
The element comment for javadoc should be started by /*/*, but it starts with 
only /* for class ShortCircuitRegistry.
So I think there is a * Omitted. 


 Element comment format error in 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry
 ---

 Key: HDFS-6538
 URL: https://issues.apache.org/jira/browse/HDFS-6538
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: debugging
Priority: Trivial
  Labels: documentation
   Original Estimate: 1h
  Remaining Estimate: 1h

 The element comment for javadoc should be started by ///*/*, but it starts 
 with only /* for class ShortCircuitRegistry.
 So I think there is a * Omitted. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6538) Element comment format error in org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry

2014-06-15 Thread debugging (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

debugging updated HDFS-6538:


Description: 
The element comment for javadoc should be started by /**, but it starts with 
only /* for class ShortCircuitRegistry.
So I think there is a * Omitted. 

  was:
The element comment for javadoc should be started by ///*/*, but it starts with 
only /* for class ShortCircuitRegistry.
So I think there is a * Omitted. 


 Element comment format error in 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry
 ---

 Key: HDFS-6538
 URL: https://issues.apache.org/jira/browse/HDFS-6538
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: debugging
Priority: Trivial
  Labels: documentation
   Original Estimate: 1h
  Remaining Estimate: 1h

 The element comment for javadoc should be started by /**, but it starts 
 with only /* for class ShortCircuitRegistry.
 So I think there is a * Omitted. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6538) Element comment format error in org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry

2014-06-15 Thread debugging (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

debugging updated HDFS-6538:


Description: 
The element comment for javadoc should be started by {noformat}/**{noformat}, 
but it starts with only {noformat}/*{noformat} for class ShortCircuitRegistry.
So I think there is a {noformat}*{noformat} Omitted. 

  was:
The element comment for javadoc should be started by /**, but it starts with 
only /* for class ShortCircuitRegistry.
So I think there is a * Omitted. 


 Element comment format error in 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry
 ---

 Key: HDFS-6538
 URL: https://issues.apache.org/jira/browse/HDFS-6538
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: debugging
Priority: Trivial
  Labels: documentation
   Original Estimate: 1h
  Remaining Estimate: 1h

 The element comment for javadoc should be started by {noformat}/**{noformat}, 
 but it starts with only {noformat}/*{noformat} for class ShortCircuitRegistry.
 So I think there is a {noformat}*{noformat} Omitted. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6382) HDFS File/Directory TTL

2014-06-15 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14032082#comment-14032082
 ] 

Zesheng Wu commented on HDFS-6382:
--

[~ste...@apache.org] Thanks for your feedback.
We have discussed whether to use an MR job or a standalone daemon, and most 
people upstream have come to an agreement that a standalone daemon is reasonable 
and acceptable. You can go through the earlier discussion.  

[~aw] Thanks for your feedback.
Your suggestion is really valuable and firms up our confidence to implement it as 
a standalone daemon.

 HDFS File/Directory TTL
 ---

 Key: HDFS-6382
 URL: https://issues.apache.org/jira/browse/HDFS-6382
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode
Affects Versions: 2.4.0
Reporter: Zesheng Wu
Assignee: Zesheng Wu
 Attachments: HDFS-TTL-Design -2.pdf, HDFS-TTL-Design.pdf


 In a production environment, we often have a scenario like this: we want to 
 back up files on HDFS for some time and then delete these files 
 automatically. For example, we keep only 1 day's logs on local disk due to 
 limited disk space, but we need to keep about 1 month's logs in order to 
 debug program bugs, so we keep all the logs on hdfs and delete logs which are 
 older than 1 month. This is a typical scenario of HDFS TTL. So here we 
 propose that hdfs can support TTL.
 Following are some details of this proposal:
 1. HDFS can support TTL on a specified file or directory
 2. If a TTL is set on a file, the file will be deleted automatically after 
 the TTL is expired
 3. If a TTL is set on a directory, the child files and directories will be 
 deleted automatically after the TTL is expired
 4. The child file/directory's TTL configuration should override its parent 
 directory's
 5. A global configuration is needed to configure that whether the deleted 
 files/directories should go to the trash or not
 6. A global configuration is needed to configure that whether a directory 
 with TTL should be deleted when it is emptied by TTL mechanism or not.
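 As a rough illustration of the standalone-scan idea only (this sketch is not from 
 the attached design documents; the fixed TTL value, single-directory scope, and 
 lack of trash handling are all assumptions):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // Minimal sketch: delete files under one directory once they are older than a
 // fixed TTL. A real daemon would read per-path TTLs, honor trash settings, etc.
 public class TtlScanSketch {
   public static void main(String[] args) throws Exception {
     final long ttlMillis = 30L * 24 * 60 * 60 * 1000;   // assumed: ~1 month
     final Path dir = new Path(args[0]);
     final FileSystem fs = FileSystem.get(new Configuration());
     final long now = System.currentTimeMillis();
     for (FileStatus st : fs.listStatus(dir)) {
       if (st.isFile() && now - st.getModificationTime() > ttlMillis) {
         fs.delete(st.getPath(), false);                 // no trash in this sketch
       }
     }
   }
 }
 {code}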



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6539) test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml

2014-06-15 Thread Binglin Chang (JIRA)
Binglin Chang created HDFS-6539:
---

 Summary: test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml
 Key: HDFS-6539
 URL: https://issues.apache.org/jira/browse/HDFS-6539
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6539) test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml

2014-06-15 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang reassigned HDFS-6539:
---

Assignee: Binglin Chang

 test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml
 --

 Key: HDFS-6539
 URL: https://issues.apache.org/jira/browse/HDFS-6539
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6539) test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml

2014-06-15 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-6539:


Status: Patch Available  (was: Open)

 test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml
 --

 Key: HDFS-6539
 URL: https://issues.apache.org/jira/browse/HDFS-6539
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6539) test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml

2014-06-15 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-6539:


Attachment: HDFS-6539.v1.patch

Hi [~cmccabe],
It looks like the patch in HADOOP-8480 has a little error: test_native_mini_dfs 
was changed to test_libhdfs_threaded, so test_libhdfs_threaded is tested 
twice, but test_native_mini_dfs is skipped.

 test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml
 --

 Key: HDFS-6539
 URL: https://issues.apache.org/jira/browse/HDFS-6539
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
 Attachments: HDFS-6539.v1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException

2014-06-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14032096#comment-14032096
 ] 

Yongjun Zhang commented on HDFS-6475:
-

Hi [~jingzhao], 

Thanks a lot for the review and the suggestion! I agree InvalidToken is indeed 
a good place to hold this util method. I made this change and uploaded patch 
005.


 WebHdfs clients fail without retry because incorrect handling of 
 StandbyException
 -

 Key: HDFS-6475
 URL: https://issues.apache.org/jira/browse/HDFS-6475
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, 
 HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch, 
 HDFS-6475.005.patch


 With WebHdfs clients connected to a HA HDFS service, the delegation token is 
 previously initialized with the active NN.
 When clients try to issue a request, the NN it contacts is stored in a map 
 returned by DFSUtil.getNNServiceRpcAddresses(conf). And the client contacts 
 the NNs based on the order, so likely the first one it runs into is the StandbyNN. 
 If the StandbyNN doesn't have the updated client credential, it will throw a 
 SecurityException that wraps StandbyException.
 The client is expected to retry another NN, but due to the insufficient 
 handling of SecurityException mentioned above, it fails.
 Example message:
 {code}
 {RemoteException={message=Failed to obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: 
 StandbyException, javaCl
 assName=java.lang.SecurityException, exception=SecurityException}}
 org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to 
 obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696)
 at kclient1.kclient$1.run(kclient.java:64)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:356)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
 at kclient1.kclient.main(kclient.java:58)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException

2014-06-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6475:


Attachment: HDFS-6475.005.patch

 WebHdfs clients fail without retry because incorrect handling of 
 StandbyException
 -

 Key: HDFS-6475
 URL: https://issues.apache.org/jira/browse/HDFS-6475
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, 
 HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch, 
 HDFS-6475.005.patch


 With WebHdfs clients connected to a HA HDFS service, the delegation token is 
 previously initialized with the active NN.
 When clients try to issue a request, the NN it contacts is stored in a map 
 returned by DFSUtil.getNNServiceRpcAddresses(conf). And the client contacts 
 the NNs based on the order, so likely the first one it runs into is the StandbyNN. 
 If the StandbyNN doesn't have the updated client credential, it will throw a 
 SecurityException that wraps StandbyException.
 The client is expected to retry another NN, but due to the insufficient 
 handling of SecurityException mentioned above, it fails.
 Example message:
 {code}
 {RemoteException={message=Failed to obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: 
 StandbyException, javaCl
 assName=java.lang.SecurityException, exception=SecurityException}}
 org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to 
 obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696)
 at kclient1.kclient$1.run(kclient.java:64)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:356)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
 at kclient1.kclient.main(kclient.java:58)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6535) HDFS quota update is wrong when file is appended

2014-06-15 Thread George Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Wong updated HDFS-6535:
--

Attachment: HDFS-6535.patch

Uploaded the patch to fix this issue. 
The patch works against the current trunk.

 HDFS quota update is wrong when file is appended
 

 Key: HDFS-6535
 URL: https://issues.apache.org/jira/browse/HDFS-6535
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: George Wong
 Attachments: HDFS-6535.patch, TestHDFSQuota.java


 When a file in a directory with the quota feature is appended to, the cached disk 
 consumption should be updated. 
 But currently, the update is wrong.
 Use the uploaded UT to reproduce this bug.
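 As a hedged illustration only (this is not the attached TestHDFSQuota.java; the 
 quota values, file sizes, and paths below are made up), a reproduction along these 
 lines could look like:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.MiniDFSCluster;

 public class QuotaAppendSketch {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
     try {
       DistributedFileSystem fs = cluster.getFileSystem();
       Path dir = new Path("/quotaDir");
       fs.mkdirs(dir);
       // Enable quota accounting on the directory (generous made-up limits).
       fs.setQuota(dir, 100, 10L * 1024 * 1024 * 1024);
       Path file = new Path(dir, "f");
       FSDataOutputStream out = fs.create(file);
       out.write(new byte[1024]);
       out.close();
       long before = fs.getContentSummary(dir).getSpaceConsumed();
       out = fs.append(file);                // append more data to the file
       out.write(new byte[1024]);
       out.close();
       long after = fs.getContentSummary(dir).getSpaceConsumed();
       // Per this report, the cached consumption is not updated correctly on
       // append, so `after` does not reflect the appended bytes as expected.
       System.out.println("before=" + before + ", after=" + after);
     } finally {
       cluster.shutdown();
     }
   }
 }
 {code}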



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6539) test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml

2014-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14032129#comment-14032129
 ] 

Hadoop QA commented on HDFS-6539:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12650504/HDFS-6539.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7124//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7124//console

This message is automatically generated.

 test_native_mini_dfs is skipped in hadoop-hdfs/pom.xml
 --

 Key: HDFS-6539
 URL: https://issues.apache.org/jira/browse/HDFS-6539
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
 Attachments: HDFS-6539.v1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)