[jira] [Updated] (HDFS-5492) Port HDFS-2069 (Incorrect default trash interval in the docs) to trunk

2013-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5492:


Status: Open  (was: Patch Available)

> Port HDFS-2069 (Incorrect default trash interval in the docs) to trunk
> --
>
> Key: HDFS-5492
> URL: https://issues.apache.org/jira/browse/HDFS-5492
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: documentation, newbie
> Attachments: HDFS-5492.patch
>
>
> HDFS-2069 is not ported to current document.
> The description of HDFS-2069 is as follows:
> {quote}
> Current HDFS architecture information about Trash is incorrectly documented 
> as -
> The current default policy is to delete files from /trash that are more than 
> 6 hours old. In the future, this policy will be configurable through a well 
> defined interface.
> It should be something like -
> The current default trash interval is set to 0 (files are deleted without being 
> stored in trash). This value is a configurable parameter, fs.trash.interval, 
> stored in core-site.xml.
> {quote}
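As a rough illustration of the setting the corrected text refers to, a core-site.xml entry enabling trash might look like the following (the 1440-minute value is only an example; the default of 0 disables trash):

{code}
<!-- Illustrative core-site.xml snippet: fs.trash.interval is specified in minutes;
     0 (the default) means files are deleted immediately instead of going to trash. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
{code}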



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5492) Port HDFS-2069 (Incorrect default trash interval in the docs) to trunk

2013-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5492:


Status: Patch Available  (was: Open)

Resubmitting to trigger Jenkins.

> Port HDFS-2069 (Incorrect default trash interval in the docs) to trunk
> --
>
> Key: HDFS-5492
> URL: https://issues.apache.org/jira/browse/HDFS-5492
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: documentation, newbie
> Attachments: HDFS-5492.patch
>
>
> HDFS-2069 is not ported to current document.
> The description of HDFS-2069 is as follows:
> {quote}
> Current HDFS architecture information about Trash is incorrectly documented 
> as -
> The current default policy is to delete files from /trash that are more than 
> 6 hours old. In the future, this policy will be configurable through a well 
> defined interface.
> It should be something like -
> The current default trash interval is set to 0 (files are deleted without being 
> stored in trash). This value is a configurable parameter, fs.trash.interval, 
> stored in core-site.xml.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5513) CacheAdmin commands fail when using . as the path

2013-11-13 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-5513:
-

 Summary: CacheAdmin commands fail when using . as the path
 Key: HDFS-5513
 URL: https://issues.apache.org/jira/browse/HDFS-5513
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Stephen Chu
Priority: Minor


The hdfs CLI commands generally accept "." as a path argument.
e.g.
{code}
hdfs dfs -rm .
hdfs dfsadmin -allowSnapshot .
{code}

I don't think it's very common to use the path ".", but the CacheAdmin commands 
fail, saying that a Path cannot be created from an empty string.

{code}
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path .
Exception in thread "main" java.lang.IllegalArgumentException: Can not create a 
Path from an empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
at org.apache.hadoop.fs.Path.<init>(Path.java:184)
at 
org.apache.hadoop.hdfs.protocol.PathBasedCacheDirective$Builder.<init>(PathBasedCacheDirective.java:66)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:365)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path . -pool schu
Exception in thread "main" java.lang.IllegalArgumentException: Can not create a 
Path from an empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
at org.apache.hadoop.fs.Path.<init>(Path.java:184)
at 
org.apache.hadoop.hdfs.protocol.PathBasedCacheDirective$Builder.<init>(PathBasedCacheDirective.java:66)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.addPathBasedCacheDirective(DistributedFileSystem.java:1598)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$AddPathBasedCacheDirectiveCommand.run(CacheAdmin.java:180)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
{code}
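One possible direction, sketched only under the assumption that the directive builder should receive an absolute path: qualify the CLI argument against the caller's working directory before building the directive. Here {{dfs}} and {{pathArg}} are placeholders for what CacheAdmin already has in scope.

{code}
// Sketch of a possible normalization step (not the actual fix):
// turn "." into a fully qualified path before it reaches the directive builder.
Path p = new Path(pathArg);        // "." is a legal relative org.apache.hadoop.fs.Path
p = dfs.makeQualified(p);          // resolves against the user's working directory,
                                   // e.g. hdfs://namenode:8020/user/schu
// ... then hand "p" (never an empty string) to the directive builder
{code}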




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822165#comment-13822165
 ] 

Hadoop QA commented on HDFS-5506:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613771/HDFS-5506.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5432//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5432//console

This message is automatically generated.

> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch, HDFS-5506.001.patch, 
> HDFS-5506.002.patch, HDFS-5506.003.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation tokens from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates and 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5489) Use TokenAspect in WebHDFS

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822161#comment-13822161
 ] 

Hadoop QA commented on HDFS-5489:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613766/HDFS-5489.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5430//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5430//console

This message is automatically generated.

> Use TokenAspect in WebHDFS
> --
>
> Key: HDFS-5489
> URL: https://issues.apache.org/jira/browse/HDFS-5489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5489.000.patch, HDFS-5489.001.patch, 
> HDFS-5489.002.patch
>
>
> HDFS-5440 provides TokenAspect for both HftpFileSystem and WebHdfsFileSystem 
> to handle the delegation tokens. This jira refactors WebHdfsFileSystem to use 
> TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5506:
-

Attachment: HDFS-5506.003.patch

> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch, HDFS-5506.001.patch, 
> HDFS-5506.002.patch, HDFS-5506.003.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation tokens from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates and 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822122#comment-13822122
 ] 

Hadoop QA commented on HDFS-5506:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613768/HDFS-5506.002.patch
  against trunk revision .

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5431//console

This message is automatically generated.

> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch, HDFS-5506.001.patch, 
> HDFS-5506.002.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation tokens from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates and 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5506:
-

Attachment: HDFS-5506.002.patch

> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch, HDFS-5506.001.patch, 
> HDFS-5506.002.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation tokens from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates and 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5393) Serve bootstrap and jQuery locally

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822108#comment-13822108
 ] 

Hadoop QA commented on HDFS-5393:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613739/HDFS-5393.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5428//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5428//console

This message is automatically generated.

> Serve bootstrap and jQuery locally
> --
>
> Key: HDFS-5393
> URL: https://issues.apache.org/jira/browse/HDFS-5393
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-5393-static.tar.bz2, HDFS-5393.000.patch, 
> HDFS-5393.001.patch, HDFS-5393.002.patch, HDFS-5393.002.patch.gz
>
>
> Currently the web UI depends on Bootstrap and jQuery served from a CDN. These 
> libraries should be served locally so that the web UI still works when the 
> cluster is not connected to the Internet.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5489) Use TokenAspect in WebHDFS

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5489:
-

Attachment: HDFS-5489.002.patch

> Use TokenAspect in WebHDFS
> --
>
> Key: HDFS-5489
> URL: https://issues.apache.org/jira/browse/HDFS-5489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5489.000.patch, HDFS-5489.001.patch, 
> HDFS-5489.002.patch
>
>
> HDFS-5440 provides TokenAspect for both HftpFileSystem and WebHdfsFileSystem 
> to handle the delegation tokens. This jira refactors WebHdfsFileSystem to use 
> TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822105#comment-13822105
 ] 

Jing Zhao commented on HDFS-5506:
-

The patch looks good in general. Some comments:
# DelegationTokenFetcher#getDTfromRemote
{code}
  StringBuffer buf = new StringBuffer(nnUri.toString())
.append(GetDelegationTokenServlet.PATH_SPEC);
{code}
Let's just use StringBuilder here.
# This code will be used by HDFS-5502 I guess? Let's merge it there.
{code}
token.setKind(https ? HsftpFileSystem.TOKEN_KIND
: HftpFileSystem.TOKEN_KIND);
{code}
# The following code needs to be updated, since the run(...) method is used for 
more than cancelling tokens. Also, "failed" is not updated in the catch section 
(a sketch follows the snippet below).
{code}
boolean failed = false;
try {
  conn = (HttpURLConnection) factory.openConnection(url, true);
  if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
throw new IOException("Error cancelling token: "
+ conn.getResponseMessage());
  }
} catch (IOException ie) {
  LOG.info("error in cancel over HTTP", ie);
{code}
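To make the third point concrete, a rough sketch of what the catch block could look like (variable names follow the snippet above; the message wording is only illustrative):

{code}
boolean failed = false;
try {
  conn = (HttpURLConnection) factory.openConnection(url, true);
  if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
    // the message should no longer assume the request was a cancellation
    throw new IOException("Error when dealing with token: "
        + conn.getResponseMessage());
  }
} catch (IOException ie) {
  LOG.info("Error in token operation over HTTP", ie);
  failed = true;   // record the failure so the rest of run(...) can react to it
}
{code}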


> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch, HDFS-5506.001.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation tokens from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates and 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5487) Introduce unit test for TokenAspect

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822090#comment-13822090
 ] 

Hudson commented on HDFS-5487:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4733 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4733/])
HDFS-5487. Introduce unit test for TokenAspect. Contributed by Haohui Mai. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541776)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestTokenAspect.java


> Introduce unit test for TokenAspect
> ---
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch, 
> HDFS-5487.002.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. This jira merges the 
> token-related unit tests from hftp / hsftp / webhdfs into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5504) In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, leads to NN safemode.

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822089#comment-13822089
 ] 

Hudson commented on HDFS-5504:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4733 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4733/])
HDFS-5504. In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode 
threshold, leads to NN safemode. Contributed by Vinay. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541773)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java


> In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, 
> leads to NN safemode.
> 
>
> Key: HDFS-5504
> URL: https://issues.apache.org/jira/browse/HDFS-5504
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Fix For: 2.3.0
>
> Attachments: HDFS-5504.patch, HDFS-5504.patch
>
>
> 1. HA installation; the standby NN is down.
> 2. Delete snapshot is called, and it deletes the blocks from the blocks map and 
> all datanodes. The log sync also happened.
> 3. Before the next log roll, the NN crashed.
> 4. When the namenode restarts, it loads the fsimage and the finalized edits from 
> shared storage and sets the safemode threshold, which also includes the blocks 
> from the deleted snapshot (that edit has not been read yet, because the namenode 
> was restarted before the last edits segment was finalized).
> 5. When it becomes active, it finalizes the edits and reads the delete-snapshot 
> edit op, but at that point it does not reduce the safemode count, so the NN 
> keeps waiting in safemode.
> 6. On the next restart, since the edits are already finalized, startup reads 
> them and sets the safemode threshold correctly.
> So one more restart brings the NN out of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5489) Use TokenAspect in WebHDFS

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822087#comment-13822087
 ] 

Hadoop QA commented on HDFS-5489:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613757/HDFS-5489.001.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5429//console

This message is automatically generated.

> Use TokenAspect in WebHDFS
> --
>
> Key: HDFS-5489
> URL: https://issues.apache.org/jira/browse/HDFS-5489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5489.000.patch, HDFS-5489.001.patch
>
>
> HDFS-5440 provides TokenAspect for both HftpFileSystem and WebHdfsFileSystem 
> to handle the delegation tokens. This jira refactors WebHdfsFileSystem to use 
> TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822081#comment-13822081
 ] 

Hadoop QA commented on HDFS-5506:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613709/HDFS-5506.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5427//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5427//console

This message is automatically generated.

> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch, HDFS-5506.001.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation tokens from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates and 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5489) Use TokenAspect in WebHDFS

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5489:
-

Status: Patch Available  (was: Open)

> Use TokenAspect in WebHDFS
> --
>
> Key: HDFS-5489
> URL: https://issues.apache.org/jira/browse/HDFS-5489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5489.000.patch, HDFS-5489.001.patch
>
>
> HDFS-5440 provides TokenAspect for both HftpFileSystem and WebHdfsFileSystem 
> to handle the delegation tokens. This jira refactors WebHdfsFileSystem to use 
> TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5489) Use TokenAspect in WebHDFS

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5489:
-

Attachment: HDFS-5489.001.patch

> Use TokenAspect in WebHDFS
> --
>
> Key: HDFS-5489
> URL: https://issues.apache.org/jira/browse/HDFS-5489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5489.000.patch, HDFS-5489.001.patch
>
>
> HDFS-5440 provides TokenAspect for both HftpFileSystem and WebHdfsFileSystem 
> to handle the delegation tokens. This jira refactors WebHdfsFileSystem to use 
> TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5487) Introduce unit test for TokenAspect

2013-11-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5487:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2.

> Introduce unit test for TokenAspect
> ---
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch, 
> HDFS-5487.002.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. This jira merges the 
> token-related unit tests from hftp / hsftp / webhdfs into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5504) In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, leads to NN safemode.

2013-11-13 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822058#comment-13822058
 ] 

Vinay commented on HDFS-5504:
-

Thanks Jing for the review and commit

> In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, 
> leads to NN safemode.
> 
>
> Key: HDFS-5504
> URL: https://issues.apache.org/jira/browse/HDFS-5504
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Fix For: 2.3.0
>
> Attachments: HDFS-5504.patch, HDFS-5504.patch
>
>
> 1. HA installation; the standby NN is down.
> 2. Delete snapshot is called, and it deletes the blocks from the blocks map and 
> all datanodes. The log sync also happened.
> 3. Before the next log roll, the NN crashed.
> 4. When the namenode restarts, it loads the fsimage and the finalized edits from 
> shared storage and sets the safemode threshold, which also includes the blocks 
> from the deleted snapshot (that edit has not been read yet, because the namenode 
> was restarted before the last edits segment was finalized).
> 5. When it becomes active, it finalizes the edits and reads the delete-snapshot 
> edit op, but at that point it does not reduce the safemode count, so the NN 
> keeps waiting in safemode.
> 6. On the next restart, since the edits are already finalized, startup reads 
> them and sets the safemode threshold correctly.
> So one more restart brings the NN out of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5504) In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, leads to NN safemode.

2013-11-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5504:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks Vinay!

> In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, 
> leads to NN safemode.
> 
>
> Key: HDFS-5504
> URL: https://issues.apache.org/jira/browse/HDFS-5504
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Fix For: 2.3.0
>
> Attachments: HDFS-5504.patch, HDFS-5504.patch
>
>
> 1. HA installation; the standby NN is down.
> 2. Delete snapshot is called, and it deletes the blocks from the blocks map and 
> all datanodes. The log sync also happened.
> 3. Before the next log roll, the NN crashed.
> 4. When the namenode restarts, it loads the fsimage and the finalized edits from 
> shared storage and sets the safemode threshold, which also includes the blocks 
> from the deleted snapshot (that edit has not been read yet, because the namenode 
> was restarted before the last edits segment was finalized).
> 5. When it becomes active, it finalizes the edits and reads the delete-snapshot 
> edit op, but at that point it does not reduce the safemode count, so the NN 
> keeps waiting in safemode.
> 6. On the next restart, since the edits are already finalized, startup reads 
> them and sets the safemode threshold correctly.
> So one more restart brings the NN out of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5487) Introduce unit test for TokenAspect

2013-11-13 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822049#comment-13822049
 ] 

Jing Zhao commented on HDFS-5487:
-

+1. The failed test is unrelated. I will commit the patch shortly.

> Introduce unit test for TokenAspect
> ---
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch, 
> HDFS-5487.002.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. This jira merges the 
> token-related unit tests from hftp / hsftp / webhdfs into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5504) In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, leads to NN safemode.

2013-11-13 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822045#comment-13822045
 ] 

Jing Zhao commented on HDFS-5504:
-

+1.  I will commit the patch shortly.

> In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, 
> leads to NN safemode.
> 
>
> Key: HDFS-5504
> URL: https://issues.apache.org/jira/browse/HDFS-5504
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5504.patch, HDFS-5504.patch
>
>
> 1. HA installation; the standby NN is down.
> 2. Delete snapshot is called, and it deletes the blocks from the blocks map and 
> all datanodes. The log sync also happened.
> 3. Before the next log roll, the NN crashed.
> 4. When the namenode restarts, it loads the fsimage and the finalized edits from 
> shared storage and sets the safemode threshold, which also includes the blocks 
> from the deleted snapshot (that edit has not been read yet, because the namenode 
> was restarted before the last edits segment was finalized).
> 5. When it becomes active, it finalizes the edits and reads the delete-snapshot 
> edit op, but at that point it does not reduce the safemode count, so the NN 
> keeps waiting in safemode.
> 6. On the next restart, since the edits are already finalized, startup reads 
> them and sets the safemode threshold correctly.
> So one more restart brings the NN out of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5487) Introduce unit test for TokenAspect

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822042#comment-13822042
 ] 

Hadoop QA commented on HDFS-5487:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613717/HDFS-5487.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5426//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5426//console

This message is automatically generated.

> Introduce unit test for TokenAspect
> ---
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch, 
> HDFS-5487.002.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. This jira merges the 
> token-related unit tests from hftp / hsftp / webhdfs into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5500) Critical datanode threads may terminate silently on uncaught exceptions

2013-11-13 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822035#comment-13822035
 ] 

Kousuke Saruta commented on HDFS-5500:
--

Hi,

I'm investigating this issue.
When DU#refreshInterval > 0, DURefreshThread runs the "du" command against the 
directory (DU#dirPath) once every refreshInterval milliseconds.
So, normally, the value DU#getUsed returns is refreshed once every refreshInterval 
milliseconds.
When we put files into the directory that DU#dirPath points to, 
BlockPoolSlicer#getDfsUsed returns a value that reflects the size of the files 
we put there.

But if DURefreshThread dies because of an uncaught exception, we have no way of 
knowing it, and the value BlockPoolSlicer#getDfsUsed returns will never be 
updated again.
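As an illustration of the failure mode (not necessarily the fix this jira will settle on), giving the refresh thread an uncaught-exception handler would at least make the silent death visible; {{refreshThread}} and {{LOG}} are placeholders here:

{code}
// Sketch only: make a silently dying background thread observable.
refreshThread.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
  @Override
  public void uncaughtException(Thread t, Throwable e) {
    // today the thread just disappears; at minimum the datanode should log it
    LOG.error(t.getName() + " terminated unexpectedly", e);
  }
});
{code}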

> Critical datanode threads may terminate silently on uncaught exceptions
> ---
>
> Key: HDFS-5500
> URL: https://issues.apache.org/jira/browse/HDFS-5500
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Priority: Critical
>
> We've seen the refreshUsed (DU) thread disappearing on uncaught exceptions. This 
> can go unnoticed for a long time. If OOM occurs, more things can go wrong. 
> On one occasion, the Timer, multiple refreshUsed threads, and the DataXceiverServer 
> thread had all terminated. 
> DataXceiverServer catches OutOfMemoryError and sleeps for 30 seconds, but I 
> am not sure that is really helpful. In one case, the thread did this multiple 
> times and then terminated. I suspect another OOM was thrown while in a catch 
> block. As a result, the server socket was not closed and clients hung on 
> connect. If it had at least closed the socket, the client side would have been 
> impacted less.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5512) CacheAdmin -listPools fails with NPE when user lacks permissions to view all pools

2013-11-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-5512:
-

Assignee: Andrew Wang

> CacheAdmin -listPools fails with NPE when user lacks permissions to view all 
> pools
> --
>
> Key: HDFS-5512
> URL: https://issues.apache.org/jira/browse/HDFS-5512
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Andrew Wang
>
> https://issues.apache.org/jira/browse/HDFS-5471 (CacheAdmin -listPools fails 
> when user lacks permissions to view all pools) was recently resolved, but on 
> a build with this fix, I am running into another error when using cacheadmin 
> -listPools.
> Now, when the user does not have permissions to view all the pools, the 
> cacheadmin -listPools command will throw a NullPointerException.
> On a system with a single pool "root" with a mode of 750, I see this as user 
> schu:
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listPools
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:745)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> {code}
> After we modify the root pool to 755, then -listPools works properly.
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listPools
> Found 1 result.
> NAME  OWNER  GROUP  MODE   WEIGHT 
> root  root   root   rwxr-xr-x  100
> [schu@hdfs-c5-nfs ~]$ 
> {code}
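A guess at the shape of a fix, purely illustrative (the getter names are assumptions based on the command's output columns): when printing each listed pool, treat fields the server withheld from an unauthorized caller as empty instead of dereferencing them.

{code}
// Sketch only: "info" stands for one entry returned by -listPools.
String owner = (info.getOwnerName() != null) ? info.getOwnerName() : "";
String group = (info.getGroupName() != null) ? info.getGroupName() : "";
String mode  = (info.getMode()      != null) ? info.getMode().toString() : "";
// ... print the row with these defaulted values rather than the raw fields
{code}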



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5444) Choose default web UI based on browser capabilities

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821996#comment-13821996
 ] 

Hudson commented on HDFS-5444:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4732 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4732/])
HDFS-5444. Choose default web UI based on browser capabilities. Contributed by 
Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541753)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/index.html


> Choose default web UI based on browser capabilities
> ---
>
> Key: HDFS-5444
> URL: https://issues.apache.org/jira/browse/HDFS-5444
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5444.000.patch, HDFS-5444.000.patch, 
> HDFS-5444.001.patch, Screenshot-new.png, Screenshot-old.png
>
>
> This jira changes the entry point of the web UI so that modern browsers with 
> JavaScript support are redirected to the new web UI, while other browsers 
> automatically fall back to the old JSP-based UI.
> It also adds hyperlinks in both UIs to facilitate testing and evaluation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5393) Serve bootstrap and jQuery locally

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5393:
-

Attachment: HDFS-5393.002.patch

> Serve bootstrap and jQuery locally
> --
>
> Key: HDFS-5393
> URL: https://issues.apache.org/jira/browse/HDFS-5393
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-5393-static.tar.bz2, HDFS-5393.000.patch, 
> HDFS-5393.001.patch, HDFS-5393.002.patch, HDFS-5393.002.patch.gz
>
>
> Currently the web UI depends on Bootstrap and jQuery served from a CDN. These 
> libraries should be served locally so that the web UI still works when the 
> cluster is not connected to the Internet.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5438) Flaws in block report processing can cause data loss

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821986#comment-13821986
 ] 

Hadoop QA commented on HDFS-5438:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12613682/HDFS-5438-4.trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5424//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5424//console

This message is automatically generated.

> Flaws in block report processing can cause data loss
> 
>
> Key: HDFS-5438
> URL: https://issues.apache.org/jira/browse/HDFS-5438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.9, 2.2.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5438-1.trunk.patch, HDFS-5438-2.trunk.patch, 
> HDFS-5438-3.trunk.patch, HDFS-5438-4.trunk.patch, HDFS-5438.trunk.patch
>
>
> The incremental block reports from data nodes and block commits are 
> asynchronous. This becomes troublesome when the gen stamp for a block is 
> changed during a write pipeline recovery.
> * If an incremental block report is delayed from a node but NN had enough 
> replicas already, a report with the old gen stamp may be received after block 
> completion. This replica will be correctly marked corrupt. But if the node 
> had participated in the pipeline recovery, a new (delayed) report with the 
> correct gen stamp will come soon. However, this report won't have any effect 
> on the corrupt state of the replica.
> * If block reports are received while the block is still under construction 
> (i.e. client's call to make block committed has not been received by NN), 
> they are blindly accepted regardless of the gen stamp. If a failed node 
> reports in with the old gen stamp while pipeline recovery is on-going, it 
> will be accepted and counted as valid during commit of the block.
> Due to the above two problems, correct replicas can be marked corrupt and 
> corrupt replicas can be accepted during commit.  So far we have observed two 
> cases in production.
> * The client hangs forever to close a file. All replicas are marked corrupt.
> * After the successful close of a file, read fails. Corrupt replicas are 
> accepted during commit and valid replicas are marked corrupt afterward.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5512) CacheAdmin -listPools fails with NPE when user lacks permissions to view all pools

2013-11-13 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-5512:
-

 Summary: CacheAdmin -listPools fails with NPE when user lacks 
permissions to view all pools
 Key: HDFS-5512
 URL: https://issues.apache.org/jira/browse/HDFS-5512
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Stephen Chu


https://issues.apache.org/jira/browse/HDFS-5471 (CacheAdmin -listPools fails 
when user lacks permissions to view all pools) was recently resolved, but on a 
build with this fix, I am running into another error when using cacheadmin 
-listPools.

Now, when the user does not have permissions to view all the pools, the 
cacheadmin -listPools command will throw a NullPointerException.

On a system with a single pool "root" with a mode of 750, I see this as user 
schu:
{code}
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listPools
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:745)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
{code}

After we modify the root pool to 755, then -listPools works properly.
{code}
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listPools
Found 1 result.
NAME  OWNER  GROUP  MODE   WEIGHT 
root  root   root   rwxr-xr-x  100
[schu@hdfs-c5-nfs ~]$ 
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5444) Choose default web UI based on browser capabilities

2013-11-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5444:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk.

> Choose default web UI based on browser capabilities
> ---
>
> Key: HDFS-5444
> URL: https://issues.apache.org/jira/browse/HDFS-5444
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HDFS-5444.000.patch, HDFS-5444.000.patch, 
> HDFS-5444.001.patch, Screenshot-new.png, Screenshot-old.png
>
>
> This jira changes the entry point of the web UI so that modern browsers with 
> JavaScript support are redirected to the new web UI, while other browsers 
> automatically fall back to the old JSP-based UI.
> It also adds hyperlinks in both UIs to facilitate testing and evaluation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5511) improve CacheManipulator interface to allow better unit testing

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821930#comment-13821930
 ] 

Hadoop QA commented on HDFS-5511:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613671/HDFS-5511.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5423//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5423//console

This message is automatically generated.

> improve CacheManipulator interface to allow better unit testing
> ---
>
> Key: HDFS-5511
> URL: https://issues.apache.org/jira/browse/HDFS-5511
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5511.001.patch
>
>
> The CacheManipulator interface has been helpful in allowing us to stub out 
> {{mlock}} in cases where we don't want to test it.  We should move  the 
> {{getMemlockLimit}} and {{getOperatingSystemPageSize}} functions into this 
> interface as well so that we don't have to skip these tests on machines where 
> these methods would ordinarily not work for us.
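Roughly, the seam the description asks for could look like the sketch below; everything here is illustrative, and the real CacheManipulator and its callers live elsewhere in the code base.

{code}
// Sketch: a manipulator whose OS-dependent queries tests can override.
public class CacheManipulator {
  public long getMemlockLimit() {
    return 64 * 1024;          // placeholder: production would query the native layer
  }
  public long getOperatingSystemPageSize() {
    return 4096;               // placeholder: production would query the native layer
  }
}

// A test can then substitute generous fixed values instead of skipping
// on machines where mlock or RLIMIT_MEMLOCK is unavailable.
class NoLimitCacheManipulator extends CacheManipulator {
  @Override public long getMemlockLimit() { return Long.MAX_VALUE; }
  @Override public long getOperatingSystemPageSize() { return 4096; }
}
{code}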



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5487) Refactor TestHftpDelegationToken into TestTokenAspect

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5487:
-

Description: HDFS-5440 moves token-related logic to TokenAspect. This jira 
merges the token-related unit tests from hftp / hsftp / webhdfs into 
TestTokenAspect.  (was: HDFS-5440 moves token-related logic to TokenAspect. 
Therefore, it is appropriate to clean up the unit tests of 
TestHftpDelegationToken and to move them into TestTokenAspect.)

> Refactor TestHftpDelegationToken into TestTokenAspect
> -
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch, 
> HDFS-5487.002.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. This jira merges the 
> token-related unit tests from hftp / hsftp / webhdfs into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5487) Introduce unit test for TokenAspect

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5487:
-

Summary: Introduce unit test for TokenAspect  (was: Refactor 
TestHftpDelegationToken into TestTokenAspect)

> Introduce unit test for TokenAspect
> ---
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch, 
> HDFS-5487.002.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. This jira merges the 
> token-related unit tests from hftp / hsftp / webhdfs into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4995) Make getContentSummary() less expensive

2013-11-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821914#comment-13821914
 ] 

Kihwal Lee commented on HDFS-4995:
--

bq. Is switching the order of the calls completely safe? Is there any chance 
you can fall through an inode reference to a directory that will yield?

Nope. computeContentSummary4Snapshot() will calculate the summary using a new 
context instance with yield disabled, and then merge the individual counts.

> Make getContentSummary() less expensive
> ---
>
> Key: HDFS-4995
> URL: https://issues.apache.org/jira/browse/HDFS-4995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-4995.branch-0.23.3.patch, HDFS-4995.trunk.2.patch, 
> HDFS-4995.trunk.3.patch, HDFS-4995.trunk.patch, HDFS-4995.trunk1.patch
>
>
> When users run the du or count DFS command, the getContentSummary() method is 
> called on the namenode. If the directory has many subdirectories and files, it 
> could hold the namesystem lock for a long time. We've seen it take over 20 
> seconds. The namenode should not allow regular users to cause extended locking.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5487) Refactor TestHftpDelegationToken into TestTokenAspect

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5487:
-

Attachment: HDFS-5487.002.patch

Thanks [~jingzhao] for the comments.

> Refactor TestHftpDelegationToken into TestTokenAspect
> -
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch, 
> HDFS-5487.002.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. Therefore, it is 
> appropriate to clean up the unit tests of TestHftpDelegationToken and to move 
> them into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5487) Refactor TestHftpDelegationToken into TestTokenAspect

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5487:
-

Status: Patch Available  (was: Open)

> Refactor TestHftpDelegationToken into TestTokenAspect
> -
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch, 
> HDFS-5487.002.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. Therefore, it is 
> appropriate to clean up the unit tests of TestHftpDelegationToken and to move 
> them into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5506:
-

Attachment: HDFS-5506.001.patch

> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch, HDFS-5506.001.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation tokens from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates and 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821899#comment-13821899
 ] 

Hadoop QA commented on HDFS-5506:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613452/HDFS-5506.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5425//console

This message is automatically generated.

> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation token from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates, 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5487) Refactor TestHftpDelegationToken into TestTokenAspect

2013-11-13 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821880#comment-13821880
 ] 

Jing Zhao commented on HDFS-5487:
-

The new unit test looks good to me. But let's keep TestHftpDelegationToken here 
and only add the new unit test in this jira. Hsftp related unit tests can be 
removed/updated in HDFS-5502. 

> Refactor TestHftpDelegationToken into TestTokenAspect
> -
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5487.000.patch, HDFS-5487.001.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. Therefore, it is 
> appropriate to clean up the unit tests of TestHftpDelegationToken and to move 
> them into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5488) Clean up TestHftpURLTimeout

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821852#comment-13821852
 ] 

Hudson commented on HDFS-5488:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4730 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4730/])
Move HDFS-5325 and HDFS-5488 from BUG-FIX to IMPROVEMENT in CHANGES.txt. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541718)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Clean up TestHftpURLTimeout
> ---
>
> Key: HDFS-5488
> URL: https://issues.apache.org/jira/browse/HDFS-5488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5488.000.patch
>
>
> HftpFileSystem uses URLConnectionFactory to set the timeout of each http 
> connections. This jira cleans up TestHftpTimeout and merges its unit tests 
> into TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5325) Remove WebHdfsFileSystem#ConnRunner

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821853#comment-13821853
 ] 

Hudson commented on HDFS-5325:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4730 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4730/])
Move HDFS-5325 and HDFS-5488 from BUG-FIX to IMPROVEMENT in CHANGES.txt. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541718)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove WebHdfsFileSystem#ConnRunner
> ---
>
> Key: HDFS-5325
> URL: https://issues.apache.org/jira/browse/HDFS-5325
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5325.000.patch
>
>
> The class WebHdfsFileSystem#ConnRunner is only used in unit tests. There are 
> equivalent class (FsPathRunner / URLRunner) to provide the same functionality.
> This jira removes the class to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5506) Use URLConnectionFactory in DelegationTokenFetcher

2013-11-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5506:
-

Status: Patch Available  (was: Open)

> Use URLConnectionFactory in DelegationTokenFetcher
> --
>
> Key: HDFS-5506
> URL: https://issues.apache.org/jira/browse/HDFS-5506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5506.000.patch
>
>
> HftpFileSystem uses DelegationTokenFetcher to get delegation token from the 
> server. DelegationTokenFetcher should use the same URLConnectionFactory to 
> open all HTTP / HTTPS connections so that things like SSL certificates, 
> timeouts are respected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4995) Make getContentSummary() less expensive

2013-11-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821826#comment-13821826
 ] 

Daryn Sharp commented on HDFS-4995:
---

Looks much cleaner.  One question:

{code}
+// Snapshot summary calc won't be relinquishing locks in the middle.
+// Do this first and handover to parent.
+computeContentSummary4Snapshot(summary.getCounts());
+super.computeContentSummary(summary);
{code}

Is switching the order of the calls completely safe?  Is there any chance you 
can fall through an inode reference to a directory that will yield?
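
For context, a toy sketch of the lock-yielding traversal pattern under discussion: count
entries while holding a lock, periodically release and reacquire it so other operations can
make progress, and keep accumulating into the same counts. All names are hypothetical; this
is not the NameNode implementation.

{code}
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

/** Toy illustration of a lock-yielding traversal; not NameNode code. */
public class YieldingCounter {
  private final ReentrantLock lock;   // stands in for the namesystem lock
  private final long yieldEvery;      // yield after this many items
  private long sinceLastYield = 0;
  private long totalCount = 0;

  public YieldingCounter(ReentrantLock lock, long yieldEvery) {
    this.lock = lock;
    this.yieldEvery = yieldEvery;
  }

  public long count(List<String> items) {
    lock.lock();
    try {
      for (String item : items) {
        totalCount++;                 // "compute" one entry
        if (++sinceLastYield >= yieldEvery) {
          sinceLastYield = 0;
          lock.unlock();              // let other operations in
          lock.lock();                // then resume where we left off
        }
      }
      return totalCount;
    } finally {
      lock.unlock();
    }
  }
}
{code}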



> Make getContentSummary() less expensive
> ---
>
> Key: HDFS-4995
> URL: https://issues.apache.org/jira/browse/HDFS-4995
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.9, 2.3.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-4995.branch-0.23.3.patch, HDFS-4995.trunk.2.patch, 
> HDFS-4995.trunk.3.patch, HDFS-4995.trunk.patch, HDFS-4995.trunk1.patch
>
>
> When users call du or count DFS command, getContentSummary() method is called 
> against namenode. If the directory has many directories and files, it could 
> hold the namesystem lock for a long time. We've seen it taking over 20 
> seconds. Namenode should not allow regular users to cause extended locking.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5474) Deletesnapshot can make Namenode in safemode on NN restarts.

2013-11-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5474:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Vinay for rebasing the patch. I've committed this to trunk and branch-2.

> Deletesnapshot can make Namenode in safemode on NN restarts.
> 
>
> Key: HDFS-5474
> URL: https://issues.apache.org/jira/browse/HDFS-5474
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Uma Maheswara Rao G
>Assignee: sathish
> Fix For: 2.3.0
>
> Attachments: HDFS-5474-001.patch, HDFS-5474-002.patch
>
>
> When we deletesnapshot, we are deleting the blocks associated to that 
> snapshot and after that we do logsync to editlog about deleteSnapshot.
> There can be a chance that blocks removed from blocks map but before log sync 
> if there is BR ,  NN may finds that block does not exist in blocks map and 
> may invalidate that block. As part HB, invalidation info also can go. After 
> this steps if Namenode shutdown before actually do logsync,  On restart it 
> will still consider that snapshot Inodes and expect blocks to report from DN.
> Simple solution is, we should simply move down that blocks removal after 
> logsync only. Similar to delete op.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5075) httpfs-config.sh calls out incorrect env script name

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821814#comment-13821814
 ] 

Hudson commented on HDFS-5075:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4729 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4729/])
HDFS-5075 httpfs-config.sh calls out incorrect env script name (stevel: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541692)
* /hadoop/common/trunk
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> httpfs-config.sh calls out incorrect env script name
> 
>
> Key: HDFS-5075
> URL: https://issues.apache.org/jira/browse/HDFS-5075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Timothy St. Clair
>Assignee: Timothy St. Clair
>  Labels: newbie
> Fix For: 2.3.0
>
> Attachments: HDFS-5075.patch
>
>
> looks just like a simple mistake



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5438) Flaws in block report processing can cause data loss

2013-11-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5438:
-

Attachment: HDFS-5438-4.trunk.patch

New patch attached.

> Flaws in block report processing can cause data loss
> 
>
> Key: HDFS-5438
> URL: https://issues.apache.org/jira/browse/HDFS-5438
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.9, 2.2.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Attachments: HDFS-5438-1.trunk.patch, HDFS-5438-2.trunk.patch, 
> HDFS-5438-3.trunk.patch, HDFS-5438-4.trunk.patch, HDFS-5438.trunk.patch
>
>
> The incremental block reports from data nodes and block commits are 
> asynchronous. This becomes troublesome when the gen stamp for a block is 
> changed during a write pipeline recovery.
> * If an incremental block report is delayed from a node but NN had enough 
> replicas already, a report with the old gen stamp may be received after block 
> completion. This replica will be correctly marked corrupt. But if the node 
> had participated in the pipeline recovery, a new (delayed) report with the 
> correct gen stamp will come soon. However, this report won't have any effect 
> on the corrupt state of the replica.
> * If block reports are received while the block is still under construction 
> (i.e. client's call to make block committed has not been received by NN), 
> they are blindly accepted regardless of the gen stamp. If a failed node 
> reports in with the old gen stamp while pipeline recovery is on-going, it 
> will be accepted and counted as valid during commit of the block.
> Due to the above two problems, correct replicas can be marked corrupt and 
> corrupt replicas can be accepted during commit.  So far we have observed two 
> cases in production.
> * The client hangs forever to close a file. All replicas are marked corrupt.
> * After the successful close of a file, read fails. Corrupt replicas are 
> accepted during commit and valid replicas are marked corrupt afterward.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5474) Deletesnapshot can make Namenode in safemode on NN restarts.

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821788#comment-13821788
 ] 

Hudson commented on HDFS-5474:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4728 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4728/])
HDFS-5474. Deletesnapshot can make Namenode in safemode on NN restarts. 
Contributed by Sathish. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541685)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Deletesnapshot can make Namenode in safemode on NN restarts.
> 
>
> Key: HDFS-5474
> URL: https://issues.apache.org/jira/browse/HDFS-5474
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Uma Maheswara Rao G
>Assignee: sathish
> Attachments: HDFS-5474-001.patch, HDFS-5474-002.patch
>
>
> When we deletesnapshot, we are deleting the blocks associated to that 
> snapshot and after that we do logsync to editlog about deleteSnapshot.
> There can be a chance that blocks removed from blocks map but before log sync 
> if there is BR ,  NN may finds that block does not exist in blocks map and 
> may invalidate that block. As part HB, invalidation info also can go. After 
> this steps if Namenode shutdown before actually do logsync,  On restart it 
> will still consider that snapshot Inodes and expect blocks to report from DN.
> Simple solution is, we should simply move down that blocks removal after 
> logsync only. Similar to delete op.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5075) httpfs-config.sh calls out incorrect env script name

2013-11-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-5075:
-

   Resolution: Fixed
Fix Version/s: 2.3.0
 Assignee: Timothy St. Clair
   Status: Resolved  (was: Patch Available)

> httpfs-config.sh calls out incorrect env script name
> 
>
> Key: HDFS-5075
> URL: https://issues.apache.org/jira/browse/HDFS-5075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Timothy St. Clair
>Assignee: Timothy St. Clair
>  Labels: newbie
> Fix For: 2.3.0
>
> Attachments: HDFS-5075.patch
>
>
> looks just like a simple mistake



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5511) improve CacheManipulator interface to allow better unit testing

2013-11-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5511:
---

Status: Patch Available  (was: Open)

> improve CacheManipulator interface to allow better unit testing
> ---
>
> Key: HDFS-5511
> URL: https://issues.apache.org/jira/browse/HDFS-5511
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5511.001.patch
>
>
> The CacheManipulator interface has been helpful in allowing us to stub out 
> {{mlock}} in cases where we don't want to test it.  We should move  the 
> {{getMemlockLimit}} and {{getOperatingSystemPageSize}} functions into this 
> interface as well so that we don't have to skip these tests on machines where 
> these methods would ordinarily not work for us.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5511) improve CacheManipulator interface to allow better unit testing

2013-11-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5511:
---

Attachment: HDFS-5511.001.patch

> improve CacheManipulator interface to allow better unit testing
> ---
>
> Key: HDFS-5511
> URL: https://issues.apache.org/jira/browse/HDFS-5511
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5511.001.patch
>
>
> The CacheManipulator interface has been helpful in allowing us to stub out 
> {{mlock}} in cases where we don't want to test it.  We should move  the 
> {{getMemlockLimit}} and {{getOperatingSystemPageSize}} functions into this 
> interface as well so that we don't have to skip these tests on machines where 
> these methods would ordinarily not work for us.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5511) improve CacheManipulator interface to allow better unit testing

2013-11-13 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5511:
--

 Summary: improve CacheManipulator interface to allow better unit 
testing
 Key: HDFS-5511
 URL: https://issues.apache.org/jira/browse/HDFS-5511
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


The CacheManipulator interface has been helpful in allowing us to stub out 
{{mlock}} in cases where we don't want to test it.  We should move  the 
{{getMemlockLimit}} and {{getOperatingSystemPageSize}} functions into this 
interface as well so that we don't have to skip these tests on machines where 
these methods would ordinarily not work for us.
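
A hedged sketch of the shape such a seam could take, with a no-op stub for tests. The names
below are illustrative only and are not the actual CacheManipulator code.

{code}
/** Illustrative only: an OS-facing seam that tests can replace with a stub. */
public interface MemoryManipulator {
  void mlock(String name, long length) throws java.io.IOException;
  long getMemlockLimit();
  long getOperatingSystemPageSize();
}

/** Test stub: pretends the lock limit is unlimited and uses a fixed page size. */
class NoMlockManipulator implements MemoryManipulator {
  @Override public void mlock(String name, long length) { /* no-op in tests */ }
  @Override public long getMemlockLimit() { return Long.MAX_VALUE; }
  @Override public long getOperatingSystemPageSize() { return 4096; }
}
{code}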



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Avik Dey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821749#comment-13821749
 ] 

Avik Dey commented on HDFS-5143:


@Owen If you don't think you are misquoting me, then you must be confused. 
Don't confuse our discussion on CFS with the other discussion on encryption 
support in various file formats. Just because we are working on the latter does 
not mean one should conclude we are not working on the former. As you can see, a 
patch of this size could hardly have been produced overnight.

I don't think I can add any more to what I have said, so this will be my last 
post on the topic.


> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5440) Extract the logic of handling delegation tokens in HftpFileSystem to the TokenAspect class

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821740#comment-13821740
 ] 

Hudson commented on HDFS-5440:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4726 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4726/])
HDFS-5440. Extract the logic of handling delegation tokens in HftpFileSystem to 
the TokenAspect class. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541665)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/TokenAspect.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHftpDelegationToken.java


> Extract the logic of handling delegation tokens in HftpFileSystem to the 
> TokenAspect class
> --
>
> Key: HDFS-5440
> URL: https://issues.apache.org/jira/browse/HDFS-5440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5440.000.patch, HDFS-5440.001.patch, 
> HDFS-5440.002.patch, HDFS-5440.003.patch, HDFS-5440.004.patch
>
>
> The logic of handling delegation token in HftpFileSystem and 
> WebHdfsFileSystem are mostly identical. To simplify the code, this jira 
> proposes to extract the common code into a new class named TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5440) Extract the logic of handling delegation tokens in HftpFileSystem to the TokenAspect class

2013-11-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5440:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the latest patch. I've committed this to trunk and branch-2.

> Extract the logic of handling delegation tokens in HftpFileSystem to the 
> TokenAspect class
> --
>
> Key: HDFS-5440
> URL: https://issues.apache.org/jira/browse/HDFS-5440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5440.000.patch, HDFS-5440.001.patch, 
> HDFS-5440.002.patch, HDFS-5440.003.patch, HDFS-5440.004.patch
>
>
> The logic of handling delegation token in HftpFileSystem and 
> WebHdfsFileSystem are mostly identical. To simplify the code, this jira 
> proposes to extract the common code into a new class named TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821728#comment-13821728
 ] 

Owen O'Malley commented on HDFS-5143:
-

[~hitliuyi] In the design document, the IV was always 0, but in the comments 
you are suggesting putting a random IV at the start of the underlying file. I 
think that the security advantage of having a random IV is relatively small and 
we'd do better without it. It only protects against having multiple files with 
the same key and the same plain text co-located in the filesystem.

I think that putting it at the front of the file has a couple of disadvantages:
* Any read of the file has to read the beginning 16 bytes of the file.
* Block boundaries are offset from the expectation. This will cause MapReduce 
input splits to straddle blocks in cases that wouldn't otherwise require it.

I think we should always have an IV of 0 or alternatively encode it in the 
underlying filesystem's filenames. In particular, we could base64-encode the 
IV and append it onto the filename. If we add 16 characters of base64 that 
would give us 96 bits of IV and it would be easy to strip off. It would look 
like:

cfs://hdfs@nn/dir1/dir2/file -> hdfs://nn/dir1/dir2/file_1234567890ABCDEF
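
A small illustration of that encoding, assuming a 12-byte (96-bit) IV and URL-safe base64
without padding. This is purely a sketch for discussion, not code from the patch.

{code}
import java.security.SecureRandom;
import java.util.Base64;

public class IvInFileName {
  private static final int IV_BYTES = 12;  // 12 bytes = 96 bits = 16 base64 chars

  /** Append a freshly generated IV to the underlying file name. */
  public static String encode(String path, byte[] iv) {
    String suffix = Base64.getUrlEncoder().withoutPadding().encodeToString(iv);
    return path + "_" + suffix;
  }

  /** Strip the IV back off when mapping to the logical name. */
  public static byte[] decode(String underlyingPath) {
    int idx = underlyingPath.lastIndexOf('_');
    return Base64.getUrlDecoder().decode(underlyingPath.substring(idx + 1));
  }

  public static void main(String[] args) {
    byte[] iv = new byte[IV_BYTES];
    new SecureRandom().nextBytes(iv);
    String stored = encode("hdfs://nn/dir1/dir2/file", iv);
    System.out.println(stored);                 // original name plus a 16-char suffix
    System.out.println(decode(stored).length);  // 12
  }
}
{code}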

> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821679#comment-13821679
 ] 

Owen O'Malley commented on HDFS-5143:
-

[~avik_...@yahoo.com] I'm not misquoting you. You were very clear that you 
weren't planning on working on this in the immediate future and that instead 
you wanted to change all of the file formats.

> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Avik Dey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821655#comment-13821655
 ] 

Avik Dey commented on HDFS-5143:


@Owen - Not only did we talk at Strata, we talked last night as well. In both of 
those conversations, I confirmed that Yi would make the patch available shortly. 
Please don't misquote me. Thanks for assigning it to Yi.

> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5461) fallback to non-ssr(local short circuit reads) while oom detected

2013-11-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821648#comment-13821648
 ] 

Colin Patrick McCabe commented on HDFS-5461:


We need to have a try/catch around both the creation of {{BlockReaderLocal}} 
and {{BlockReaderFactory.getLegacyBlockReaderLocal}} that catches 
{{OutOfMemoryException}}, since if one fails, the other definitely will as well.

One thing that I don't like about this is that we're not actually testing the 
failure case in the unit test.  How about adding a configurable upper limit for 
the size of the {{DirectBufferPool}}?  Then a unit test could set this and 
trigger the failure case.  You're already tracking the used bytes, so it should 
be simple to implement via {{compareAndSet}}.  This will also be good for users 
who don't want to use all their native memory for these buffers.

{code}
+  /**
+   * Return the currently using memory sum size in MB.
+   */
+  public long getUsingMemoryMB() {
+return usingMemoryBytes.get()/(1024 * 1024);
+  }
{code}

Let's just return this in bytes.  Using megabytes just opens up a can of worms 
(some people think it's base-10, others think it's base-2, etc).  And obviously 
it's less precise.
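
A rough sketch of the compareAndSet-based cap being suggested. The class below is
hypothetical and is not the existing DirectBufferPool; it only shows the reservation logic.

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative bounded allocator: returns null when the cap would be exceeded. */
public class BoundedDirectBufferAllocator {
  private final long maxBytes;
  private final AtomicLong usedBytes = new AtomicLong(0);

  public BoundedDirectBufferAllocator(long maxBytes) {
    this.maxBytes = maxBytes;
  }

  /** Caller falls back to a non-short-circuit read path when this returns null. */
  public ByteBuffer allocate(int size) {
    while (true) {
      long used = usedBytes.get();
      if (used + size > maxBytes) {
        return null;                              // over the configured limit
      }
      if (usedBytes.compareAndSet(used, used + size)) {
        return ByteBuffer.allocateDirect(size);   // reservation succeeded
      }
      // lost the race; retry with the fresh value
    }
  }

  public void release(ByteBuffer buffer) {
    usedBytes.addAndGet(-buffer.capacity());
  }

  /** Bytes currently reserved, reported in plain bytes as suggested above. */
  public long getUsedBytes() {
    return usedBytes.get();
  }
}
{code}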

> fallback to non-ssr(local short circuit reads) while oom detected
> -
>
> Key: HDFS-5461
> URL: https://issues.apache.org/jira/browse/HDFS-5461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5461.txt
>
>
> Currently, the DirectBufferPool used by the ssr feature does not seem to have an 
> upper-bound limit other than the DirectMemory VM option, so there is a risk of 
> hitting a direct memory OOM. See HBASE-8143 for an example.
> IMHO, maybe we could improve it a bit:
> 1) detect OOM, or reaching a configured upper limit, in the caller, then fall 
> back to non-ssr
> 2) add a new metric for the current raw consumed direct memory size.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5366) recaching improvements

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821644#comment-13821644
 ] 

Hudson commented on HDFS-5366:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4725 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4725/])
HDFS-5366. recaching improvements (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541647)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java


> recaching improvements
> --
>
> Key: HDFS-5366
> URL: https://issues.apache.org/jira/browse/HDFS-5366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5366-caching.001.patch, HDFS-5366.002.patch, 
> HDFS-5366.005.patch, HDFS-5366.007.patch
>
>
> There are a few things about our HDFS-4949 recaching strategy that could be 
> improved.
> * We should monitor the DN's maximum and current mlock'ed memory consumption 
> levels, so that we don't ask the DN to do stuff it can't.
> * We should not try to initiate caching on stale or decomissioning DataNodes 
> (although we should not recache things stored on such nodes until they're 
> declared dead).
> * We might want to resend the {{DNA_CACHE}} or {{DNA_UNCACHE}} command a few 
> times before giving up.  Currently, we only send it once.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HDFS-5143:


Status: Open  (was: Patch Available)

It should only be marked Patch Available when Yi thinks it is ready to commit.

> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HDFS-5143:


Assignee: Yi Liu  (was: Owen O'Malley)

It wasn't assigned and no one seemed to be working on this. When I talked to Avik 
at Strata, he said no one was going to be working on this for 9 months. I'm glad 
to see that Yi has posted a patch.

> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821631#comment-13821631
 ] 

Hadoop QA commented on HDFS-5143:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12613629/CryptographicFileSystem.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-crypto 
hadoop-dist hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.conf.TestConfiguration

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5422//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5422//console

This message is automatically generated.

> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Owen O'Malley
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5461) fallback to non-ssr(local short circuit reads) while oom detected

2013-11-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821560#comment-13821560
 ] 

Lars Hofhansl commented on HDFS-5461:
-

The issue is that the JDK only collects direct byte buffers during a full GC, 
and there are different limits for the direct buffer space and the general heap. 
HBase keeps a reader open for each store file, so we end up with a lot of 
direct memory in use.

I was actually curious about 1 MB as the default size; it seems even as little 
as 8 KB should be OK.

> fallback to non-ssr(local short circuit reads) while oom detected
> -
>
> Key: HDFS-5461
> URL: https://issues.apache.org/jira/browse/HDFS-5461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5461.txt
>
>
> Currently, the DirectBufferPool used by the ssr feature does not seem to have an 
> upper-bound limit other than the DirectMemory VM option, so there is a risk of 
> hitting a direct memory OOM. See HBASE-8143 for an example.
> IMHO, maybe we could improve it a bit:
> 1) detect OOM, or reaching a configured upper limit, in the caller, then fall 
> back to non-ssr
> 2) add a new metric for the current raw consumed direct memory size.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821553#comment-13821553
 ] 

Andrew Purtell commented on HDFS-5143:
--

Shouldn't this issue be assigned to the reporter, who has done all the work and 
submitted the patch for consideration? 

> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Owen O'Malley
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821541#comment-13821541
 ] 

Arpit Agarwal commented on HDFS-2832:
-

TestOfflineEditsViewer is due to the missing binary file editsStored. 
TestListCorruptFileBlocks looks like a spurious timeout; I cannot reproduce the 
failure.

Looking at TestDFSStartupVersions.

> Enable support for heterogeneous storages in HDFS
> -
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
> h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
> h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
> h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
> h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
> h2832_20131112.patch, h2832_20131112b.patch
>
>
> HDFS currently supports configuration where storages are a list of 
> directories. Typically each of these directories correspond to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose, change to the 
> current model where Datanode * is a * storage, to Datanode * is a collection 
> * of strorages. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-5143:
-

Status: Patch Available  (was: Open)

> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Owen O'Malley
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5143) Hadoop cryptographic file system

2013-11-13 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-5143:
-

Attachment: CryptographicFileSystem.patch

This patch is an initial version (still to be refined) of the cryptographic file 
system implementation, aligned with the design doc discussed in this jira:
1)  Basic functionality of the cryptographic file system, including transparently 
reading/writing data to HDFS (currently only HDFS has been tested) through the 
filesystem API, transparently using the cryptographic filesystem in upper layer 
applications (MapReduce has been tested), hdfs command support (ls, du, etc.) and 
so on.
2)  Currently a different IV is used for each encrypted file to enhance security. 
The IV length is fixed at 16 bytes and the IV is stored at the beginning of the 
encrypted file.
3)  In the patch a crypto policy interface is defined; developers/users can 
implement their own crypto policy to decide how and which files/directories will 
be encrypted. By default, a simple crypto policy is implemented: the admin can 
configure the encrypted directory list and encrypted file list, each encrypted 
directory has a different encryption key, and files stored into such a directory 
are automatically encrypted.
4)  For key management, a key management protocol interface is defined in the 
patch; there is a default implementation and users/developers can supply their 
own. The patch includes a simple key management server which uses a Java keystore 
to store keys. The key management server is still under development.
5)  The patch includes a mvn project, hadoop-crypto, which uses OpenSSL to 
implement the Cipher; this is much faster than the Java cipher, especially when 
AES-NI is enabled.
6)  The patch also includes Encryptor/Decryptor interfaces and other encryption 
facilities, such as buffered EncryptorStream and DecryptorStream.
7)  fs.default.name is “cfs://hdfs@hostname:9000” when the cryptographic 
filesystem is used on hdfs, and additionally “cfs-site.xml” needs to be 
configured.

This is an all-in-one patch; later I will create several sub-JIRAs and split this 
patch for convenience of code review. I will make the patch stable and extend the 
functionality in further steps.
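
As a hedged illustration of item 2 above (a 16-byte IV written at the head of the encrypted
file, followed by the encrypted payload), here is a sketch using only the standard
javax.crypto API rather than the patch's own Encryptor/Decryptor classes:

{code}
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class IvHeaderWriteExample {
  public static void main(String[] args) throws Exception {
    // Per-file key and 16-byte IV, as described above.
    SecretKey key = KeyGenerator.getInstance("AES").generateKey();
    byte[] iv = new byte[16];
    new SecureRandom().nextBytes(iv);

    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

    ByteArrayOutputStream underlying = new ByteArrayOutputStream();  // stands in for the HDFS stream
    underlying.write(iv);                                            // IV stored at the beginning of the file
    try (OutputStream out = new CipherOutputStream(underlying, cipher)) {
      out.write("hello, cfs".getBytes(StandardCharsets.UTF_8));      // transparently encrypted payload
    }
    System.out.println("stored bytes: " + underlying.size());        // 16-byte header + ciphertext
  }
}
{code}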


> Hadoop cryptographic file system
> 
>
> Key: HDFS-5143
> URL: https://issues.apache.org/jira/browse/HDFS-5143
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Owen O'Malley
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1.Transparent to and no modification required for upper layer 
> applications.
> 2.“Seek”, “PositionedReadable” are supported for input stream of CFS if 
> the wrapped file system supports them.
> 3.Very high performance for encryption and decryption, they will not 
> become bottleneck.
> 4.Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify existing structure of file system, such as namenode and datanode 
> structure if the wrapped file system is HDFS.
> 5.Admin can configure encryption policies, such as which directory will 
> be encrypted.
> 6.A robust key management framework.
> 7.Support Pread and append operations if the wrapped file system supports 
> them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5504) In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, leads to NN safemode.

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821354#comment-13821354
 ] 

Hadoop QA commented on HDFS-5504:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613578/HDFS-5504.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5421//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5421//console

This message is automatically generated.

> In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, 
> leads to NN safemode.
> 
>
> Key: HDFS-5504
> URL: https://issues.apache.org/jira/browse/HDFS-5504
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5504.patch, HDFS-5504.patch
>
>
> 1. HA installation, standby NN is down.
> 2. Delete snapshot is called; it deletes the blocks from the blocks map and 
> all datanodes. Log sync also happened.
> 3. Before the next log roll, the NN crashed.
> 4. When the namenode restarts, it loads the fsimage and the finalized edits from 
> shared storage and sets the safemode threshold, which includes blocks from the 
> deleted snapshot as well (because these edits are not yet read, since the 
> namenode was restarted before the last edits segment was finalized).
> 5. When it becomes active, it finalizes the edits and reads the delete 
> snapshot edits_op, but at this point it does not reduce the safemode count, 
> so it continues in safemode.
> 6. Only on the next restart, since the edits are already finalized, will it 
> read them at startup and set the safemode threshold correctly.
> So one more restart will bring the NN out of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5471) CacheAdmin -listPools fails when user lacks permissions to view all pools

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821350#comment-13821350
 ] 

Hudson commented on HDFS-5471:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1581 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1581/])
HDFS-5471. CacheAdmin -listPools fails when user lacks permissions to view all 
pools (Andrew Wang via Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541323)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/IdNotFoundException.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/InvalidRequestException.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java


> CacheAdmin -listPools fails when user lacks permissions to view all pools
> -
>
> Key: HDFS-5471
> URL: https://issues.apache.org/jira/browse/HDFS-5471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hdfs-5471-1.patch, hdfs-5471-2.patch, hdfs-5471-3.patch
>
>
> When a user does not have read permissions to a cache pool and executes "hdfs 
> cacheadmin -listPools", the command fails, complaining about missing 
> required fields with something like:
> {code}
> [schu@hdfs-nfs ~]$ hdfs cacheadmin -listPools
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Message missing required fields: ownerName, groupName, mode, weight
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ListCachePoolsResponseElementProto$Builder.build(ClientNamenodeProtocolProtos.java:51722)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCachePools(ClientNamenodeProtocolServerSideTranslatorPB.java:1200)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2057)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2051)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:675)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:85)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:90)
> [schu@hdfs-nfs ~]$ 
> {code}
> In this example, the pool "root" has 750 permissions, and the root superuser 
> is able to successfully -listPools:
> {code}
> [root@hdfs-nfs ~]# hdfs cacheadmin -listPools
> Found 4 results.
> NAME  OWNER  GROUP  MODE   WEIGHT 
> bar   root   root   rwxr-xr-x  100
> foo   root   root   rwxr-xr-x  100
> root  root   root   rwxr-x---  100
> schu  root   root   rwxr-xr-x  100
> [root@hdfs-nfs ~]# 
> {code}
> When we modify the root pool to mode 755, the schu user can then run -listPools 
> successfully without error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs cach

[jira] [Commented] (HDFS-5467) Remove tab characters in hdfs-default.xml

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821337#comment-13821337
 ] 

Hudson commented on HDFS-5467:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1581 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1581/])
HDFS-5467. Remove tab characters in hdfs-default.xml. Contributed by Shinichi 
Yamashita. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540816)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Remove tab characters in hdfs-default.xml
> -
>
> Key: HDFS-5467
> URL: https://issues.apache.org/jira/browse/HDFS-5467
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Shinichi Yamashita
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.3.0
>
> Attachments: HDFS-5467.patch
>
>
> The retrycache parameters are indented with tabs rather than the normal 2 
> spaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5450) better API for getting the cached blocks locations

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821339#comment-13821339
 ] 

Hudson commented on HDFS-5450:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1581 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1581/])
HDFS-5450. better API for getting the cached blocks locations. Contributed by 
Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541338)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestBlockLocation.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java


> better API for getting the cached blocks locations
> --
>
> Key: HDFS-5450
> URL: https://issues.apache.org/jira/browse/HDFS-5450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5450-1.patch, hdfs-5450-2.patch, hdfs-5450-3.patch, 
> hdfs-5450-4.patch, hdfs-5450-5.patch
>
>
> Currently, we have to downcast the {{BlockLocation}} to {{HdfsBlockLocation}} 
> to get information about whether a replica is cached.  We should have this 
> information in {{BlockLocation}} instead.
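
A minimal sketch of the downcast workaround described above; the LocatedBlock accessor name (getCachedLocations) is assumed here for illustration only:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.HdfsBlockLocation;
import org.apache.hadoop.fs.Path;

public class CachedReplicaCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus stat = fs.getFileStatus(new Path(args[0]));
    for (BlockLocation loc : fs.getFileBlockLocations(stat, 0, stat.getLen())) {
      // HDFS-specific downcast that this jira proposes to make unnecessary
      HdfsBlockLocation hdfsLoc = (HdfsBlockLocation) loc;
      boolean cached = hdfsLoc.getLocatedBlock().getCachedLocations().length > 0;
      System.out.println(loc + " cached=" + cached);
    }
  }
}
{code}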



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5485) add command-line support for modifyDirective

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821338#comment-13821338
 ] 

Hudson commented on HDFS-5485:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1581 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1581/])
HDFS-5485. add command-line support for modifyDirective (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541377)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> add command-line support for modifyDirective
> 
>
> Key: HDFS-5485
> URL: https://issues.apache.org/jira/browse/HDFS-5485
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5485.001.patch, HDFS-5485.002.patch
>
>
> add command-line support for modifyDirective



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5488) Clean up TestHftpURLTimeout

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821336#comment-13821336
 ] 

Hudson commented on HDFS-5488:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1581 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1581/])
HDFS-5488. Clean up TestHftpURLTimeout. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540894)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHftpURLTimeouts.java


> Clean up TestHftpURLTimeout
> ---
>
> Key: HDFS-5488
> URL: https://issues.apache.org/jira/browse/HDFS-5488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5488.000.patch
>
>
> HftpFileSystem uses URLConnectionFactory to set the timeout of each HTTP 
> connection. This jira cleans up TestHftpURLTimeouts and merges its unit tests 
> into TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5320) Add datanode caching metrics

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821342#comment-13821342
 ] 

Hudson commented on HDFS-5320:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1581 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1581/])
HDFS-5320. Add datanode caching metrics. Contributed by Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540796)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


> Add datanode caching metrics
> 
>
> Key: HDFS-5320
> URL: https://issues.apache.org/jira/browse/HDFS-5320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5320-1.patch, hdfs-5320-2.patch, hdfs-5320-3.patch
>
>
> It'd be good to hook up datanode metrics for # (blocks/bytes) 
> (cached/uncached/failed to cache) over different time windows 
> (eternity/1hr/10min/1min).
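
A rough sketch of the kind of counters this asks for, using the hadoop metrics2 annotations; the class, field, and method names are illustrative assumptions, not what the committed patch necessarily uses, and the rolling time windows would be layered on top by the metrics system:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

/** Illustrative counter names only; not necessarily what the patch uses. */
@Metrics(about = "DataNode cache metrics sketch", context = "dfs")
public class DataNodeCacheMetricsSketch {
  @Metric("Blocks cached") MutableCounterLong blocksCached;
  @Metric("Blocks uncached") MutableCounterLong blocksUncached;
  @Metric("Blocks that failed to cache") MutableCounterLong blocksFailedToCache;
  @Metric("Bytes cached") MutableCounterLong bytesCached;

  static DataNodeCacheMetricsSketch create() {
    // Registering with the metrics system instantiates the annotated fields.
    return DefaultMetricsSystem.instance().register(
        "DataNodeCacheMetricsSketch", "sketch", new DataNodeCacheMetricsSketch());
  }

  void onBlockCached(long numBytes) {
    blocksCached.incr();
    bytesCached.incr(numBytes);
  }

  void onCacheFailure() {
    blocksFailedToCache.incr();
  }
}
{code}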



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5425) Renaming underconstruction file with snapshots can make NN failure on restart

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821348#comment-13821348
 ] 

Hudson commented on HDFS-5425:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1581 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1581/])
HDFS-5425. Renaming underconstruction file with snapshots can make NN failure 
on restart. Contributed by Vinay and Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541261)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


> Renaming underconstruction file with snapshots can make NN failure on restart
> -
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5425.001.patch, HDFS-5425.patch, HDFS-5425.patch, 
> HDFS-5425.patch
>
>
> I faced this while doing some snapshot operations like createSnapshot and 
> renameSnapshot: when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5495) Remove further JUnit3 usages from HDFS

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821345#comment-13821345
 ] 

Hudson commented on HDFS-5495:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1581 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1581/])
Move HDFS-5495 to 2.3.0 in CHANGES.txt (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540917)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HDFS-5495. Remove further JUnit3 usages from HDFS. Contributed by Jarek Jarcec 
Cecho. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540914)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSWithKerberos.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/TestDelegationTokenManagerService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAbandonBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientExcludedNodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileInputStreamCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestMultipleNNDataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/TestAvailableSpaceVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java


> Remove further JUnit3 usages from HDFS
> --
>
> Key: HDFS-5495
> URL: https://issues.apache.org/jira/browse/HDFS-5495
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Jarek Jarcec Cecho
>  Labels: newbie
> Fix For: 2.3.0
>
> Attachments: HDFS-5495-trunk.patch
>
>
> We're trying to move away from JUnit 3 to JUnit 4 in Hadoop. One easy way of 
> checking for remaining usages is something like the following:
> {code}
> -> % ack junit.framework -l
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAbandonBlock.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientExcludedNodes.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileInputStreamCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestMultipleNNDataBlockScanner.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/TestAvailableSpaceVolumeChoosingPolicy.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.jav

[jira] [Commented] (HDFS-5467) Remove tab characters in hdfs-default.xml

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821314#comment-13821314
 ] 

Hudson commented on HDFS-5467:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
HDFS-5467. Remove tab characters in hdfs-default.xml. Contributed by Shinichi 
Yamashita. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540816)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Remove tab characters in hdfs-default.xml
> -
>
> Key: HDFS-5467
> URL: https://issues.apache.org/jira/browse/HDFS-5467
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Shinichi Yamashita
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.3.0
>
> Attachments: HDFS-5467.patch
>
>
> The retrycache parameters are indented with tabs rather than the normal 2 
> spaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5488) Clean up TestHftpURLTimeout

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821313#comment-13821313
 ] 

Hudson commented on HDFS-5488:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
HDFS-5488. Clean up TestHftpURLTimeout. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540894)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHftpURLTimeouts.java


> Clean up TestHftpURLTimeout
> ---
>
> Key: HDFS-5488
> URL: https://issues.apache.org/jira/browse/HDFS-5488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5488.000.patch
>
>
> HftpFileSystem uses URLConnectionFactory to set the timeout of each HTTP 
> connection. This jira cleans up TestHftpURLTimeouts and merges its unit tests 
> into TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5495) Remove further JUnit3 usages from HDFS

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821322#comment-13821322
 ] 

Hudson commented on HDFS-5495:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
Move HDFS-5495 to 2.3.0 in CHANGES.txt (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540917)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HDFS-5495. Remove further JUnit3 usages from HDFS. Contributed by Jarek Jarcec 
Cecho. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540914)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSWithKerberos.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/TestDelegationTokenManagerService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAbandonBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientExcludedNodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileInputStreamCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestMultipleNNDataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/TestAvailableSpaceVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java


> Remove further JUnit3 usages from HDFS
> --
>
> Key: HDFS-5495
> URL: https://issues.apache.org/jira/browse/HDFS-5495
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Jarek Jarcec Cecho
>  Labels: newbie
> Fix For: 2.3.0
>
> Attachments: HDFS-5495-trunk.patch
>
>
> We're trying to move away from JUnit 3 to JUnit 4 in Hadoop. One easy way of 
> checking for remaining usages is something like the following:
> {code}
> -> % ack junit.framework -l
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAbandonBlock.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientExcludedNodes.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileInputStreamCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestMultipleNNDataBlockScanner.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/TestAvailableSpaceVolumeChoosingPolicy.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTo

[jira] [Commented] (HDFS-5485) add command-line support for modifyDirective

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821315#comment-13821315
 ] 

Hudson commented on HDFS-5485:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
HDFS-5485. add command-line support for modifyDirective (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541377)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> add command-line support for modifyDirective
> 
>
> Key: HDFS-5485
> URL: https://issues.apache.org/jira/browse/HDFS-5485
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5485.001.patch, HDFS-5485.002.patch
>
>
> add command-line support for modifyDirective



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5320) Add datanode caching metrics

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821319#comment-13821319
 ] 

Hudson commented on HDFS-5320:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
HDFS-5320. Add datanode caching metrics. Contributed by Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540796)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


> Add datanode caching metrics
> 
>
> Key: HDFS-5320
> URL: https://issues.apache.org/jira/browse/HDFS-5320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5320-1.patch, hdfs-5320-2.patch, hdfs-5320-3.patch
>
>
> It'd be good to hook up datanode metrics for # (blocks/bytes) 
> (cached/uncached/failed to cache) over different time windows 
> (eternity/1hr/10min/1min).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5450) better API for getting the cached blocks locations

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821316#comment-13821316
 ] 

Hudson commented on HDFS-5450:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
HDFS-5450. better API for getting the cached blocks locations. Contributed by 
Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541338)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestBlockLocation.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java


> better API for getting the cached blocks locations
> --
>
> Key: HDFS-5450
> URL: https://issues.apache.org/jira/browse/HDFS-5450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5450-1.patch, hdfs-5450-2.patch, hdfs-5450-3.patch, 
> hdfs-5450-4.patch, hdfs-5450-5.patch
>
>
> Currently, we have to downcast the {{BlockLocation}} to {{HdfsBlockLocation}} 
> to get information about whether a replica is cached.  We should have this 
> information in {{BlockLocation}} instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5425) Renaming underconstruction file with snapshots can make NN failure on restart

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821325#comment-13821325
 ] 

Hudson commented on HDFS-5425:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
HDFS-5425. Renaming underconstruction file with snapshots can make NN failure 
on restart. Contributed by Vinay and Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541261)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


> Renaming underconstruction file with snapshots can make NN failure on restart
> -
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5425.001.patch, HDFS-5425.patch, HDFS-5425.patch, 
> HDFS-5425.patch
>
>
> I faced this while doing some snapshot operations like createSnapshot and 
> renameSnapshot: when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5471) CacheAdmin -listPools fails when user lacks permissions to view all pools

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821327#comment-13821327
 ] 

Hudson commented on HDFS-5471:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
HDFS-5471. CacheAdmin -listPools fails when user lacks permissions to view all 
pools (Andrew Wang via Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541323)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/IdNotFoundException.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/InvalidRequestException.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java


> CacheAdmin -listPools fails when user lacks permissions to view all pools
> -
>
> Key: HDFS-5471
> URL: https://issues.apache.org/jira/browse/HDFS-5471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hdfs-5471-1.patch, hdfs-5471-2.patch, hdfs-5471-3.patch
>
>
> When a user does not have read permissions to a cache pool and executes "hdfs 
> cacheadmin -listPools", the command fails, complaining about missing 
> required fields with something like:
> {code}
> [schu@hdfs-nfs ~]$ hdfs cacheadmin -listPools
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Message missing required fields: ownerName, groupName, mode, weight
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ListCachePoolsResponseElementProto$Builder.build(ClientNamenodeProtocolProtos.java:51722)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCachePools(ClientNamenodeProtocolServerSideTranslatorPB.java:1200)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2057)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2051)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:675)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:85)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:90)
> [schu@hdfs-nfs ~]$ 
> {code}
> In this example, the pool "root" has 750 permissions, and the root superuser 
> is able to successfully -listPools:
> {code}
> [root@hdfs-nfs ~]# hdfs cacheadmin -listPools
> Found 4 results.
> NAME  OWNER  GROUP  MODE   WEIGHT 
> bar   root   root   rwxr-xr-x  100
> foo   root   root   rwxr-xr-x  100
> root  root   root   rwxr-x---  100
> schu  root   root   rwxr-xr-x  100
> [root@hdfs-nfs ~]# 
> {code}
> When we modify the root pool to mode 755, the schu user can then run -listPools 
> successfully without error.
> {code}
> [schu@hdfs-nfs ~]$

[jira] [Commented] (HDFS-5461) fallback to non-ssr(local short circuit reads) while oom detected

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821258#comment-13821258
 ] 

Hadoop QA commented on HDFS-5461:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612520/HDFS-5461.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.TestPathBasedCacheRequests

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5420//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5420//console

This message is automatically generated.

> fallback to non-ssr(local short circuit reads) while oom detected
> -
>
> Key: HDFS-5461
> URL: https://issues.apache.org/jira/browse/HDFS-5461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5461.txt
>
>
> Currently, the DirectBufferPool used by the ssr feature does not seem to have 
> an upper-bound limit other than the DirectMemory VM option, so there is a risk 
> of hitting a direct memory OOM; see HBASE-8143 for an example.
> IMHO, maybe we could improve it a bit:
> 1) detect an OOM, or that a configured upper limit has been reached by the 
> caller, and then fall back to non-ssr
> 2) add a new metric for the current raw consumed direct memory size.
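
A minimal sketch of the two points above: fall back when a direct-buffer allocation hits the memory limit, and track raw direct memory consumption. Class, field, and method names are illustrative, not the actual DirectBufferPool/short-circuit read code:

{code}
import java.nio.ByteBuffer;

/**
 * Sketch of point 1 (fall back when direct memory is exhausted) and point 2
 * (track raw direct memory consumption).
 */
public class ScrBufferAllocator {
  private final long upperLimitBytes;
  private long rawDirectBytes;   // candidate value for the new metric

  public ScrBufferAllocator(long upperLimitBytes) {
    this.upperLimitBytes = upperLimitBytes;
  }

  /** Returns a direct buffer, or null to tell the caller to use non-ssr reads. */
  public synchronized ByteBuffer tryAllocateDirect(int size) {
    if (rawDirectBytes + size > upperLimitBytes) {
      return null;               // configured up-limit reached: fall back
    }
    try {
      ByteBuffer buf = ByteBuffer.allocateDirect(size);
      rawDirectBytes += size;
      return buf;
    } catch (OutOfMemoryError oom) {
      return null;               // direct memory OOM detected: fall back
    }
  }

  public synchronized long getRawDirectMemoryBytes() {
    return rawDirectBytes;
  }
}
{code}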



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5504) In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, leads to NN safemode.

2013-11-13 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5504:


Attachment: HDFS-5504.patch

Attached the updated patch as per comments.
Please review.

> In HA mode, OP_DELETE_SNAPSHOT is not decrementing the safemode threshold, 
> leads to NN safemode.
> 
>
> Key: HDFS-5504
> URL: https://issues.apache.org/jira/browse/HDFS-5504
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5504.patch, HDFS-5504.patch
>
>
> 1. HA installation, standby NN is down.
> 2. Delete snapshot is called; it deletes the blocks from the blocksmap and 
> all datanodes, and the log sync also happens.
> 3. Before the next log roll, the NN crashes.
> 4. When the namenode restarts, it loads the fsimage and finalized edits from 
> shared storage and sets the safemode threshold, which still includes blocks 
> from the deleted snapshot (because those edits have not been read yet, since 
> the namenode was restarted before the last edits segment was finalized).
> 5. When it becomes active, it finalizes the edits and reads the delete 
> snapshot edit op, but at that point it does not reduce the safemode count, 
> so it remains in safemode.
> 6. On the next restart, since the edits are already finalized, it reads them 
> at startup and sets the safemode threshold correctly.
> So one more restart will bring the NN out of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5485) add command-line support for modifyDirective

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821203#comment-13821203
 ] 

Hudson commented on HDFS-5485:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #390 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/390/])
HDFS-5485. add command-line support for modifyDirective (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541377)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> add command-line support for modifyDirective
> 
>
> Key: HDFS-5485
> URL: https://issues.apache.org/jira/browse/HDFS-5485
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5485.001.patch, HDFS-5485.002.patch
>
>
> add command-line support for modifyDirective



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5495) Remove further JUnit3 usages from HDFS

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821210#comment-13821210
 ] 

Hudson commented on HDFS-5495:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #390 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/390/])
Move HDFS-5495 to 2.3.0 in CHANGES.txt (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540917)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HDFS-5495. Remove further JUnit3 usages from HDFS. Contributed by Jarek Jarcec 
Cecho. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540914)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSKerberosAuthenticationHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSWithKerberos.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/TestDelegationTokenManagerService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAbandonBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientExcludedNodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileInputStreamCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestMultipleNNDataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/TestAvailableSpaceVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java


> Remove further JUnit3 usages from HDFS
> --
>
> Key: HDFS-5495
> URL: https://issues.apache.org/jira/browse/HDFS-5495
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Jarek Jarcec Cecho
>  Labels: newbie
> Fix For: 2.3.0
>
> Attachments: HDFS-5495-trunk.patch
>
>
> We're trying to move away from JUnit 3 to JUnit 4 in Hadoop. One easy way of 
> checking for remaining usages is something like the following:
> {code}
> -> % ack junit.framework -l
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAbandonBlock.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientExcludedNodes.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileInputStreamCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCachingStrategy.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestMultipleNNDataBlockScanner.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/TestAvailableSpaceVolumeChoosingPolicy.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
> hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java

[jira] [Commented] (HDFS-5467) Remove tab characters in hdfs-default.xml

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821202#comment-13821202
 ] 

Hudson commented on HDFS-5467:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #390 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/390/])
HDFS-5467. Remove tab characters in hdfs-default.xml. Contributed by Shinichi 
Yamashita. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540816)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Remove tab characters in hdfs-default.xml
> -
>
> Key: HDFS-5467
> URL: https://issues.apache.org/jira/browse/HDFS-5467
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Shinichi Yamashita
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.3.0
>
> Attachments: HDFS-5467.patch
>
>
> The retrycache parameters are indented with tabs rather than the normal 2 
> spaces.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5320) Add datanode caching metrics

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821207#comment-13821207
 ] 

Hudson commented on HDFS-5320:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #390 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/390/])
HDFS-5320. Add datanode caching metrics. Contributed by Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540796)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


> Add datanode caching metrics
> 
>
> Key: HDFS-5320
> URL: https://issues.apache.org/jira/browse/HDFS-5320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5320-1.patch, hdfs-5320-2.patch, hdfs-5320-3.patch
>
>
> It'd be good to hook up datanode metrics for # (blocks/bytes) 
> (cached/uncached/failed to cache) over different time windows 
> (eternity/1hr/10min/1min).
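As a point of reference, below is a minimal metrics2-style sketch of the lifetime ("eternity") counters only. The class and counter names are hypothetical and do not match the fields the attached patches add; the rolling 1hr/10min/1min windows would need extra machinery and are omitted here.

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical names, illustrative only: not the committed FSDatasetMBean fields.
@Metrics(name = "DataNodeCacheSketch", about = "Datanode caching counters", context = "dfs")
public class DataNodeCacheMetricsSketch {
  @Metric("Blocks cached since datanode start") MutableCounterLong blocksCached;
  @Metric("Blocks uncached since datanode start") MutableCounterLong blocksUncached;
  @Metric("Blocks that failed to cache") MutableCounterLong blocksFailedToCache;

  public static DataNodeCacheMetricsSketch create() {
    // Register with the default metrics system so the counters show up via JMX.
    return DefaultMetricsSystem.instance().register(
        "DataNodeCacheSketch", null, new DataNodeCacheMetricsSketch());
  }

  // The caching code would call these as mmap/munmap operations complete.
  public void incrBlocksCached() { blocksCached.incr(); }
  public void incrBlocksUncached() { blocksUncached.incr(); }
  public void incrBlocksFailedToCache() { blocksFailedToCache.incr(); }
}
{code}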



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5471) CacheAdmin -listPools fails when user lacks permissions to view all pools

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821215#comment-13821215
 ] 

Hudson commented on HDFS-5471:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #390 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/390/])
HDFS-5471. CacheAdmin -listPools fails when user lacks permissions to view all 
pools (Andrew Wang via Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541323)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/IdNotFoundException.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/InvalidRequestException.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java


> CacheAdmin -listPools fails when user lacks permissions to view all pools
> -
>
> Key: HDFS-5471
> URL: https://issues.apache.org/jira/browse/HDFS-5471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hdfs-5471-1.patch, hdfs-5471-2.patch, hdfs-5471-3.patch
>
>
> When a user does not have read permissions on a cache pool and executes "hdfs 
> cacheadmin -listPools", the command errors out complaining about missing 
> required fields, with something like:
> {code}
> [schu@hdfs-nfs ~]$ hdfs cacheadmin -listPools
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Message missing required fields: ownerName, groupName, mode, weight
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ListCachePoolsResponseElementProto$Builder.build(ClientNamenodeProtocolProtos.java:51722)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCachePools(ClientNamenodeProtocolServerSideTranslatorPB.java:1200)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2057)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2051)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$ListCachePoolsCommand.run(CacheAdmin.java:675)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:85)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:90)
> [schu@hdfs-nfs ~]$ 
> {code}
> In this example, the pool "root" has 750 permissions, and the root superuser 
> is able to successfully -listPools:
> {code}
> [root@hdfs-nfs ~]# hdfs cacheadmin -listPools
> Found 4 results.
> NAME  OWNER  GROUP  MODE   WEIGHT 
> bar   root   root   rwxr-xr-x  100
> foo   root   root   rwxr-xr-x  100
> root  root   root   rwxr-x---  100
> schu  root   root   rwxr-xr-x  100
> [root@hdfs-nfs ~]# 
> {code}
> When we modify the root pool to mode 755, schu user can now -listPools 
> successfully without error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs cachea

[jira] [Commented] (HDFS-5425) Renaming underconstruction file with snapshots can make NN failure on restart

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821213#comment-13821213
 ] 

Hudson commented on HDFS-5425:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #390 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/390/])
HDFS-5425. Renaming underconstruction file with snapshots can make NN failure 
on restart. Contributed by Vinay and Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541261)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


> Renaming underconstruction file with snapshots can make NN failure on restart
> -
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5425.001.patch, HDFS-5425.patch, HDFS-5425.patch, 
> HDFS-5425.patch
>
>
> I faced this when I was doing some snapshot operations like createSnapshot and 
> renameSnapshot. When I restarted my NN, it shut down with this exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5450) better API for getting the cached blocks locations

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821204#comment-13821204
 ] 

Hudson commented on HDFS-5450:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #390 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/390/])
HDFS-5450. better API for getting the cached blocks locations. Contributed by 
Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541338)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestBlockLocation.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/HdfsBlockLocation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathBasedCacheRequests.java


> better API for getting the cached blocks locations
> --
>
> Key: HDFS-5450
> URL: https://issues.apache.org/jira/browse/HDFS-5450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-5450-1.patch, hdfs-5450-2.patch, hdfs-5450-3.patch, 
> hdfs-5450-4.patch, hdfs-5450-5.patch
>
>
> Currently, we have to downcast the {{BlockLocation}} to {{HdfsBlockLocation}} 
> to get information about whether a replica is cached.  We should have this 
> information in {{BlockLocation}} instead.
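For illustration, assuming the new accessor on {{BlockLocation}} is named getCachedHosts() (a hypothetical name here, inferred only from the intent above), client code would no longer need the downcast:

{code}
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CachedLocationsSketch {
  /** Prints, per block, which hosts hold a cached replica of the given file. */
  public static void printCachedHosts(FileSystem fs, Path path) throws IOException {
    FileStatus status = fs.getFileStatus(path);
    BlockLocation[] locations = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation loc : locations) {
      // Previously a caller had to downcast to HdfsBlockLocation to reach the
      // HDFS-specific caching information. With the cached hosts exposed on
      // BlockLocation itself, no downcast is needed:
      String[] cachedHosts = loc.getCachedHosts();
      System.out.println("offset=" + loc.getOffset() + " len=" + loc.getLength()
          + " cachedOn=" + Arrays.toString(cachedHosts));
    }
  }
}
{code}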



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5488) Clean up TestHftpURLTimeout

2013-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821201#comment-13821201
 ] 

Hudson commented on HDFS-5488:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #390 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/390/])
HDFS-5488. Clean up TestHftpURLTimeout. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540894)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHftpURLTimeouts.java


> Clean up TestHftpURLTimeout
> ---
>
> Key: HDFS-5488
> URL: https://issues.apache.org/jira/browse/HDFS-5488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5488.000.patch
>
>
> HftpFileSystem uses URLConnectionFactory to set the timeout of each http 
> connection. This jira cleans up TestHftpURLTimeouts and merges its unit tests 
> into TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5499) Provide way to throttle per FileSystem read/write bandwidth

2013-11-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821185#comment-13821185
 ] 

Steve Loughran commented on HDFS-5499:
--

I've looked at it a bit within the context of YARN.

YARN containers are where this would be ideal, as then you'd be able to request 
IO capacity as well as CPU and RAM. For that to work, the throttling would have 
to be outside the App, both because you are trying to limit code whether or not 
it wants to be limited, and because you probably want to give it more bandwidth 
if the system is otherwise idle. Self-throttling doesn't pick up spare IO.

* you can use cgroups in YARN to throttle local disk IO through the file:// 
URLs or the java filesystem APIs, such as for MR temp data
* you can't cgroup-throttle HDFS per YARN container, which would be the ideal 
use case for it. The IO takes place in the DN, and cgroups only limit IO in the 
throttled process group.
* implementing it in the DN would require much more complex code there to 
prioritise work based on block ID (the sole identifier that goes around 
everywhere) or input source (local sockets for HBase IO vs the TCP stack)
* once you go to a heterogeneous filesystem, you need to think about IO load 
per storage layer as well as per volume
* there's also a generic RPC request throttle to prevent DoS against the NN and 
other HDFS services. That would need to be server side, but once implemented in 
the RPC code it would be universal.

You also need to define what load you are trying to throttle: pure RPCs/second, 
read bandwidth, write bandwidth, seeks, or IOPS. Once a file is lined up for 
sequential reading, you'd almost want it to stream through the next blocks 
until a high-priority request came through, but operations like a seek that 
involve a backwards disk head movement would be something to throttle (hence 
you need to be storage-type aware, as SSD seeks cost less). You also need to 
consider that although the cost of writes is high, writing is usually done with 
the goal of preserving data, and you don't want to impact durability.
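
To make the self-throttling option concrete, here is a minimal read-bandwidth throttle as a sketch. It is not the attached patch; a real per-FileSystem throttle would wrap the streams the FileSystem hands out (and cover writes too), and, as noted above, it cannot claim spare IO the way an external, DN-side throttle could.

{code}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Illustrative sketch only: caps the average read bandwidth of one stream. */
public class ThrottledInputStream extends FilterInputStream {
  private final long bytesPerSecond;
  private final long startNanos = System.nanoTime();
  private long bytesRead;

  public ThrottledInputStream(InputStream in, long bytesPerSecond) {
    super(in);
    this.bytesPerSecond = bytesPerSecond;
  }

  @Override
  public int read() throws IOException {
    throttle();
    int b = super.read();
    if (b >= 0) {
      bytesRead++;
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    throttle();
    int n = super.read(buf, off, len);
    if (n > 0) {
      bytesRead += n;
    }
    return n;
  }

  /** Sleeps long enough to keep the observed rate at or below bytesPerSecond. */
  private void throttle() throws IOException {
    long expectedNanos = bytesRead * 1000000000L / bytesPerSecond;
    long sleepMillis = (expectedNanos - (System.nanoTime() - startNanos)) / 1000000L;
    if (sleepMillis > 0) {
      try {
        Thread.sleep(sleepMillis);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted while throttling", e);
      }
    }
  }
}
{code}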


> Provide way to throttle per FileSystem read/write bandwidth
> ---
>
> Key: HDFS-5499
> URL: https://issues.apache.org/jira/browse/HDFS-5499
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lohit Vijayarenu
> Attachments: HDFS-5499.1.patch
>
>
> In some cases it might be worth throttling read/write bandwidth on a per-JVM 
> basis so that clients do not spawn too many threads and start data movement 
> that causes other JVMs to starve. The ability to throttle read/write bandwidth 
> per FileSystem would help avoid such issues. 
> The challenge seems to be how well this can fit into the FileSystem code. If 
> one enables throttling around the FileSystem APIs, then any hidden data 
> transfer within the cluster that uses them might also be affected, e.g. 
> copying the job jar during job submission, localizing resources for the 
> distributed cache, and so on. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5461) fallback to non-ssr(local short circuit reads) while oom detected

2013-11-13 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie reassigned HDFS-5461:
---

Assignee: Liang Xie

> fallback to non-ssr(local short circuit reads) while oom detected
> -
>
> Key: HDFS-5461
> URL: https://issues.apache.org/jira/browse/HDFS-5461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5461.txt
>
>
> Currently, the DirectBufferPool used by the ssr feature doesn't seem to have 
> an upper-bound limit other than the DirectMemory VM option, so there's a risk 
> of hitting a direct-memory OOM; see HBASE-8143 for an example.
> IMHO, maybe we could improve it a bit:
> 1) detect OOM, or a configured upper limit being reached by the caller, and 
> fall back to non-ssr
> 2) add a new metric for the raw direct memory size currently consumed.
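A minimal sketch of points 1) and 2), assuming the fallback decision is made at the allocation site; the class and method names are hypothetical and this is not the attached patch:

{code}
import java.nio.ByteBuffer;

/** Illustrative sketch: cap direct memory and signal fallback when allocation fails. */
public class BoundedDirectAllocator {
  private final long maxDirectBytes;
  private long allocatedBytes;   // rough metric of raw direct memory handed out (point 2)

  public BoundedDirectAllocator(long maxDirectBytes) {
    this.maxDirectBytes = maxDirectBytes;
  }

  /** Returns a direct buffer, or null to tell the caller to fall back to non-ssr reads. */
  public synchronized ByteBuffer tryAllocate(int size) {
    if (allocatedBytes + size > maxDirectBytes) {
      return null;                       // configured limit reached: fall back
    }
    try {
      ByteBuffer buf = ByteBuffer.allocateDirect(size);
      allocatedBytes += size;
      return buf;
    } catch (OutOfMemoryError e) {
      return null;                       // direct memory exhausted: fall back
    }
  }

  public synchronized long getAllocatedBytes() {
    return allocatedBytes;
  }
}
{code}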



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5461) fallback to non-ssr(local short circuit reads) while oom detected

2013-11-13 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-5461:


Status: Patch Available  (was: Open)

> fallback to non-ssr(local short circuit reads) while oom detected
> -
>
> Key: HDFS-5461
> URL: https://issues.apache.org/jira/browse/HDFS-5461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-5461.txt
>
>
> Currently, the DirectBufferPool used by the ssr feature doesn't seem to have 
> an upper-bound limit other than the DirectMemory VM option, so there's a risk 
> of hitting a direct-memory OOM; see HBASE-8143 for an example.
> IMHO, maybe we could improve it a bit:
> 1) detect OOM, or a configured upper limit being reached by the caller, and 
> fall back to non-ssr
> 2) add a new metric for the raw direct memory size currently consumed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5428) under construction files deletion after snapshot+checkpoint+nn restart leads nn safemode

2013-11-13 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821075#comment-13821075
 ] 

Vinay commented on HDFS-5428:
-

Since HDFS-5425 is committed, this latest patch needs a rebase on trunk.

I verified the rename+append scenario mentioned in the previous comments and 
found no issues with it.

+1 from my side once the patch is rebased.


> under construction files deletion after snapshot+checkpoint+nn restart leads 
> nn safemode
> 
>
> Key: HDFS-5428
> URL: https://issues.apache.org/jira/browse/HDFS-5428
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5428-v2.patch, HDFS-5428.000.patch, 
> HDFS-5428.001.patch, HDFS-5428.patch
>
>
> 1. allow snapshots under dir /foo
> 2. create a file /foo/test/bar and start writing to it
> 3. create a snapshot s1 under /foo after a block is allocated and some data 
> has been written to it
> 4. delete the directory /foo/test
> 5. wait till a checkpoint, or do saveNamespace
> 6. restart the NN.
> The NN enters safemode.
> Analysis:
> Snapshot nodes loaded from the fsimage are always complete and all blocks 
> will be in the COMPLETE state. 
> So when the Datanode reports RBW blocks, those will not be updated in the 
> blocks map.
> Some of the FINALIZED blocks will be marked as corrupt due to length mismatch.
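A test-style sketch of the steps above using MiniDFSCluster (illustrative only; not the test in the attached patches):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class SnapshotUcDeleteRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      DistributedFileSystem fs = cluster.getFileSystem();

      Path foo = new Path("/foo");
      fs.mkdirs(new Path("/foo/test"));
      fs.allowSnapshot(foo);                           // step 1

      // step 2: create a file and keep it under construction (never closed here)
      FSDataOutputStream out = fs.create(new Path("/foo/test/bar"));
      out.write(new byte[1024]);
      out.hflush();                                    // step 3: block allocated, data written

      fs.createSnapshot(foo, "s1");                    // step 3: snapshot while file is open
      fs.delete(new Path("/foo/test"), true);          // step 4

      fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);   // step 5: force a new fsimage
      fs.saveNamespace();
      fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);

      cluster.restartNameNode();                       // step 6
      System.out.println("NN in safemode after restart: "
          + cluster.getFileSystem().isInSafeMode());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}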



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5461) fallback to non-ssr(local short circuit reads) while oom detected

2013-11-13 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821074#comment-13821074
 ] 

Liang Xie commented on HDFS-5461:
-

Could someone help review this one? Thanks!

> fallback to non-ssr(local short circuit reads) while oom detected
> -
>
> Key: HDFS-5461
> URL: https://issues.apache.org/jira/browse/HDFS-5461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Liang Xie
> Attachments: HDFS-5461.txt
>
>
> Currently, the DirectBufferPool used by the ssr feature doesn't seem to have 
> an upper-bound limit other than the DirectMemory VM option, so there's a risk 
> of hitting a direct-memory OOM; see HBASE-8143 for an example.
> IMHO, maybe we could improve it a bit:
> 1) detect OOM, or a configured upper limit being reached by the caller, and 
> fall back to non-ssr
> 2) add a new metric for the raw direct memory size currently consumed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5444) Choose default web UI based on browser capabilities

2013-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821072#comment-13821072
 ] 

Hadoop QA commented on HDFS-5444:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12613519/Screenshot-old.png
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5419//console

This message is automatically generated.

> Choose default web UI based on browser capabilities
> ---
>
> Key: HDFS-5444
> URL: https://issues.apache.org/jira/browse/HDFS-5444
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5444.000.patch, HDFS-5444.000.patch, 
> HDFS-5444.001.patch, Screenshot-new.png, Screenshot-old.png
>
>
> This jira changes the entry point of the web UI so that modern browsers with 
> JavaScript support are redirected to the new web UI, while other browsers 
> will automatically fall back to the old JSP-based UI.
> It also adds hyperlinks in both UIs to facilitate testing and evaluation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

