[jira] [Commented] (HDFS-4968) Provide configuration option for FileSystem symlink resolution

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706702#comment-13706702
 ] 

Hadoop QA commented on HDFS-4968:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591967/hdfs-4968-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test file.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warning.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4640//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4640//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4640//console

This message is automatically generated.

> Provide configuration option for FileSystem symlink resolution
> --
>
> Key: HDFS-4968
> URL: https://issues.apache.org/jira/browse/HDFS-4968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4968-1.patch
>
>
> With FileSystem symlink support incoming in HADOOP-8040, some clients will 
> wish to not transparently resolve symlinks. This is somewhat similar to 
> O_NOFOLLOW in open(2).
> The rationale is a security model where a user can invoke a third-party 
> service running as a service user to operate on the user's data. For 
> instance, users might want to use Hive to query data in their homedirs, where 
> Hive runs as the Hive user and the data is readable by the Hive user. This 
> leads to a security issue with symlinks:
> # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
> # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
> the query to proceed.
> # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
> that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
> readable by Mallory, but she can create whatever symlinks she wants in her 
> own scratch directory.
> # Hive's MR jobs happily resolve the symlinks and access Ann's private data.
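
For reference, the local-filesystem analogue of the requested behavior is open(2) with O_NOFOLLOW. A minimal Java illustration of those semantics, using plain NIO rather than the (not yet existing) FileSystem option this JIRA proposes:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class NoFollowCheck {
  public static void main(String[] args) throws IOException {
    Path p = Paths.get(args[0]);
    // Stat the path WITHOUT following symlinks, like open(2) with O_NOFOLLOW.
    BasicFileAttributes attrs =
        Files.readAttributes(p, BasicFileAttributes.class, LinkOption.NOFOLLOW_LINKS);
    if (attrs.isSymbolicLink()) {
      throw new IOException(p + " is a symlink; refusing to follow it");
    }
    // Only now is it known that the final path component is not a symlink.
  }
}
{code}

Note that a check-then-open like this is itself racy, which is why the JIRA proposes a resolution switch inside FileSystem rather than a caller-side check.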

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4912) Cleanup FSNamesystem#startFileInternal

2013-07-11 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706700#comment-13706700
 ] 

Jing Zhao commented on HDFS-4912:
-

+1 for the new patch.

> Cleanup FSNamesystem#startFileInternal
> --
>
> Key: HDFS-4912
> URL: https://issues.apache.org/jira/browse/HDFS-4912
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4912.1.patch, HDFS-4912.2.patch, HDFS-4912.patch
>
>
> FSNamesystem#startFileInternal is used by both create and append. This 
> results in ugly if-else conditions to handle append/create scenarios. This 
> method can be refactored and the code can be cleaned up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4912) Cleanup FSNamesystem#startFileInternal

2013-07-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4912:
--

Attachment: HDFS-4912.2.patch

Jing, thanks for the comments. Updated patch to address the comments.

> Cleanup FSNamesystem#startFileInternal
> --
>
> Key: HDFS-4912
> URL: https://issues.apache.org/jira/browse/HDFS-4912
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4912.1.patch, HDFS-4912.2.patch, HDFS-4912.patch
>
>
> FSNamesystem#startFileInternal is used by both create and append. This 
> results in ugly if-else conditions to handle append/create scenarios. This 
> method can be refactored and the code can be cleaned up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4912) Cleanup FSNamesystem#startFileInternal

2013-07-11 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706684#comment-13706684
 ] 

Jing Zhao commented on HDFS-4912:
-

The patch looks good to me. Only some minor comments:

1. In the javadoc of startFileInternal,
{code}
* 
-   * @return the last block locations if the block is partial or null otherwise
+   * Note that in this method
+   * 
+   * For description of parameters and exceptions thrown see
{code}

Looks like some content is missing here?

2. In appendFileInternal,
{code}
+final INodeFile myFile = INodeFile.valueOf(inode, src, true);
+if (isPermissionEnabled) {
+  checkPathAccess(pc, src, FsAction.WRITE);
+}
+
+try {
+  if (myFile == null) {
+    throw new FileNotFoundException("failed to append to non-existent file "
+        + src + " on client " + clientMachine);
+  }
{code}

For append, maybe we can check whether inode is null before we convert it to an 
INodeFile? I.e., we may add a null check before the INodeFile.valueOf call, as 
sketched below.
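
A minimal sketch of that reordering, reusing the names from the excerpt above (illustrative only, not the actual patch):

{code}
// Check the raw inode first, so a missing file is reported before any
// conversion or permission work happens.
if (inode == null) {
  throw new FileNotFoundException("failed to append to non-existent file "
      + src + " on client " + clientMachine);
}
final INodeFile myFile = INodeFile.valueOf(inode, src, true);
if (isPermissionEnabled) {
  checkPathAccess(pc, src, FsAction.WRITE);
}
{code}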

Aside from these two minor points, +1 for the patch.

> Cleanup FSNamesystem#startFileInternal
> --
>
> Key: HDFS-4912
> URL: https://issues.apache.org/jira/browse/HDFS-4912
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4912.1.patch, HDFS-4912.patch
>
>
> FSNamesystem#startFileInternal is used by both create and append. This 
> results in ugly if-else conditions to handle append/create scenarios. This 
> method can be refactored and the code can be cleaned up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4912) Cleanup FSNamesystem#startFileInternal

2013-07-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706669#comment-13706669
 ] 

Suresh Srinivas commented on HDFS-4912:
---

The release audit warning complains about "hs_err_pid12966.log", which is not 
part of this patch. It is probably because of some cleanup issue on the 
Jenkins machine.

The eclipse issue that is flagged is also unrelated to this patch:
{quote}
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failed to read artifact descriptor for 
org.eclipse.m2e:lifecycle-mapping:jar:1.0.0
{quote}

> Cleanup FSNamesystem#startFileInternal
> --
>
> Key: HDFS-4912
> URL: https://issues.apache.org/jira/browse/HDFS-4912
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4912.1.patch, HDFS-4912.patch
>
>
> FSNamesystem#startFileInternal is used by both create and append. This 
> results in ugly if-else conditions to handle append/create scenarios. This 
> method can be refactored and the code can be cleaned up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4374) Display NameNode startup progress in UI

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4374:


   Resolution: Fixed
Fix Version/s: 2.1.0-beta
   Status: Resolved  (was: Patch Available)

This has been committed to trunk, branch-2, branch-2.1-beta, and 
branch-2.1.0-beta.

> Display NameNode startup progress in UI
> ---
>
> Key: HDFS-4374
> URL: https://issues.apache.org/jira/browse/HDFS-4374
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HDFS-4374.1.patch, HDFS-4374.2.patch
>
>
> Display the information about the NameNode's startup progress in the NameNode 
> web UI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4249) Add status NameNode startup to webUI

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-4249.
-

   Resolution: Fixed
Fix Version/s: 2.1.0-beta
   3.0.0
 Hadoop Flags: Reviewed

All related patches have been committed to trunk, branch-2, branch-2.1-beta, 
and branch-2.1.0-beta.

> Add status NameNode startup to webUI 
> -
>
> Key: HDFS-4249
> URL: https://issues.apache.org/jira/browse/HDFS-4249
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Suresh Srinivas
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HDFS-4249.1.pdf, HDFS-4249-1.png, HDFS-4249-2.png, 
> HDFS-4249-3.png, HDFS-4249-4.png, HDFS-4249-5.png
>
>
> Currently the NameNode web UI server starts only after the fsimage is 
> loaded, edits are applied, and the checkpoint is complete. Any status related 
> to the namenode starting up is available only in the logs. I propose starting 
> the webserver before loading the namespace and providing namenode startup 
> information.
> More details in the next comment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4373) Add HTTP API for querying NameNode startup progress

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4373:


   Resolution: Fixed
Fix Version/s: 2.1.0-beta
   Status: Resolved  (was: Patch Available)

This has been committed to trunk, branch-2, branch-2.1-beta, and 
branch-2.1.0-beta.

> Add HTTP API for querying NameNode startup progress
> ---
>
> Key: HDFS-4373
> URL: https://issues.apache.org/jira/browse/HDFS-4373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HDFS-4373.1.patch, HDFS-4373.2.patch, HDFS-4373.3.patch
>
>
> Provide an HTTP API for non-browser clients to query the NameNode's current 
> progress through startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4372) Track NameNode startup progress

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4372:


   Resolution: Fixed
Fix Version/s: 2.1.0-beta
   Status: Resolved  (was: Patch Available)

This has been committed to trunk, branch-2, branch-2.1-beta, and 
branch-2.1.0-beta.

> Track NameNode startup progress
> ---
>
> Key: HDFS-4372
> URL: https://issues.apache.org/jira/browse/HDFS-4372
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HDFS-4372.1.patch, HDFS-4372.2.patch, HDFS-4372.3.patch, 
> HDFS-4372.4.patch, HDFS-4372.4.rebase.patch
>
>
> Track detailed progress information about the steps of NameNode startup to 
> enable display to users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4968) Provide configuration option for FileSystem symlink resolution

2013-07-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4968:
--

Attachment: (was: hdfs-4968-1.patch)

> Provide configuration option for FileSystem symlink resolution
> --
>
> Key: HDFS-4968
> URL: https://issues.apache.org/jira/browse/HDFS-4968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4968-1.patch
>
>
> With FileSystem symlink support incoming in HADOOP-8040, some clients will 
> wish to not transparently resolve symlinks. This is somewhat similar to 
> O_NOFOLLOW in open(2).
> The rationale is a security model where a user can invoke a third-party 
> service running as a service user to operate on the user's data. For 
> instance, users might want to use Hive to query data in their homedirs, where 
> Hive runs as the Hive user and the data is readable by the Hive user. This 
> leads to a security issue with symlinks:
> # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
> # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
> the query to proceed.
> # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
> that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
> readable by Mallory, but she can create whatever symlinks she wants in her 
> own scratch directory.
> # Hive's MR jobs happily resolve the symlinks and access Ann's private data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4968) Provide configuration option for FileSystem symlink resolution

2013-07-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4968:
--

Attachment: hdfs-4968-1.patch

Reattaching the same patch; I think the release audit warning is spurious, 
based on the mailing list.

mvn eclipse:eclipse also works fine for me locally; not sure what's up with 
that.

> Provide configuration option for FileSystem symlink resolution
> --
>
> Key: HDFS-4968
> URL: https://issues.apache.org/jira/browse/HDFS-4968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4968-1.patch
>
>
> With FileSystem symlink support incoming in HADOOP-8040, some clients will 
> wish to not transparently resolve symlinks. This is somewhat similar to 
> O_NOFOLLOW in open(2).
> The rationale is a security model where a user can invoke a third-party 
> service running as a service user to operate on the user's data. For 
> instance, users might want to use Hive to query data in their homedirs, where 
> Hive runs as the Hive user and the data is readable by the Hive user. This 
> leads to a security issue with symlinks:
> # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
> # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
> the query to proceed.
> # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
> that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
> readable by Mallory, but she can create whatever symlinks she wants in her 
> own scratch directory.
> # Hive's MR jobs happily resolve the symlinks and access Ann's private data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work logged] (HDFS-4901) Site Scripting and Phishing Through Frames in browseDirectory.jsp

2013-07-11 Thread Vivek Ganesan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4901?focusedWorklogId=14639&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-14639
 ]

Vivek Ganesan logged work on HDFS-4901:
---

Author: Vivek Ganesan
Created on: 12/Jul/13 05:22
Start Date: 12/Jul/13 05:21
Worklog Time Spent: 5.9h 
  Work Description: Previously logged minutes instead of hours; adjusting 
the same.

Issue Time Tracking
---

Worklog Id: (was: 14639)
Time Spent: 8.2h  (was: 2.3h)
Remaining Estimate: 15.8h  (was: 21.7h)

> Site Scripting and Phishing Through Frames in browseDirectory.jsp
> -
>
> Key: HDFS-4901
> URL: https://issues.apache.org/jira/browse/HDFS-4901
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, webhdfs
>Affects Versions: 1.1.1, 2.0.3-alpha
>Reporter: Jeffrey E  Rodriguez
>Assignee: Vivek Ganesan
>   Original Estimate: 24h
>  Time Spent: 8.2h
>  Remaining Estimate: 15.8h
>
> It is possible to steal or manipulate customer sessions and cookies, which 
> might be used to impersonate a legitimate user, allowing the hacker to view 
> or alter user records and to perform transactions as that user.
> e.g.
> GET /browseDirectory.jsp?dir=%2Fhadoop'"/><script>alert(759)</script>
> &namenodeInfoPort=50070
> Also;
> Phishing Through Frames
> Try:
> GET /browseDirectory.jsp? 
> dir=%2Fhadoop%27%22%3E%3Ciframe+src%3Dhttp%3A%2F%2Fdemo.testfire.net%2Fphishing.html%3E
> &namenodeInfoPort=50070 HTTP/1.1
> Cookie: JSESSIONID=qd9i8tuccuam1cme71swr9nfi
> Accept-Language: en-US
> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;
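
The usual mitigation for this class of bug is to HTML-escape the user-supplied {{dir}} parameter before echoing it into the page. A hedged sketch, not the actual patch, assuming commons-lang (already a Hadoop dependency) is available:

{code}
// Inside the JSP's request handling; "request" and "out" are the
// standard JSP implicit objects.
String dir = request.getParameter("dir");
// Escape <, >, quotes, etc. so injected markup is rendered as text.
String safeDir = org.apache.commons.lang.StringEscapeUtils.escapeHtml(dir);
out.print("Directory: " + safeDir);
{code}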

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work logged] (HDFS-4901) Site Scripting and Phishing Through Frames in browseDirectory.jsp

2013-07-11 Thread Vivek Ganesan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4901?focusedWorklogId=14638&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-14638
 ]

Vivek Ganesan logged work on HDFS-4901:
---

Author: Vivek Ganesan
Created on: 12/Jul/13 05:20
Start Date: 10/Jul/13 05:19
Worklog Time Spent: 16m 
  Work Description: Done with the patch and unit test.  Running the patch 
tester script.  Trying to fix the (-1) reported by the patch tester.

Issue Time Tracking
---

Worklog Id: (was: 14638)
Time Spent: 2.3h  (was: 2h 2m)
Remaining Estimate: 21.7h  (was: 21h 58m)

> Site Scripting and Phishing Through Frames in browseDirectory.jsp
> -
>
> Key: HDFS-4901
> URL: https://issues.apache.org/jira/browse/HDFS-4901
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, webhdfs
>Affects Versions: 1.1.1, 2.0.3-alpha
>Reporter: Jeffrey E  Rodriguez
>Assignee: Vivek Ganesan
>   Original Estimate: 24h
>  Time Spent: 2.3h
>  Remaining Estimate: 21.7h
>
> It is possible to steal or manipulate customer sessions and cookies, which 
> might be used to impersonate a legitimate user, allowing the hacker to view 
> or alter user records and to perform transactions as that user.
> e.g.
> GET /browseDirectory.jsp?dir=%2Fhadoop'"/><script>alert(759)</script>
> &namenodeInfoPort=50070
> Also;
> Phishing Through Frames
> Try:
> GET /browseDirectory.jsp? 
> dir=%2Fhadoop%27%22%3E%3Ciframe+src%3Dhttp%3A%2F%2Fdemo.testfire.net%2Fphishing.html%3E
> &namenodeInfoPort=50070 HTTP/1.1
> Cookie: JSESSIONID=qd9i8tuccuam1cme71swr9nfi
> Accept-Language: en-US
> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-07-11 Thread Harsh J (JIRA)
Harsh J created HDFS-4983:
-

 Summary: Numeric usernames do not work with WebHDFS FS
 Key: HDFS-4983
 URL: https://issues.apache.org/jira/browse/HDFS-4983
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J


Per the file 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
 the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.

Given this, using a username such as "123" seems to fail for some reason 
(tried on an insecure setup):

{code}
[123@host-1 ~]$ whoami
123
[123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
-ls: Invalid value: "123" does not belong to the domain ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
{code}
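
The failure follows directly from the pattern: the first character class {{[A-Za-z_]}} excludes digits, so any username beginning with a digit is rejected. A quick standalone check:

{code}
import java.util.regex.Pattern;

public class UserParamDomainCheck {
  public static void main(String[] args) {
    Pattern domain = Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");
    System.out.println(domain.matcher("hdfs").matches()); // true
    System.out.println(domain.matcher("123").matches());  // false: leading digit
  }
}
{code}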

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706613#comment-13706613
 ] 

Hadoop QA commented on HDFS-4817:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591945/HDFS-4817.008.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning message.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4639//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4639//console

This message is automatically generated.

> make HDFS advisory caching configurable on a per-file basis
> ---
>
> Key: HDFS-4817
> URL: https://issues.apache.org/jira/browse/HDFS-4817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, 
> HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, 
> HDFS-4817.008.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for 
> the DataNode.  One of them was readahead.  When readahead is enabled, the 
> DataNode starts reading the next bytes it thinks it will need in the block 
> file, before the client requests them.  This helps hide the latency of 
> rotational media and send larger reads down to the device.  Another 
> optimization was "drop-behind."  Using this optimization, we could remove 
> files from the Linux page cache after they were no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and 
> {{dfs.datanode.drop.cache.behind.reads}} can improve performance 
> substantially on many MapReduce jobs.  In our internal benchmarks, we have 
> seen speedups of 40% on certain workloads.  The reason is that if we know 
> the block data will not be read again any time soon, keeping it out of memory 
> allows more memory to be used by the other processes on the system.  See 
> HADOOP-7714 for more benchmarks.
> We would like to turn on these configurations on a per-file or per-client 
> basis, rather than on the DataNode as a whole.  This will allow more users to 
> actually make use of them.  It would also be good to add unit tests for the 
> drop-cache code path, to ensure that it is functioning as we expect.
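
For context, the existing DataNode-wide switches are plain boolean configuration keys. A minimal sketch of enabling them (the keys are the real ones named above; the surrounding code is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

public class DropBehindConf {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // DataNode-wide today; this JIRA proposes per-file/per-client control.
    conf.setBoolean("dfs.datanode.drop.cache.behind.reads", true);
    conf.setBoolean("dfs.datanode.drop.cache.behind.writes", true);
  }
}
{code}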

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4265) BKJM doesn't take advantage of speculative reads

2013-07-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reassigned HDFS-4265:
--

Assignee: Rakesh R

> BKJM doesn't take advantage of speculative reads
> 
>
> Key: HDFS-4265
> URL: https://issues.apache.org/jira/browse/HDFS-4265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Reporter: Ivan Kelly
>Assignee: Rakesh R
> Attachments: 001-HDFS-4265.patch
>
>
> BookKeeperEditLogInputStream reads one entry at a time, so it doesn't take 
> advantage of the speculative read mechanism introduced by BOOKKEEPER-336.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4265) BKJM doesn't take advantage of speculative reads

2013-07-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-4265:
---

Attachment: 001-HDFS-4265.patch

> BKJM doesn't take advantage of speculative reads
> 
>
> Key: HDFS-4265
> URL: https://issues.apache.org/jira/browse/HDFS-4265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Reporter: Ivan Kelly
> Attachments: 001-HDFS-4265.patch
>
>
> BookKeeperEditLogInputStream reads one entry at a time, so it doesn't take 
> advantage of the speculative read mechanism introduced by BOOKKEEPER-336.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


TYCHUN (T.Y. CHUN) is out of the office.

2013-07-11 Thread tychun

I will be out of the office starting  2013/07/12 and will not return until
2013/07/13.

I will respond to your message when I return.
 --- 
 TSMC PROPERTY 
 This email communication (and any attachments) is proprietary information 
 for the sole use of its intended recipient. Any unauthorized review, use or 
 distribution by anyone other than the intended recipient is strictly 
 prohibited. If you are not the intended recipient, please notify the sender 
 by replying to this email, and then delete this email and any copies of it 
 immediately. Thank you. 
 --- 


[jira] [Updated] (HDFS-4266) BKJM: Separate write and ack quorum

2013-07-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-4266:
---

Attachment: 001-HDFS-4266.patch

> BKJM: Separate write and ack quorum
> ---
>
> Key: HDFS-4266
> URL: https://issues.apache.org/jira/browse/HDFS-4266
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Reporter: Ivan Kelly
> Attachments: 001-HDFS-4266.patch
>
>
> BOOKKEEPER-208 allows the ack and write quorums to be different sizes, so 
> that writes are unaffected by any bookie failure. BKJM should be able to 
> take advantage of this.
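
A hedged sketch of the BookKeeper client call BKJM could use once it adopts the BOOKKEEPER-208 API, which takes separate write and ack quorum sizes (signature as in BookKeeper trunk at the time; not the attached patch):

{code}
import org.apache.bookkeeper.client.BookKeeper;
import org.apache.bookkeeper.client.BookKeeper.DigestType;
import org.apache.bookkeeper.client.LedgerHandle;

public class QuorumExample {
  public static void main(String[] args) throws Exception {
    BookKeeper bk = new BookKeeper("zk1:2181");
    // ensemble=5, write quorum=5, ack quorum=3: an add succeeds once 3 of
    // the 5 bookies ack, so one slow or failed bookie does not stall writes.
    LedgerHandle lh = bk.createLedger(5, 5, 3, DigestType.MAC, "secret".getBytes());
    lh.close();
    bk.close();
  }
}
{code}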

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4266) BKJM: Separate write and ack quorum

2013-07-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reassigned HDFS-4266:
--

Assignee: Rakesh R

> BKJM: Separate write and ack quorum
> ---
>
> Key: HDFS-4266
> URL: https://issues.apache.org/jira/browse/HDFS-4266
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Reporter: Ivan Kelly
>Assignee: Rakesh R
> Attachments: 001-HDFS-4266.patch
>
>
> BOOKKEEPER-208 allows the ack and write quorums to be different sizes, so 
> that writes are unaffected by any bookie failure. BKJM should be able to 
> take advantage of this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4266) BKJM: Separate write and ack quorum

2013-07-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-4266:
---

Attachment: (was: 001-HDFS-4266.patch)

> BKJM: Separate write and ack quorum
> ---
>
> Key: HDFS-4266
> URL: https://issues.apache.org/jira/browse/HDFS-4266
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Reporter: Ivan Kelly
>
> BOOKKEEPER-208 allows the ack and write quorums to be different sizes, so 
> that writes are unaffected by any bookie failure. BKJM should be able to 
> take advantage of this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4266) BKJM: Separate write and ack quorum

2013-07-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-4266:
---

Attachment: 001-HDFS-4266.patch

> BKJM: Separate write and ack quorum
> ---
>
> Key: HDFS-4266
> URL: https://issues.apache.org/jira/browse/HDFS-4266
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Reporter: Ivan Kelly
> Attachments: 001-HDFS-4266.patch
>
>
> BOOKKEEPER-208 allows the ack and write quorums to be different sizes, so 
> that writes are unaffected by any bookie failure. BKJM should be able to 
> take advantage of this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4794) Browsing filesystem via webui throws kerberos exception when NN service RPC is enabled in a secure cluster

2013-07-11 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706560#comment-13706560
 ] 

Jitendra Nath Pandey commented on HDFS-4794:


Binoy, 
  JspHelper.namenodeAddr is a public static variable that may be used in many 
places; changing it might break something else. Note that JSP files could also 
be referencing it, and their corresponding Java files are auto-generated.
 A safer approach is to leave this variable as it is, introduce another 
variable such as namenodeRpcAddr, and use that variable from the servlets that 
are responsible for this error (see the sketch below). Does that make sense?
 If you are confident that it is not being used anywhere else, then we could 
consider changing it to a private variable.
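
A minimal sketch of that suggestion (the field name namenodeRpcAddr comes from the comment above; everything else is hypothetical):

{code}
public class JspHelper {
  // Existing public field: left untouched, since JSPs (whose Java files are
  // auto-generated) and other callers may reference it.
  public static String namenodeAddr;

  // Hypothetical new field for the correct RPC address, used only by the
  // servlets responsible for the reported error.
  public static String namenodeRpcAddr;
}
{code}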

> Browsing filesystem via webui throws kerberos exception when NN service RPC 
> is enabled in a secure cluster
> --
>
> Key: HDFS-4794
> URL: https://issues.apache.org/jira/browse/HDFS-4794
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.1.2
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HDFS-4794.patch
>
>
> Browsing filesystem via webui throws kerberos exception when NN service RPC 
> is enabled in a secure cluster
> To reproduce this error, 
> Enable security 
> Enable serviceRPC by setting dfs.namenode.servicerpc-address and use a 
> different port than the rpc port.
> Click on "Browse the filesystem" on NameNode web.
> The following error will be shown :
> Call to NN001/12.123.123.01:8030 failed on local exception: 
> java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706545#comment-13706545
 ] 

Hadoop QA commented on HDFS-4817:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591944/HDFS-4817.008.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4638//console

This message is automatically generated.

> make HDFS advisory caching configurable on a per-file basis
> ---
>
> Key: HDFS-4817
> URL: https://issues.apache.org/jira/browse/HDFS-4817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, 
> HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, 
> HDFS-4817.008.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for 
> the DataNode.  One of them was readahead.  When readahead is enabled, the 
> DataNode starts reading the next bytes it thinks it will need in the block 
> file, before the client requests them.  This helps hide the latency of 
> rotational media and send larger reads down to the device.  Another 
> optimization was "drop-behind."  Using this optimization, we could remove 
> files from the Linux page cache after they were no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and 
> {{dfs.datanode.drop.cache.behind.reads}} can improve performance 
> substantially on many MapReduce jobs.  In our internal benchmarks, we have 
> seen speedups of 40% on certain workloads.  The reason is that if we know 
> the block data will not be read again any time soon, keeping it out of memory 
> allows more memory to be used by the other processes on the system.  See 
> HADOOP-7714 for more benchmarks.
> We would like to turn on these configurations on a per-file or per-client 
> basis, rather than on the DataNode as a whole.  This will allow more users to 
> actually make use of them.  It would also be good to add unit tests for the 
> drop-cache code path, to ensure that it is functioning as we expect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4978) Make disallowSnapshot idempotent

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706544#comment-13706544
 ] 

Hudson commented on HDFS-4978:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4075 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4075/])
HDFS-4978. Make disallowSnapshot idempotent. Contributed by Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502400)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestNestedSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java


> Make disallowSnapshot idempotent
> 
>
> Key: HDFS-4978
> URL: https://issues.apache.org/jira/browse/HDFS-4978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.1.0-beta
>
> Attachments: HDFS-4978.001.patch, HDFS-4978.002.patch, 
> HDFS-4978.003.patch, HDFS-4978.004.patch, HDFS-4978.005.patch
>
>
> Currently disallowSnapshot is not idempotent: an exception will be thrown 
> when the directory is already non-snapshottable. This jira tries to make it 
> idempotent.
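
A minimal sketch of the idempotency change (helper names are hypothetical; see the attached patches for the real code):

{code}
// Sketch: make disallowSnapshot succeed silently when the directory is
// already non-snapshottable, instead of throwing.
public void disallowSnapshot(final String path) throws IOException {
  INodeDirectorySnapshottable s = findSnapshottable(path); // hypothetical lookup
  if (s == null) {
    return; // already non-snapshottable: no-op, so retries are safe
  }
  resetSnapshottable(s); // hypothetical: convert back to a plain directory
}
{code}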

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis

2013-07-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4817:
---

Attachment: (was: HDFS-4817.008.patch)

> make HDFS advisory caching configurable on a per-file basis
> ---
>
> Key: HDFS-4817
> URL: https://issues.apache.org/jira/browse/HDFS-4817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, 
> HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, 
> HDFS-4817.008.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for 
> the DataNode.  One of them was readahead.  When readahead is enabled, the 
> DataNode starts reading the next bytes it thinks it will need in the block 
> file, before the client requests them.  This helps hide the latency of 
> rotational media and send larger reads down to the device.  Another 
> optimization was "drop-behind."  Using this optimization, we could remove 
> files from the Linux page cache after they were no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and 
> {{dfs.datanode.drop.cache.behind.reads}} can improve performance 
> substantially on many MapReduce jobs.  In our internal benchmarks, we have 
> seen speedups of 40% on certain workloads.  The reason is that if we know 
> the block data will not be read again any time soon, keeping it out of memory 
> allows more memory to be used by the other processes on the system.  See 
> HADOOP-7714 for more benchmarks.
> We would like to turn on these configurations on a per-file or per-client 
> basis, rather than on the DataNode as a whole.  This will allow more users to 
> actually make use of them.  It would also be good to add unit tests for the 
> drop-cache code path, to ensure that it is functioning as we expect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis

2013-07-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4817:
---

Attachment: (was: HDFS-4817.008.patch)

> make HDFS advisory caching configurable on a per-file basis
> ---
>
> Key: HDFS-4817
> URL: https://issues.apache.org/jira/browse/HDFS-4817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, 
> HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, 
> HDFS-4817.008.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for 
> the DataNode.  One of them was readahead.  When readahead is enabled, the 
> DataNode starts reading the next bytes it thinks it will need in the block 
> file, before the client requests them.  This helps hide the latency of 
> rotational media and send larger reads down to the device.  Another 
> optimization was "drop-behind."  Using this optimization, we could remove 
> files from the Linux page cache after they were no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and 
> {{dfs.datanode.drop.cache.behind.reads}} can improve performance 
> substantially on many MapReduce jobs.  In our internal benchmarks, we have 
> seen speedups of 40% on certain workloads.  The reason is that if we know 
> the block data will not be read again any time soon, keeping it out of memory 
> allows more memory to be used by the other processes on the system.  See 
> HADOOP-7714 for more benchmarks.
> We would like to turn on these configurations on a per-file or per-client 
> basis, rather than on the DataNode as a whole.  This will allow more users to 
> actually make use of them.  It would also be good to add unit tests for the 
> drop-cache code path, to ensure that it is functioning as we expect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis

2013-07-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4817:
---

Attachment: HDFS-4817.008.patch

> make HDFS advisory caching configurable on a per-file basis
> ---
>
> Key: HDFS-4817
> URL: https://issues.apache.org/jira/browse/HDFS-4817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, 
> HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, 
> HDFS-4817.008.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for 
> the DataNode.  One of them was readahead.  When readahead is enabled, the 
> DataNode starts reading the next bytes it thinks it will need in the block 
> file, before the client requests them.  This helps hide the latency of 
> rotational media and send larger reads down to the device.  Another 
> optimization was "drop-behind."  Using this optimization, we could remove 
> files from the Linux page cache after they were no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and 
> {{dfs.datanode.drop.cache.behind.reads}} can improve performance 
> substantially on many MapReduce jobs.  In our internal benchmarks, we have 
> seen speedups of 40% on certain workloads.  The reason is that if we know 
> the block data will not be read again any time soon, keeping it out of memory 
> allows more memory to be used by the other processes on the system.  See 
> HADOOP-7714 for more benchmarks.
> We would like to turn on these configurations on a per-file or per-client 
> basis, rather than on the DataNode as a whole.  This will allow more users to 
> actually make use of them.  It would also be good to add unit tests for the 
> drop-cache code path, to ensure that it is functioning as we expect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis

2013-07-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4817:
---

Attachment: HDFS-4817.008.patch

> make HDFS advisory caching configurable on a per-file basis
> ---
>
> Key: HDFS-4817
> URL: https://issues.apache.org/jira/browse/HDFS-4817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, 
> HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, 
> HDFS-4817.008.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for 
> the DataNode.  One of them was readahead.  When readahead is enabled, the 
> DataNode starts reading the next bytes it thinks it will need in the block 
> file, before the client requests them.  This helps hide the latency of 
> rotational media and send larger reads down to the device.  Another 
> optimization was "drop-behind."  Using this optimization, we could remove 
> files from the Linux page cache after they were no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and 
> {{dfs.datanode.drop.cache.behind.reads}} can improve performance 
> substantially on many MapReduce jobs.  In our internal benchmarks, we have 
> seen speedups of 40% on certain workloads.  The reason is that if we know 
> the block data will not be read again any time soon, keeping it out of memory 
> allows more memory to be used by the other processes on the system.  See 
> HADOOP-7714 for more benchmarks.
> We would like to turn on these configurations on a per-file or per-client 
> basis, rather than on the DataNode as a whole.  This will allow more users to 
> actually make use of them.  It would also be good to add unit tests for the 
> drop-cache code path, to ensure that it is functioning as we expect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis

2013-07-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4817:
---

Attachment: HDFS-4817.008.patch

> make HDFS advisory caching configurable on a per-file basis
> ---
>
> Key: HDFS-4817
> URL: https://issues.apache.org/jira/browse/HDFS-4817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, 
> HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, 
> HDFS-4817.008.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for 
> the DataNode.  One of them was readahead.  When readahead is enabled, the 
> DataNode starts reading the next bytes it thinks it will need in the block 
> file, before the client requests them.  This helps hide the latency of 
> rotational media and send larger reads down to the device.  Another 
> optimization was "drop-behind."  Using this optimization, we could remove 
> files from the Linux page cache after they were no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and 
> {{dfs.datanode.drop.cache.behind.reads}} can improve performance 
> substantially on many MapReduce jobs.  In our internal benchmarks, we have 
> seen speedups of 40% on certain workloads.  The reason is that if we know 
> the block data will not be read again any time soon, keeping it out of memory 
> allows more memory to be used by the other processes on the system.  See 
> HADOOP-7714 for more benchmarks.
> We would like to turn on these configurations on a per-file or per-client 
> basis, rather than on the DataNode as a whole.  This will allow more users to 
> actually make use of them.  It would also be good to add unit tests for the 
> drop-cache code path, to ensure that it is functioning as we expect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis

2013-07-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706533#comment-13706533
 ] 

Colin Patrick McCabe commented on HDFS-4817:


After doing some related work, I think that the whole "return false" thing is 
not very consistent with the way we handle other unimplemented functions.  So I 
changed it to throw {{UnsupportedOperationException}} when the stream doesn't 
support the operation, similar to how we handle other unsupported operations in 
{{FSDataInputStream}}.

Previously, I had {{FSDataInputStream}} test to see if the wrapped stream was 
an {{FSInputStream}} prior to calling {{setReadahead}} or {{setDropBehind}} on 
it.  However, that is too limiting.  You could wrap an {{FSDataInputStream}} in 
another {{FSDataInputStream}}.  In that case, you should still be able to 
manipulate the cache of the underlying stream.  To fix this issue, I started 
using the {{CanSetDropBehind}} interface for {{FSDataInputStream}} (previously 
it was only used on output streams).  I also introduced the {{CanSetReadahead}} 
interface.

I also realized that HarFSInputStream should implement {{CanSetDropBehind}} and 
{{CanSetReadahead}}.  Since HarFSInputStream is essentially a wrapper stream, 
it should just pass those directives through to the wrapped stream.  Most other 
wrapper streams inherit from {{FSDataInputStream}}, and so inherit the 
pass-through methods.
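
A hedged sketch of the client-side usage this enables (method names as described in this comment; exact signatures may differ in the committed patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CacheHints {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataInputStream in = fs.open(new Path(args[0]));
    try {
      in.setReadahead(4L * 1024 * 1024); // hint: prefetch up to 4 MB ahead
      in.setDropBehind(true);            // hint: drop pages from cache after use
    } catch (UnsupportedOperationException e) {
      // Wrapped stream doesn't support cache directives; proceed without them.
    }
    in.close();
  }
}
{code}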

> make HDFS advisory caching configurable on a per-file basis
> ---
>
> Key: HDFS-4817
> URL: https://issues.apache.org/jira/browse/HDFS-4817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, 
> HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, 
> HDFS-4817.008.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for 
> the DataNode.  One of them was readahead.  When readahead is enabled, the 
> DataNode starts reading the next bytes it thinks it will need in the block 
> file, before the client requests them.  This helps hide the latency of 
> rotational media and send larger reads down to the device.  Another 
> optimization was "drop-behind."  Using this optimization, we could remove 
> files from the Linux page cache after they were no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and 
> {{dfs.datanode.drop.cache.behind.reads}} can improve performance 
> substantially on many MapReduce jobs.  In our internal benchmarks, we have 
> seen speedups of 40% on certain workloads.  The reason is that if we know 
> the block data will not be read again any time soon, keeping it out of memory 
> allows more memory to be used by the other processes on the system.  See 
> HADOOP-7714 for more benchmarks.
> We would like to turn on these configurations on a per-file or per-client 
> basis, rather than on the DataNode as a whole.  This will allow more users to 
> actually make use of them.  It would also be good to add unit tests for the 
> drop-cache code path, to ensure that it is functioning as we expect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4974) Analyze and add annotations to Namenode and Datanode protocol methods to enable retry

2013-07-11 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706518#comment-13706518
 ] 

Jing Zhao commented on HDFS-4974:
-

Filed and committed HDFS-4978 to make disallowSnapshot idempotent.

> Analyze and add annotations to Namenode and Datanode protocol methods to 
> enable retry
> -
>
> Key: HDFS-4974
> URL: https://issues.apache.org/jira/browse/HDFS-4974
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Suresh Srinivas
>
> This jira is intended for:
> # Discussing current @Idempotent annotations in HDFS protocols and adding 
> that annotation where it is missing.
> # Discussing how retry should be enabled for non-idempotent requests.
> I will post the analysis of current methods in a subsequent comment.
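
For illustration, the annotation is applied directly to protocol methods. A minimal sketch (the method shown is just an example of an idempotent read, not the promised analysis):

{code}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.io.retry.Idempotent;

public interface ExampleProtocol {
  /** Safe to retry: repeating the call cannot change namespace state. */
  @Idempotent
  HdfsFileStatus getFileInfo(String src) throws IOException;
}
{code}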

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4978) Make disallowSnapshot idempotent

2013-07-11 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4978:


   Resolution: Fixed
Fix Version/s: 2.1.0-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the review, Suresh! I've committed this to trunk, branch-2 and 
branch-2.1.0-beta.

> Make disallowSnapshot idempotent
> 
>
> Key: HDFS-4978
> URL: https://issues.apache.org/jira/browse/HDFS-4978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.1.0-beta
>
> Attachments: HDFS-4978.001.patch, HDFS-4978.002.patch, 
> HDFS-4978.003.patch, HDFS-4978.004.patch, HDFS-4978.005.patch
>
>
> Currently disallowSnapshot is not idempotent: an exception will be thrown 
> when the directory is already non-snapshottable. This jira tries to make it 
> idempotent.
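
A minimal sketch of the idempotency pattern being proposed, using hypothetical 
types in place of the real snapshot manager (this is not the attached patch): 
treat "already non-snapshottable" as success instead of throwing.

{code}
import java.util.HashSet;
import java.util.Set;

class SnapshotManagerSketch {
  private final Set<String> snapshottableDirs = new HashSet<String>();

  // Idempotent: a retried disallowSnapshot on an already non-snapshottable
  // directory returns silently rather than throwing an exception.
  synchronized void disallowSnapshot(String path) {
    if (!snapshottableDirs.contains(path)) {
      return; // already in the target state, so the retry is a no-op
    }
    snapshottableDirs.remove(path);
  }
}
{code}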

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4968) Provide configuration option for FileSystem symlink resolution

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706498#comment-13706498
 ] 

Hadoop QA commented on HDFS-4968:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591917/hdfs-4968-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4636//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4636//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4636//console

This message is automatically generated.

> Provide configuration option for FileSystem symlink resolution
> --
>
> Key: HDFS-4968
> URL: https://issues.apache.org/jira/browse/HDFS-4968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4968-1.patch
>
>
> With FileSystem symlink support incoming in HADOOP-8040, some clients will 
> wish to not transparently resolve symlinks. This is somewhat similar to 
> O_NOFOLLOW in open(2).
> The rationale is a security model where a user can invoke a third-party 
> service running as a service user to operate on the user's data. For 
> instance, users might want to use Hive to query data in their homedirs, where 
> Hive runs as the Hive user and the data is readable by the Hive user. This 
> leads to a security issue with symlinks:
> # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
> # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
> the query to proceed.
> # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
> that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
> readable by Mallory, but she can create whatever symlinks she wants in her 
> own scratch directory.
> # Hive's MR jobs happily resolve the symlinks and access Ann's private data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4942) Add retry cache support in Namenode

2013-07-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706469#comment-13706469
 ] 

Suresh Srinivas commented on HDFS-4942:
---

bq. Suresh, it looks like the issue is rapidly spreading into multiple jiras. I 
counted 8 by now.
I have 1 or 2 more jiras to add to this.

The changes are being done in small increments. Each part can be done 
independently, without leaving trunk in a non-working state. Hence the work 
is happening in trunk. Doing it in small increments lets multiple people 
collaborate on this, instead of one person working on a single large, 
hard-to-review patch. 

bq. Otherwise porting of the feature in other branches becomes challenging as 
cherry picking of multiple patches interleaved with other changes is non-trivial
Sorry I do not understand what the issue is. How does it make porting of the 
feature hard?


> Add retry cache support in Namenode
> ---
>
> Key: HDFS-4942
> URL: https://issues.apache.org/jira/browse/HDFS-4942
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFSRetryCache.pdf
>
>
> In the current HA mechanism with FailoverProxyProvider, and in non-HA setups 
> with RetryProxy, requests are retried from the RPC layer. If the retried 
> request has already been processed at the namenode, the subsequent attempts 
> fail for non-idempotent operations such as create, append, delete, rename, 
> etc. This will cause application failures during HA failover, network 
> issues, etc.
> This jira proposes adding a retry cache at the namenode to handle these 
> failures. More details in the comments.
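
A minimal sketch of the idea, assuming duplicate requests can be recognized by 
a client identifier plus a call id (all names here are hypothetical, not the 
design in the attached document):

{code}
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

class RetryCacheSketch {
  private final Map<String, Object> cache =
      new ConcurrentHashMap<String, Object>();

  // Replays the recorded response for a duplicate (clientId, callId) pair
  // instead of re-executing a non-idempotent operation. A real cache would
  // also need entry expiry and handling of concurrent first attempts.
  Object getOrRun(String clientId, int callId, Callable<Object> op)
      throws Exception {
    String key = clientId + ":" + callId;
    Object cached = cache.get(key);
    if (cached != null) {
      return cached;          // retried request: replay the old response
    }
    Object result = op.call();
    cache.put(key, result);   // first attempt: record the response
    return result;
  }
}
{code}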

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4978) Make disallowSnapshot idempotent

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706464#comment-13706464
 ] 

Hadoop QA commented on HDFS-4978:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591916/HDFS-4978.005.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4635//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4635//console

This message is automatically generated.

> Make disallowSnapshot idempotent
> 
>
> Key: HDFS-4978
> URL: https://issues.apache.org/jira/browse/HDFS-4978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4978.001.patch, HDFS-4978.002.patch, 
> HDFS-4978.003.patch, HDFS-4978.004.patch, HDFS-4978.005.patch
>
>
> Currently disallowSnapshot is not idempotent: an exception will be thrown 
> when the directory is already non-snapshottable. This jira tries to make it 
> idempotent.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4982) JournalNode should relogin from keytab before fetching logs from other JNs

2013-07-11 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706428#comment-13706428
 ] 

Eli Collins commented on HDFS-4982:
---

+1  lgtm

> JournalNode should relogin from keytab before fetching logs from other JNs
> --
>
> Key: HDFS-4982
> URL: https://issues.apache.org/jira/browse/HDFS-4982
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, security
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-4982.txt
>
>
> We've seen an issue in a secure cluster where, after a failover, the new NN 
> isn't able to properly coordinate QJM recovery. The JNs fail to fetch logs 
> from each other due to apparently not having a Kerberos TGT. It seems that we 
> need to add the {{checkTGTAndReloginFromKeytab}} call prior to making the 
> HTTP connection, since the java HTTP stuff doesn't do an automatic relogin.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4982) JournalNode should relogin from keytab before fetching logs from other JNs

2013-07-11 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-4982:
--

Attachment: hdfs-4982.txt

Attached patch should fix the issue. I haven't verified on a real cluster, but 
this is the same issue we saw with the 2NN in the past, so I'm pretty confident 
in the fix.

We should have results from a real cluster some time in the next couple of days.
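
For reference, a minimal sketch of the fix being described, with the 
surrounding method invented for illustration (checkTGTAndReloginFromKeytab is 
the real UserGroupInformation call):

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.UserGroupInformation;

class EditLogFetchSketch {
  // Relogin from the keytab (a no-op while the TGT is still fresh) before
  // opening the HTTP connection, since the Java HTTP stack performs no
  // automatic Kerberos relogin of its own.
  static HttpURLConnection openLogStream(URL url) throws IOException {
    UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
    return (HttpURLConnection) url.openConnection();
  }
}
{code}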

> JournalNode should relogin from keytab before fetching logs from other JNs
> --
>
> Key: HDFS-4982
> URL: https://issues.apache.org/jira/browse/HDFS-4982
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, security
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-4982.txt
>
>
> We've seen an issue in a secure cluster where, after a failover, the new NN 
> isn't able to properly coordinate QJM recovery. The JNs fail to fetch logs 
> from each other due to apparently not having a Kerberos TGT. It seems that we 
> need to add the {{checkTGTAndReloginFromKeytab}} call prior to making the 
> HTTP connection, since the java HTTP stuff doesn't do an automatic relogin.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4982) JournalNode should relogin from keytab before fetching logs from other JNs

2013-07-11 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-4982:
--

Status: Patch Available  (was: Open)

> JournalNode should relogin from keytab before fetching logs from other JNs
> --
>
> Key: HDFS-4982
> URL: https://issues.apache.org/jira/browse/HDFS-4982
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, security
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-4982.txt
>
>
> We've seen an issue in a secure cluster where, after a failover, the new NN 
> isn't able to properly coordinate QJM recovery. The JNs fail to fetch logs 
> from each other due to apparently not having a Kerberos TGT. It seems that we 
> need to add the {{checkTGTAndReloginFromKeytab}} call prior to making the 
> HTTP connection, since the java HTTP stuff doesn't do an automatic relogin.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4982) JournalNode should relogin from keytab before fetching logs from other JNs

2013-07-11 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-4982:
-

 Summary: JournalNode should relogin from keytab before fetching 
logs from other JNs
 Key: HDFS-4982
 URL: https://issues.apache.org/jira/browse/HDFS-4982
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: journal-node, security
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon


We've seen an issue in a secure cluster where, after a failover, the new NN 
isn't able to properly coordinate QJM recovery. The JNs fail to fetch logs from 
each other due to apparently not having a Kerberos TGT. It seems that we need 
to add the {{checkTGTAndReloginFromKeytab}} call prior to making the HTTP 
connection, since the java HTTP stuff doesn't do an automatic relogin.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4942) Add retry cache support in Namenode

2013-07-11 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706406#comment-13706406
 ] 

Konstantin Shvachko commented on HDFS-4942:
---

Suresh, it looks like the issue is rapidly spreading into multiple jiras. I 
counted 8 by now.
If you think it will take more than that, should we consider moving this to a 
branch?
Otherwise porting of the feature in other branches becomes challenging as 
cherry picking of multiple patches interleaved with other changes is 
non-trivial.

> Add retry cache support in Namenode
> ---
>
> Key: HDFS-4942
> URL: https://issues.apache.org/jira/browse/HDFS-4942
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFSRetryCache.pdf
>
>
> In the current HA mechanism with FailoverProxyProvider, and in non-HA setups 
> with RetryProxy, requests are retried from the RPC layer. If the retried 
> request has already been processed at the namenode, the subsequent attempts 
> fail for non-idempotent operations such as create, append, delete, rename, 
> etc. This will cause application failures during HA failover, network 
> issues, etc.
> This jira proposes adding a retry cache at the namenode to handle these 
> failures. More details in the comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4849) Enable retries for create and append operations.

2013-07-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706397#comment-13706397
 ] 

Colin Patrick McCabe commented on HDFS-4849:


Hi Konstantin,

I understand your frustration with the slow pace of progress on this JIRA.  
Believe me.  But this is different than some of the other JIRAs we've done.  
Usually, there is a configuration option that can turn off the new thing if 
there is a problem (as with HDFS-347) or people can continue using the old 
system (as with QuorumJournalManager).  People can't exactly turn off file 
creation-- it's a core part of the filesystem.  So we want to make sure that we 
get everything right the first time, even if that means some pain up front.

I read through the comments here, right from the start, as well as the patch.  
It seems like the main issues raised were:
1. semantics-- some people are uncomfortable using the annotation "Idempotent"
2. problems with multithreaded behavior

Point #1 seems like something we can resolve without too much difficulty.  We 
can change the '@Idempotent' annotation to '@SkipRetryCache' or something like 
that.  I see that you've already renamed the JIRA to "something like that" :)

Point #2 was originally brought up by [~tlipcon].  He wrote:
bq. The idempotent create is also potentially problematic with multi-threaded 
clients. A given client may have multiple threads race to create the same file, 
and those threads would share the same client name (and hence lease).

If I understand correctly, Konstantin's response was that users could use 
{{FileSystem#newInstance}} to give each thread a separate DFSClient (and hence 
their own client names).  However, that doesn't help the people who are relying 
on the current behavior.

In HDFS-4942, Suresh proposes "adding a UUID as the request ID in the RPC 
client."  It seems like that might be helpful in resolving #2.  Does it make 
sense to collaborate on this?
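
A minimal sketch of the UUID idea referenced from HDFS-4942 (names are 
hypothetical): give every client instance an identifier that is unique even 
across threads in one process, so a retried call can be told apart from a 
racing create issued by another thread's client.

{code}
import java.util.UUID;

class ClientIdSketch {
  // Unique per client instance, unlike a name derived from user and host,
  // so two threads holding separate clients never collide on the lease.
  final String clientId = UUID.randomUUID().toString();
}
{code}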

> Enable retries for create and append operations.
> 
>
> Key: HDFS-4849
> URL: https://issues.apache.org/jira/browse/HDFS-4849
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.4-alpha
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: idempotentCreate-branch2.patch, idempotentCreate.patch, 
> idempotentCreate.patch, idempotentCreate.patch, idempotentCreate.patch
>
>
> create, append and delete operations can be made retriable. This will reduce 
> chances for a job or other app failures when NN fails over.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4465) Optimize datanode ReplicasMap and ReplicaInfo

2013-07-11 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706322#comment-13706322
 ] 

Jing Zhao commented on HDFS-4465:
-

I merged this to branch-2.1-beta.

> Optimize datanode ReplicasMap and ReplicaInfo
> -
>
> Key: HDFS-4465
> URL: https://issues.apache.org/jira/browse/HDFS-4465
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.0.5-alpha
>Reporter: Suresh Srinivas
>Assignee: Aaron T. Myers
> Fix For: 2.1.0-beta
>
> Attachments: dn-memory-improvements.patch, HDFS-4465.patch, 
> HDFS-4465.patch, HDFS-4465.patch
>
>
> In Hadoop a lot of optimization has been done in namenode data structures to 
> be memory efficient. Similar optimizations are necessary for Datanode 
> process. With the growth in storage per datanode and number of blocks hosted 
> on datanode, this jira intends to optimize long lived ReplicasMap and 
> ReplicaInfo objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4849) Enable retries for create and append operations.

2013-07-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4849:
--

Description: create, append and delete operations can be made retriable. 
This will reduce chances for a job or other app failures when NN fails over.  
(was: create, append and delete operations can be made idempotent. This will 
reduce chances for a job or other app failures when NN fails over.)
Summary: Enable retries for create and append operations.  (was: 
retry-enabled or some thing like that create and append operations.)

> Enable retries for create and append operations.
> 
>
> Key: HDFS-4849
> URL: https://issues.apache.org/jira/browse/HDFS-4849
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.4-alpha
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: idempotentCreate-branch2.patch, idempotentCreate.patch, 
> idempotentCreate.patch, idempotentCreate.patch, idempotentCreate.patch
>
>
> create, append and delete operations can be made retriable. This will reduce 
> chances for a job or other app failures when NN fails over.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4849) retry-enabled or some thing like that create and append operations.

2013-07-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706318#comment-13706318
 ] 

Suresh Srinivas commented on HDFS-4849:
---

bq. summary - "retry-enabled or some thing like that create and append 
operations."
Wow! I have fixed the summary.

I think this is a premature and unnecessary optimization. In fact it is 
incomplete without a unique DFSClient ID.

bq. It's been almost 2 months now with a lot of words exchanged.
This has many reasons. Including long silence from 19th June to 4th July.

bq. I mean I have things to do other than rewriting the same 30 lines over and 
over again.
Let me know if the comments related to the patch that you needed to address 
were unnecessary.

bq. I will plan to commit this by the end of the day. And you know what to do 
if you feel strongly.
I am generally not a fan of -1s. But you leave me no options.

I am -1 on this change. To summarize my objections:
# Currently it is incorrect, since the DFSClient ID is not unique.
# It is a premature optimization. I have my doubts about it being an 
optimization at all.
# I do not see a reason why 10 methods should handle retries in one way and 
another one in a special way. Saving hashmap entries is not a good enough 
reason in my opinion.


> retry-enabled or some thing like that create and append operations.
> ---
>
> Key: HDFS-4849
> URL: https://issues.apache.org/jira/browse/HDFS-4849
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.4-alpha
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: idempotentCreate-branch2.patch, idempotentCreate.patch, 
> idempotentCreate.patch, idempotentCreate.patch, idempotentCreate.patch
>
>
> create, append and delete operations can be made idempotent. This will reduce 
> chances for a job or other app failures when NN fails over.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4968) Provide configuration option for FileSystem symlink resolution

2013-07-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4968:
--

Target Version/s: 3.0.0, 2.2.0
  Status: Patch Available  (was: Open)

> Provide configuration option for FileSystem symlink resolution
> --
>
> Key: HDFS-4968
> URL: https://issues.apache.org/jira/browse/HDFS-4968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4968-1.patch
>
>
> With FileSystem symlink support incoming in HADOOP-8040, some clients will 
> wish to not transparently resolve symlinks. This is somewhat similar to 
> O_NOFOLLOW in open(2).
> The rationale is a security model where a user can invoke a third-party 
> service running as a service user to operate on the user's data. For 
> instance, users might want to use Hive to query data in their homedirs, where 
> Hive runs as the Hive user and the data is readable by the Hive user. This 
> leads to a security issue with symlinks:
> # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
> # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
> the query to proceed.
> # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
> that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
> readable by Mallory, but she can create whatever symlinks she wants in her 
> own scratch directory.
> # Hive's MR jobs happily resolve the symlinks and access Ann's private data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4968) Provide configuration option for FileSystem symlink resolution

2013-07-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4968:
--

Attachment: hdfs-4968-1.patch

Here's a patch that adds a new config key, fs.symlinks.resolve, to common. It 
disables resolution in both FSLinkResolver (FileContext) and in 
FileSystemLinkResolver (FileSystem).

The included test is for HDFS, since the local filesystems automatically 
resolve symlinks.
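
A minimal usage sketch for the new key, taking the key name from this comment 
(the default value and the surrounding code are assumptions):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

class NoFollowSketch {
  // Obtain a FileSystem client that refuses to resolve symlinks, roughly
  // analogous to O_NOFOLLOW in open(2).
  static FileSystem openWithoutSymlinkResolution() throws IOException {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.symlinks.resolve", false);
    return FileSystem.get(conf);
  }
}
{code}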

> Provide configuration option for FileSystem symlink resolution
> --
>
> Key: HDFS-4968
> URL: https://issues.apache.org/jira/browse/HDFS-4968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4968-1.patch
>
>
> With FileSystem symlink support incoming in HADOOP-8040, some clients will 
> wish to not transparently resolve symlinks. This is somewhat similar to 
> O_NOFOLLOW in open(2).
> The rationale is a security model where a user can invoke a third-party 
> service running as a service user to operate on the user's data. For 
> instance, users might want to use Hive to query data in their homedirs, where 
> Hive runs as the Hive user and the data is readable by the Hive user. This 
> leads to a security issue with symlinks:
> # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
> # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
> the query to proceed.
> # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
> that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
> readable by Mallory, but she can create whatever symlinks she wants in her 
> own scratch directory.
> # Hive's MR jobs happily resolve the symlinks and access Ann's private data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4978) Make disallowSnapshot idempotent

2013-07-11 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4978:


Attachment: HDFS-4978.005.patch

Rebase the patch for HADOOP-9418.

> Make disallowSnapshot idempotent
> 
>
> Key: HDFS-4978
> URL: https://issues.apache.org/jira/browse/HDFS-4978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4978.001.patch, HDFS-4978.002.patch, 
> HDFS-4978.003.patch, HDFS-4978.004.patch, HDFS-4978.005.patch
>
>
> Currently disallowSnapshot is not idempotent: an exception will be thrown 
> when the directory is already non-snapshottable. This jira tries to make it 
> idempotent.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4981) chmod 777 the .snapshot directory does not error that modification on RO snapshot is disallowed

2013-07-11 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-4981:
-

 Summary: chmod 777 the .snapshot directory does not error that 
modification on RO snapshot is disallowed
 Key: HDFS-4981
 URL: https://issues.apache.org/jira/browse/HDFS-4981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Stephen Chu
Priority: Trivial


Snapshots currently are RO, so it's expected that when someone tries to modify 
the .snapshot directory s/he is denied.

However, if the user tries to chmod 777 the .snapshot directory, the operation 
does not error. The user should be alerted that modifications are not allowed, 
even if this operation didn't actually change anything.

Using other modes will trigger the error, though.

{code}
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chmod 777 
/user/schu/test_dir_1/.snapshot/
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chmod 755 
/user/schu/test_dir_1/.snapshot/
chmod: changing permissions of '/user/schu/test_dir_1/.snapshot': Modification 
on a read-only snapshot is disallowed
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chmod 435 
/user/schu/test_dir_1/.snapshot/
chmod: changing permissions of '/user/schu/test_dir_1/.snapshot': Modification 
on a read-only snapshot is disallowed
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chown hdfs 
/user/schu/test_dir_1/.snapshot/
chown: changing ownership of '/user/schu/test_dir_1/.snapshot': Modification on 
a read-only snapshot is disallowed
[schu@hdfs-snapshots-1 hdfs]$ sudo -u hdfs hdfs dfs -chown schu 
/user/schu/test_dir_1/.snapshot/
chown: changing ownership of '/user/schu/test_dir_1/.snapshot': Modification on 
a read-only snapshot is disallowed
[schu@hdfs-snapshots-1 hdfs]$ 
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4849) retry-enabled or some thing like that create and append operations.

2013-07-11 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-4849:
--

Summary: retry-enabled or some thing like that create and append 
operations.  (was: Idempotent create and append operations.)

> retry-enabled or some thing like that create and append operations.
> ---
>
> Key: HDFS-4849
> URL: https://issues.apache.org/jira/browse/HDFS-4849
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.4-alpha
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: idempotentCreate-branch2.patch, idempotentCreate.patch, 
> idempotentCreate.patch, idempotentCreate.patch, idempotentCreate.patch
>
>
> create, append and delete operations can be made idempotent. This will reduce 
> chances for a job or other app failures when NN fails over.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4849) Idempotent create and append operations.

2013-07-11 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706244#comment-13706244
 ] 

Konstantin Shvachko commented on HDFS-4849:
---

> Is there any reason why this ought to go in before HDFS-4942?

Is there any reason to postpone it any longer?
I don't think we talked about what goes first and what goes later. Just that 
both make sense.

It's been almost 2 months now with a lot of words exchanged.
I personally still don't understand what the resistance is based on, and I 
don't see why we should not move this forward.
Let's have some action here. I will plan to commit this by the end of the day. 
And you know what to do if you feel strongly.
I mean I have things to do other than rewriting the same 30 lines over and over 
again.

> Idempotent create and append operations.
> 
>
> Key: HDFS-4849
> URL: https://issues.apache.org/jira/browse/HDFS-4849
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.4-alpha
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: idempotentCreate-branch2.patch, idempotentCreate.patch, 
> idempotentCreate.patch, idempotentCreate.patch, idempotentCreate.patch
>
>
> create, append and delete operations can be made idempotent. This will reduce 
> chances for a job or other app failures when NN fails over.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4978) Make disallowSnapshot idempotent

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706204#comment-13706204
 ] 

Hadoop QA commented on HDFS-4978:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591889/HDFS-4978.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4634//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4634//console

This message is automatically generated.

> Make disallowSnapshot idempotent
> 
>
> Key: HDFS-4978
> URL: https://issues.apache.org/jira/browse/HDFS-4978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4978.001.patch, HDFS-4978.002.patch, 
> HDFS-4978.003.patch, HDFS-4978.004.patch
>
>
> Currently disallowSnapshot is not idempotent: an exception will be thrown 
> when the directory is already non-snapshottable. This jira tries to make it 
> idempotent.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4975) Branch-1-win TestReplicationPolicy failed caused by stale data node handling

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-4975.
-

  Resolution: Fixed
Target Version/s: 1-win
Hadoop Flags: Reviewed

+1 for the patch.  I tested successfully on Mac and Windows.  I committed this 
to branch-1-win.  Thank you for the patch, Xi.

> Branch-1-win TestReplicationPolicy failed caused by stale data node handling
> 
>
> Key: HDFS-4975
> URL: https://issues.apache.org/jira/browse/HDFS-4975
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Xi Fang
>Assignee: Xi Fang
> Fix For: 1-win
>
> Attachments: HADOOP-9714.1.patch, HDFS-4975.2.patch
>
>
> TestReplicationPolicy failed on 
> * testChooseTargetWithMoreThanAvailableNodes()
> * testChooseTargetWithStaleNodes()
> * testChooseTargetWithHalfStaleNodes()
> The root cause of testChooseTargetWithMoreThanAvailableNodes failing is 
> the following:
> In BlockPlacementPolicyDefault#chooseTarget()
> {code}
>   chooseRandom(numOfReplicas, NodeBase.ROOT, excludedNodes, 
> blocksize, maxNodesPerRack, results);
> } catch (NotEnoughReplicasException e) {
>   FSNamesystem.LOG.warn("Not able to place enough replicas, still in need 
> of " + numOfReplicas);
> {code}
> However, numOfReplicas is passed into chooseRandom() as an int (a primitive 
> type in Java) by value, so updates to numOfReplicas inside chooseRandom() 
> will not change the value seen in chooseTarget(). 
> The root cause for testChooseTargetWithStaleNodes() and 
> testChooseTargetWithHalfStaleNodes() is that the current 
> BlockPlacementPolicyDefault#chooseTarget() doesn't check whether a node is stale.
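
A self-contained illustration of the pass-by-value pitfall described above 
(the method names mirror the real ones, but the bodies are invented): an int 
updated inside the callee never reaches the caller, so the caller has to 
consume a return value instead.

{code}
class ReplicaCountSketch {
  // numOfReplicas is copied by value; decrements made here are invisible
  // to the caller unless they are returned.
  static int chooseRandom(int numOfReplicas) {
    numOfReplicas--;         // pretend one replica was placed
    return numOfReplicas;    // hand the remaining count back
  }

  static void chooseTarget() {
    int numOfReplicas = 3;
    numOfReplicas = chooseRandom(numOfReplicas);
    // Without the assignment above, a warning logged here would report
    // the stale value 3 instead of the true remaining count.
  }
}
{code}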

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4968) Provide configuration option for FileSystem symlink resolution

2013-07-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706157#comment-13706157
 ] 

Colin Patrick McCabe commented on HDFS-4968:


Very good idea.  It would also be nice to start using inode numbers for 
things like directory listing, to avoid potential races there.

> Provide configuration option for FileSystem symlink resolution
> --
>
> Key: HDFS-4968
> URL: https://issues.apache.org/jira/browse/HDFS-4968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> With FileSystem symlink support incoming in HADOOP-8040, some clients will 
> wish to not transparently resolve symlinks. This is somewhat similar to 
> O_NOFOLLOW in open(2).
> The rationale is a security model where a user can invoke a third-party 
> service running as a service user to operate on the user's data. For 
> instance, users might want to use Hive to query data in their homedirs, where 
> Hive runs as the Hive user and the data is readable by the Hive user. This 
> leads to a security issue with symlinks:
> # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
> # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
> the query to proceed.
> # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
> that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
> readable by Mallory, but she can create whatever symlinks she wants in her 
> own scratch directory.
> # Hive's MR jobs happily resolve the symlinks and access Ann's private data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4977) Change "Checkpoint Size" of web ui of SecondaryNameNode

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706140#comment-13706140
 ] 

Hadoop QA commented on HDFS-4977:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591878/HDFS-4977.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4632//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4632//console

This message is automatically generated.

> Change "Checkpoint Size" of web ui of SecondaryNameNode
> ---
>
> Key: HDFS-4977
> URL: https://issues.apache.org/jira/browse/HDFS-4977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Shinichi Yamashita
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-4977-2.patch, HDFS-4977.patch, HDFS-4977.patch
>
>
> The checkpoint of SecondaryNameNode after 2.0 is controlled by 
> "dfs.namenode.checkpoint.period" and "dfs.namenode.checkpoint.txns".
> Because "Checkpoint Size" is still displayed in status.jsp of 
> SecondaryNameNode, it should be modified.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4374) Display NameNode startup progress in UI

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4374:


Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed

Thanks, Jing!  I committed to trunk.  Merges through to branch-2.1.0-beta will 
happen later today, as per comments on HDFS-4372.

> Display NameNode startup progress in UI
> ---
>
> Key: HDFS-4374
> URL: https://issues.apache.org/jira/browse/HDFS-4374
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HDFS-4374.1.patch, HDFS-4374.2.patch
>
>
> Display the information about the NameNode's startup progress in the NameNode 
> web UI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4374) Display NameNode startup progress in UI

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706135#comment-13706135
 ] 

Hudson commented on HDFS-4374:
--

Integrated in Hadoop-trunk-Commit #4072 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4072/])
HDFS-4374. Display NameNode startup progress in UI. Contributed by Chris 
Nauroth. (Revision 1502331)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502331
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeJspHelper.java


> Display NameNode startup progress in UI
> ---
>
> Key: HDFS-4374
> URL: https://issues.apache.org/jira/browse/HDFS-4374
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4374.1.patch, HDFS-4374.2.patch
>
>
> Display the information about the NameNode's startup progress in the NameNode 
> web UI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4373) Add HTTP API for querying NameNode startup progress

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706128#comment-13706128
 ] 

Hudson commented on HDFS-4373:
--

Integrated in Hadoop-trunk-Commit #4071 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4071/])
HDFS-4373. Add HTTP API for querying NameNode startup progress. Contributed 
by Chris Nauroth. (Revision 1502328)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502328
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/StartupProgressServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartupProgressServlet.java


> Add HTTP API for querying NameNode startup progress
> ---
>
> Key: HDFS-4373
> URL: https://issues.apache.org/jira/browse/HDFS-4373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HDFS-4373.1.patch, HDFS-4373.2.patch, HDFS-4373.3.patch
>
>
> Provide an HTTP API for non-browser clients to query the NameNode's current 
> progress through startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4373) Add HTTP API for querying NameNode startup progress

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4373:


Fix Version/s: 3.0.0

> Add HTTP API for querying NameNode startup progress
> ---
>
> Key: HDFS-4373
> URL: https://issues.apache.org/jira/browse/HDFS-4373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HDFS-4373.1.patch, HDFS-4373.2.patch, HDFS-4373.3.patch
>
>
> Provide an HTTP API for non-browser clients to query the NameNode's current 
> progress through startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4974) Analyze and add annotations to Namenode and Datanode protocol methods to enable retry

2013-07-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706114#comment-13706114
 ] 

Chris Nauroth commented on HDFS-4974:
-

I don't believe {{DatanodeProtocol#blockReceivedAndDeleted}} is safe to retry, 
due to things like modifying the pending replication count.  (i.e. A retry 
could cause 2 decrements of the block's pending replication count for the same 
replica.)

{{ClientProtocol#metaSave}} opens its output file for append, so retries would 
cause duplication of data in the output file.  This could be made idempotent by 
opening the output file for overwrite instead, though I don't know if the 
current append behavior is desired.

Is it correct that {{ClientProtocol#rollEdits}} is idempotent, but 
{{NamenodeProtocol#rollEditLog}} is non-idempotent?  The two methods are nearly 
identical.  They both call {{FSNamesystem#rollEditLog}}, which I believe is 
idempotent.
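
A minimal sketch of the overwrite-instead-of-append idea for metaSave (the 
file handling and names are illustrative, not the NameNode's actual code 
path):

{code}
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

class MetaSaveSketch {
  static void metaSave(String filename) throws IOException {
    // The 'false' argument truncates instead of appending, so a retried
    // call rewrites the same report rather than duplicating its contents.
    PrintWriter out = new PrintWriter(new FileWriter(filename, false));
    try {
      out.println("... namenode metadata summary ...");
    } finally {
      out.close();
    }
  }
}
{code}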

> Analyze and add annotations to Namenode and Datanode protocol methods to 
> enable retry
> -
>
> Key: HDFS-4974
> URL: https://issues.apache.org/jira/browse/HDFS-4974
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Suresh Srinivas
>
> This jira is intended for:
> # Discussing current @Idempotent annotations in HDFS protocols and adding 
> that annotation where it is missing.
> # Discussing how retry should be enabled for non-idempotent requests.
> I will post the analysis of the current methods in a subsequent comment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4978) Make disallowSnapshot idempotent

2013-07-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706107#comment-13706107
 ] 

Suresh Srinivas commented on HDFS-4978:
---

+1




> Make disallowSnapshot idempotent
> 
>
> Key: HDFS-4978
> URL: https://issues.apache.org/jira/browse/HDFS-4978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4978.001.patch, HDFS-4978.002.patch, 
> HDFS-4978.003.patch, HDFS-4978.004.patch
>
>
> Currently disallowSnapshot is not idempotent: an exception will be thrown 
> when the directory is already non-snapshottable. This jira tries to make it 
> idempotent.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4978) Make disallowSnapshot idempotent

2013-07-11 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4978:


Attachment: HDFS-4978.004.patch

Thanks for the review, Suresh! I've updated the patch to address your comments. 
Also, the unit test in the original patch covers the allowSnapshot case.

> Make disallowSnapshot idempotent
> 
>
> Key: HDFS-4978
> URL: https://issues.apache.org/jira/browse/HDFS-4978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4978.001.patch, HDFS-4978.002.patch, 
> HDFS-4978.003.patch, HDFS-4978.004.patch
>
>
> Currently disallowSnapshot is not idempotent: an exception will be thrown 
> when the directory is already non-snapshottable. This jira tries to make it 
> idempotent.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4978) Make disallowSnapshot idempotent

2013-07-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706099#comment-13706099
 ] 

Suresh Srinivas commented on HDFS-4978:
---

+1 for the patch. Can you add the @Idempotent annotation to the 
disallowSnapshot() method? Also, can you add the @Idempotent flag to 
allowSnapshot (with a unit test to ensure this) and to getSnapshotDiffReport? 
I am okay if you want to handle the second comment in a separate jira.

> Make disallowSnapshot idempotent
> 
>
> Key: HDFS-4978
> URL: https://issues.apache.org/jira/browse/HDFS-4978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4978.001.patch, HDFS-4978.002.patch, 
> HDFS-4978.003.patch
>
>
> Currently disallowSnapshot is not idempotent: an exception will be thrown 
> when the directory is already non-snapshottable. This jira tries to make it 
> idempotent.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4980) Incorrect logging.properties file for hadoop-httpfs

2013-07-11 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706094#comment-13706094
 ] 

Mark Grover commented on HDFS-4980:
---

Thanks Suresh and Alejandro!


> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HDFS-4980
> URL: https://issues.apache.org/jira/browse/HDFS-4980
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
>Assignee: Mark Grover
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9721.1.patch
>
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary because we can have the log locations controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), 
> control the prefix of the log file names, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> custom logging.properties file doesn't get overridden. The reason is that the 
> destination logging.properties file already exists and the maven pom's copy 
> command silently fails and doesn't override. If we explicitly delete the 
> destination logging.properties file, then the copy command successfully 
> completes. You may notice we do the same thing with server.xml (which 
> doesn't have this problem): we explicitly delete the destination file first 
> and then copy it over. We should do the same with logging.properties as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4373) Add HTTP API for querying NameNode startup progress

2013-07-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4373:


Hadoop Flags: Reviewed

Thanks, Jing!  I've committed this to trunk.  I will hold off until later on 
merging to other branches, as per comment on HDFS-4372.

> Add HTTP API for querying NameNode startup progress
> ---
>
> Key: HDFS-4373
> URL: https://issues.apache.org/jira/browse/HDFS-4373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4373.1.patch, HDFS-4373.2.patch, HDFS-4373.3.patch
>
>
> Provide an HTTP API for non-browser clients to query the NameNode's current 
> progress through startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4980) Incorrect logging.properties file for hadoop-httpfs

2013-07-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4980:
--

   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed the patch to trunk, branch-2 and branch-2.1. Thank you Mark!

> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HDFS-4980
> URL: https://issues.apache.org/jira/browse/HDFS-4980
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
>Assignee: Mark Grover
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9721.1.patch
>
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary so that the log location can be controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), the 
> prefix of the log file names can be controlled, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> default logging.properties file doesn't get overridden by our custom one. The 
> reason is that the destination logging.properties file already exists, so the 
> maven pom's copy command silently skips it. If we explicitly delete the 
> destination logging.properties file first, the copy completes successfully. 
> Note that we do the same thing with server.xml (which doesn't have this 
> problem): we explicitly delete the destination file first and then copy it 
> over. We should do the same with logging.properties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4980) Incorrect logging.properties file for hadoop-httpfs

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706055#comment-13706055
 ] 

Hudson commented on HDFS-4980:
--

Integrated in Hadoop-trunk-Commit #4068 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4068/])
HDFS-4980. Incorrect logging.properties file for hadoop-httpfs. Contributed 
by Mark Grover. (Revision 1502302)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502302
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HDFS-4980
> URL: https://issues.apache.org/jira/browse/HDFS-4980
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
>Assignee: Mark Grover
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HADOOP-9721.1.patch
>
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary so that the log location can be controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), the 
> prefix of the log file names can be controlled, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> default logging.properties file doesn't get overridden by our custom one. The 
> reason is that the destination logging.properties file already exists, so the 
> maven pom's copy command silently skips it. If we explicitly delete the 
> destination logging.properties file first, the copy completes successfully. 
> Note that we do the same thing with server.xml (which doesn't have this 
> problem): we explicitly delete the destination file first and then copy it 
> over. We should do the same with logging.properties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4980) Incorrect logging.properties file for hadoop-httpfs

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706045#comment-13706045
 ] 

Hadoop QA commented on HDFS-4980:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591761/HADOOP-9721.1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4633//console

This message is automatically generated.

> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HDFS-4980
> URL: https://issues.apache.org/jira/browse/HDFS-4980
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
>Assignee: Mark Grover
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HADOOP-9721.1.patch
>
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary so that the log location can be controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), the 
> prefix of the log file names can be controlled, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> default logging.properties file doesn't get overridden by our custom one. The 
> reason is that the destination logging.properties file already exists, so the 
> maven pom's copy command silently skips it. If we explicitly delete the 
> destination logging.properties file first, the copy completes successfully. 
> Note that we do the same thing with server.xml (which doesn't have this 
> problem): we explicitly delete the destination file first and then copy it 
> over. We should do the same with logging.properties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HDFS-4980) Incorrect logging.properties file for hadoop-httpfs

2013-07-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas moved HADOOP-9721 to HDFS-4980:
---

  Component/s: (was: build)
   (was: conf)
   build
Fix Version/s: (was: 2.1.0-beta)
   (was: 3.0.0)
   2.1.0-beta
   3.0.0
Affects Version/s: (was: 2.0.4-alpha)
   2.0.4-alpha
  Key: HDFS-4980  (was: HADOOP-9721)
  Project: Hadoop HDFS  (was: Hadoop Common)

> Incorrect logging.properties file for hadoop-httpfs
> ---
>
> Key: HDFS-4980
> URL: https://issues.apache.org/jira/browse/HDFS-4980
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.4-alpha
> Environment: Maven 3.0.2 on CentOS6.2
>Reporter: Mark Grover
>Assignee: Mark Grover
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HADOOP-9721.1.patch
>
>
> Tomcat ships with a default logging.properties file that's generic enough to 
> be used; however, we already override it with a custom log file, as seen at 
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml#L557
> This is necessary so that the log location can be controlled by the 
> httpfs.log.dir env variable (instead of the default catalina.base/logs), the 
> prefix of the log file names can be controlled, etc.
> In any case, this overriding doesn't always happen. In my environment, the 
> default logging.properties file doesn't get overridden by our custom one. The 
> reason is that the destination logging.properties file already exists, so the 
> maven pom's copy command silently skips it. If we explicitly delete the 
> destination logging.properties file first, the copy completes successfully. 
> Note that we do the same thing with server.xml (which doesn't have this 
> problem): we explicitly delete the destination file first and then copy it 
> over. We should do the same with logging.properties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4374) Display NameNode startup progress in UI

2013-07-11 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706029#comment-13706029
 ] 

Jing Zhao commented on HDFS-4374:
-

+1 for the patch.

> Display NameNode startup progress in UI
> ---
>
> Key: HDFS-4374
> URL: https://issues.apache.org/jira/browse/HDFS-4374
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4374.1.patch, HDFS-4374.2.patch
>
>
> Display the information about the NameNode's startup progress in the NameNode 
> web UI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4794) Browsing filesystem via webui throws kerberos exception when NN service RPC is enabled in a secure cluster

2013-07-11 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706021#comment-13706021
 ] 

Benoy Antony commented on HDFS-4794:


It's broken.
We applied the attached patch to fix it on our clusters.


> Browsing filesystem via webui throws kerberos exception when NN service RPC 
> is enabled in a secure cluster
> --
>
> Key: HDFS-4794
> URL: https://issues.apache.org/jira/browse/HDFS-4794
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.1.2
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HDFS-4794.patch
>
>
> Browsing filesystem via webui throws kerberos exception when NN service RPC 
> is enabled in a secure cluster
> To reproduce this error:
> # Enable security.
> # Enable service RPC by setting dfs.namenode.servicerpc-address, using a 
> different port than the RPC port.
> # Click on "Browse the filesystem" on the NameNode web UI.
> The following error will be shown:
> Call to NN001/12.123.123.01:8030 failed on local exception: 
> java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4977) Change "Checkpoint Size" of web ui of SecondaryNameNode

2013-07-11 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-4977:
-

Attachment: HDFS-4977.patch

> Change "Checkpoint Size" of web ui of SecondaryNameNode
> ---
>
> Key: HDFS-4977
> URL: https://issues.apache.org/jira/browse/HDFS-4977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Shinichi Yamashita
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-4977-2.patch, HDFS-4977.patch, HDFS-4977.patch
>
>
> Since 2.0, the SecondaryNameNode checkpoint is driven by 
> "dfs.namenode.checkpoint.period" and "dfs.namenode.checkpoint.txns".
> Because "Checkpoint Size" is still displayed in status.jsp of the 
> SecondaryNameNode, it should be modified accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4373) Add HTTP API for querying NameNode startup progress

2013-07-11 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706001#comment-13706001
 ] 

Jing Zhao commented on HDFS-4373:
-

Thanks Chris! The latest patch looks pretty good to me. +1 for the patch.

> Add HTTP API for querying NameNode startup progress
> ---
>
> Key: HDFS-4373
> URL: https://issues.apache.org/jira/browse/HDFS-4373
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4373.1.patch, HDFS-4373.2.patch, HDFS-4373.3.patch
>
>
> Provide an HTTP API for non-browser clients to query the NameNode's current 
> progress through startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4969) WebhdfsFileSystem expects non-standard WEBHDFS Json element

2013-07-11 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-4969:
-

   Resolution: Fixed
Fix Version/s: 2.1.0-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Robert. Committed to trunk, branch-2 and branch-2.1.

> WebhdfsFileSystem expects non-standard WEBHDFS Json element
> ---
>
> Key: HDFS-4969
> URL: https://issues.apache.org/jira/browse/HDFS-4969
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test, webhdfs
>Affects Versions: 3.0.0, 2.0.5-alpha
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Fix For: 2.1.0-beta
>
> Attachments: HDFS-4969.patch, HDFS-4969.patch
>
>
> org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem has two 
> test failures due to an NPE:
> {noformat}
> testOperation[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
>   Time elapsed: 375 sec  <<< ERROR!
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.web.JsonUtil.toFileStatus(JsonUtil.java:251)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:629)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:639)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testListStatus(BaseTestHttpFSWith.java:292)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.operation(BaseTestHttpFSWith.java:506)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testOperation(BaseTestHttpFSWith.java:558)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:74)
>   at 
> org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
>   at 
> org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
>   at 
> org.apache.hadoop.test.TestJettyHelper$1.evaluate(TestJettyHelper.java:53)
>   at 
> org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:42)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:24)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMet

[jira] [Commented] (HDFS-4887) TestNNThroughputBenchmark exits abruptly

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705862#comment-13705862
 ] 

Hudson commented on HDFS-4887:
--

Integrated in Hadoop-Mapreduce-trunk #1484 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1484/])
HDFS-4887. TestNNThroughputBenchmark exits abruptly. Contributed by Kihwal 
Lee. (Revision 1501841)

 Result = SUCCESS
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501841
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


> TestNNThroughputBenchmark exits abruptly
> 
>
> Key: HDFS-4887
> URL: https://issues.apache.org/jira/browse/HDFS-4887
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HDFS-4887.patch, HDFS-4887.patch
>
>
> After HDFS-4840, TestNNThroughputBenchmark exits in the middle. This is 
> because ReplicationMonitor is stopped while the NN is still running, which 
> is only valid during testing. In normal cases, the ReplicationMonitor thread 
> runs all the time once started; in standby or safemode, it just skips 
> calculating DN work. I think NNThroughputBenchmark needs to use ExitUtil to 
> prevent termination, rather than modifying ReplicationMonitor.
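
As a sketch of the suggested ExitUtil approach (illustrative, not the committed 
patch): with system exits disabled, an unexpected terminate() surfaces as a 
catchable exception instead of killing the benchmark JVM.

{code}
import org.apache.hadoop.util.ExitUtil;

public class BenchmarkExitGuard {
  public static void main(String[] args) {
    // After this call, ExitUtil.terminate(..) throws ExitException
    // instead of invoking System.exit(..).
    ExitUtil.disableSystemExit();
    try {
      // ... run the benchmark workload here ...
      ExitUtil.terminate(1, "simulated unexpected termination");
    } catch (ExitUtil.ExitException e) {
      System.err.println("Intercepted exit: " + e.getMessage());
    }
  }
}
{code}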

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4797) BlockScanInfo does not override equals(..) and hashCode() consistently

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705859#comment-13705859
 ] 

Hudson commented on HDFS-4797:
--

Integrated in Hadoop-Mapreduce-trunk #1484 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1484/])
updating CHANGES.txt after committing 
MAPREDUCE-5333,HADOOP-9661,HADOOP-9355,HADOOP-9673,HADOOP-9414,HADOOP-9416,HDFS-4797,YARN-866,YARN-736,YARN-883
 to 2.1-beta branch (Revision 1502075)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502075
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


> BlockScanInfo does not override equals(..) and hashCode() consistently
> --
>
> Key: HDFS-4797
> URL: https://issues.apache.org/jira/browse/HDFS-4797
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: h4797_20130513b.patch, h4797_20130513.patch
>
>
> In the code below, equals(..) compares lastScanTime but hashCode() is 
> computed from the block ID.  Therefore, there could be two BlockScanInfo 
> objects which are equal but have different hash codes.
> {code}
> //BlockScanInfo
> @Override
> public int hashCode() {
>   return block.hashCode();
> }
> 
> @Override
> public boolean equals(Object other) {
>   return other instanceof BlockScanInfo &&
>  compareTo((BlockScanInfo)other) == 0;
> }
> {code}
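
One consistent pairing, as a sketch that keys both methods on the block (this 
illustrates the contract; it is not necessarily the committed patch):

{code}
// Sketch: base equals(..) and hashCode() on the same key so that
// equal objects always share a hash code.
@Override
public int hashCode() {
  return block.hashCode();
}

@Override
public boolean equals(Object other) {
  if (this == other) {
    return true;
  }
  return other instanceof BlockScanInfo
      && block.equals(((BlockScanInfo) other).block);
}
{code}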

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4645) Move from randomly generated block ID to sequentially generated block ID

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705850#comment-13705850
 ] 

Hudson commented on HDFS-4645:
--

Integrated in Hadoop-Mapreduce-trunk #1484 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1484/])
In CHANGES.txt, move HDFS-4908 and HDFS-4645 to 2.1.0-beta. (Revision 
1502009)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502009
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Move from randomly generated block ID to sequentially generated block ID
> 
>
> Key: HDFS-4645
> URL: https://issues.apache.org/jira/browse/HDFS-4645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Fix For: 2.1.0-beta
>
> Attachments: editsStored, HDFS-4645.001.patch, HDFS-4645.002.patch, 
> HDFS-4645.003.patch, HDFS-4645.004.patch, HDFS-4645.005.patch, 
> HDFS-4645.006.patch, HDFS-4645.branch-2.patch, SequentialblockIDallocation.pdf
>
>
> Currently block IDs are randomly generated. This means there is no pattern to 
> block ID generation, and no guarantees, such as uniqueness of block IDs over 
> the lifetime of the system, can be made. I propose using SequentialNumber for 
> block ID generation.
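
As a sketch of the proposal (the reserved-range constant is an assumption for 
illustration, not the committed patch), a block ID generator can extend 
org.apache.hadoop.util.SequentialNumber:

{code}
import org.apache.hadoop.util.SequentialNumber;

public class SequentialBlockIdGenerator extends SequentialNumber {
  // Assumed reserved range so legacy/random IDs and new IDs cannot collide.
  public static final long LAST_RESERVED_BLOCK_ID = 1024L * 1024 * 1024;

  public SequentialBlockIdGenerator() {
    super(LAST_RESERVED_BLOCK_ID);
  }

  // Thread-safe and strictly increasing, so each new block ID is unique
  // for the lifetime of the namespace.
  public long nextBlockId() {
    return nextValue();
  }
}
{code}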

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4908) Reduce snapshot inode memory usage

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705856#comment-13705856
 ] 

Hudson commented on HDFS-4908:
--

Integrated in Hadoop-Mapreduce-trunk #1484 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1484/])
In CHANGES.txt, move HDFS-4908 and HDFS-4645 to 2.1.0-beta. (Revision 
1502009)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502009
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Reduce snapshot inode memory usage
> --
>
> Key: HDFS-4908
> URL: https://issues.apache.org/jira/browse/HDFS-4908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, snapshots
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 2.1.0-beta
>
> Attachments: editsStored, h4908_20130617b.patch, 
> h4908_20130617c.patch, h4908_20130617.patch, h4908_20130619.patch, 
> h4908_20130620.patch
>
>
> Snapshots currently use INodeFile and INodeDirectory for storing previous 
> file/dir states as snapshot inodes.  However, INodeFile and INodeDirectory 
> have some fields not used by snapshot inodes, such as the parent reference, 
> inode id, and block/children references.  The memory footprint could be 
> reduced by introducing specific classes for snapshot inodes.
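
To illustrate the idea (a sketch with assumed field names, not the committed 
design): a snapshot copy only needs the attributes of the old state, so it can 
drop the references a live inode carries.

{code}
// Sketch only: a slimmed-down snapshot copy without the parent, inode-id,
// and block/children references of the full inode classes.
class SnapshotCopySketch {
  final byte[] name;
  final long permission;        // packed permission status
  final long modificationTime;
  final long accessTime;

  SnapshotCopySketch(byte[] name, long permission,
      long modificationTime, long accessTime) {
    this.name = name;
    this.permission = permission;
    this.modificationTime = modificationTime;
    this.accessTime = accessTime;
  }
}
{code}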

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4962) Use enum for nfs constants

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705849#comment-13705849
 ] 

Hudson commented on HDFS-4962:
--

Integrated in Hadoop-Mapreduce-trunk #1484 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1484/])
HDFS-4962. Use enum for nfs constants. Contributed by Tsz Wo (Nicholas) 
SZE. (Revision 1501851)

 Result = SUCCESS
jing9 : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501851
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountResponse.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Constant.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcAcceptedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcCall.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcDeniedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcMessage.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcAcceptedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcCall.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcDeniedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcMessage.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcReply.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestOutOfOrderWrite.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestRpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use enum for nfs constants
> --
>
> Key: HDFS-4962
> URL: https://issues.apache.org/jira/browse/HDFS-4962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: h4962_20130709b.patch, h4962_20130709.patch, 
> h4962_20130710b.patch, h4962_20130710.patch
>
>
> The constants defined in MountInterface and some other classes would be 
> better expressed as enum types instead of lists of ints.
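
As a sketch of the conversion (hypothetical names and values, not the committed 
patch), an int-constant list becomes an enum that still maps to the 
on-the-wire integers:

{code}
public enum MountStatusSketch {
  OK(0), PERM(1), NOENT(2), IO(5), ACCES(13);

  private final int value;

  MountStatusSketch(int value) {
    this.value = value;
  }

  public int getValue() {
    return value;
  }

  // Reverse lookup for decoding a received int into its enum constant.
  public static MountStatusSketch fromValue(int value) {
    for (MountStatusSketch s : values()) {
      if (s.value == value) {
        return s;
      }
    }
    throw new IllegalArgumentException("Unknown status: " + value);
  }
}
{code}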

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4969) WebhdfsFileSystem expects non-standard WEBHDFS Json element

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705854#comment-13705854
 ] 

Hudson commented on HDFS-4969:
--

Integrated in Hadoop-Mapreduce-trunk #1484 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1484/])
HDFS-4969. WebhdfsFileSystem expects non-standard WEBHDFS Json element. 
(rkanter via tucu) (Revision 1501868)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501868
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java


> WebhdfsFileSystem expects non-standard WEBHDFS Json element
> ---
>
> Key: HDFS-4969
> URL: https://issues.apache.org/jira/browse/HDFS-4969
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test, webhdfs
>Affects Versions: 3.0.0, 2.0.5-alpha
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: HDFS-4969.patch, HDFS-4969.patch
>
>
> org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem has two 
> test failures due to an NPE:
> {noformat}
> testOperation[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
>   Time elapsed: 375 sec  <<< ERROR!
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.web.JsonUtil.toFileStatus(JsonUtil.java:251)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:629)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:639)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testListStatus(BaseTestHttpFSWith.java:292)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.operation(BaseTestHttpFSWith.java:506)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testOperation(BaseTestHttpFSWith.java:558)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:74)
>   at 
> org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
>   at 
> org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
>   at 
> org.apache.hadoop.test.TestJettyHelper$1.evaluate(TestJettyHelper.java:53)
>   at 
> org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:42)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:24)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>

[jira] [Commented] (HDFS-4372) Track NameNode startup progress

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705853#comment-13705853
 ] 

Hudson commented on HDFS-4372:
--

Integrated in Hadoop-Mapreduce-trunk #1484 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1484/])
HDFS-4372. Track NameNode startup progress. Contributed by Chris Nauroth. 
(Revision 1502120)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502120
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/MetricsAsserts.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/AbstractTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Phase.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/PhaseTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgress.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressMetrics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressView.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Status.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Step.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepType.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressTestHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/TestStartupProgress.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/TestStartupProgressMetrics.java


> Track NameNode startup progress
> ---
>
> Key: HDFS-4372
> URL: https://issues.apache.org/jira/browse/HDFS-4372
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HDFS-4372.1.patch, HDFS-4372.2.patch, HDFS-4372.3.patch, 
> HDFS-4372.4.patch, HDFS-4372.4.rebase.patch
>
>
> Track detailed progress information about the steps of NameNode start

[jira] [Commented] (HDFS-4908) Reduce snapshot inode memory usage

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705795#comment-13705795
 ] 

Hudson commented on HDFS-4908:
--

Integrated in Hadoop-Hdfs-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1457/])
In CHANGES.txt, move HDFS-4908 and HDFS-4645 to 2.1.0-beta. (Revision 
1502009)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502009
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Reduce snapshot inode memory usage
> --
>
> Key: HDFS-4908
> URL: https://issues.apache.org/jira/browse/HDFS-4908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, snapshots
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 2.1.0-beta
>
> Attachments: editsStored, h4908_20130617b.patch, 
> h4908_20130617c.patch, h4908_20130617.patch, h4908_20130619.patch, 
> h4908_20130620.patch
>
>
> Snapshots currently use INodeFile and INodeDirectory for storing previous 
> file/dir states as snapshot inodes.  However, INodeFile and INodeDirectory 
> have some fields not used by snapshot inodes, such as the parent reference, 
> inode id, and block/children references.  The memory footprint could be 
> reduced by introducing specific classes for snapshot inodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4797) BlockScanInfo does not override equals(..) and hashCode() consistently

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705799#comment-13705799
 ] 

Hudson commented on HDFS-4797:
--

Integrated in Hadoop-Hdfs-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1457/])
updating CHANGES.txt after committing 
MAPREDUCE-5333,HADOOP-9661,HADOOP-9355,HADOOP-9673,HADOOP-9414,HADOOP-9416,HDFS-4797,YARN-866,YARN-736,YARN-883
 to 2.1-beta branch (Revision 1502075)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502075
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


> BlockScanInfo does not override equals(..) and hashCode() consistently
> --
>
> Key: HDFS-4797
> URL: https://issues.apache.org/jira/browse/HDFS-4797
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: h4797_20130513b.patch, h4797_20130513.patch
>
>
> In the code below, equals(..) compares lastScanTime but hashCode() is 
> computed from the block ID.  Therefore, there could be two BlockScanInfo 
> objects which are equal but have different hash codes.
> {code}
> //BlockScanInfo
> @Override
> public int hashCode() {
>   return block.hashCode();
> }
> 
> @Override
> public boolean equals(Object other) {
>   return other instanceof BlockScanInfo &&
>  compareTo((BlockScanInfo)other) == 0;
> }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4645) Move from randomly generated block ID to sequentially generated block ID

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705789#comment-13705789
 ] 

Hudson commented on HDFS-4645:
--

Integrated in Hadoop-Hdfs-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1457/])
In CHANGES.txt, move HDFS-4908 and HDFS-4645 to 2.1.0-beta. (Revision 
1502009)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502009
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Move from randomly generated block ID to sequentially generated block ID
> 
>
> Key: HDFS-4645
> URL: https://issues.apache.org/jira/browse/HDFS-4645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Fix For: 2.1.0-beta
>
> Attachments: editsStored, HDFS-4645.001.patch, HDFS-4645.002.patch, 
> HDFS-4645.003.patch, HDFS-4645.004.patch, HDFS-4645.005.patch, 
> HDFS-4645.006.patch, HDFS-4645.branch-2.patch, SequentialblockIDallocation.pdf
>
>
> Currently block IDs are randomly generated. This means there is no pattern to 
> block ID generation, and no guarantees, such as uniqueness of block IDs over 
> the lifetime of the system, can be made. I propose using SequentialNumber for 
> block ID generation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4887) TestNNThroughputBenchmark exits abruptly

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705803#comment-13705803
 ] 

Hudson commented on HDFS-4887:
--

Integrated in Hadoop-Hdfs-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1457/])
HDFS-4887. TestNNThroughputBenchmark exits abruptly. Contributed by Kihwal 
Lee. (Revision 1501841)

 Result = FAILURE
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501841
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


> TestNNThroughputBenchmark exits abruptly
> 
>
> Key: HDFS-4887
> URL: https://issues.apache.org/jira/browse/HDFS-4887
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HDFS-4887.patch, HDFS-4887.patch
>
>
> After HDFS-4840, TestNNThroughputBenchmark exits in the middle. This is 
> because ReplicationMonitor is stopped while the NN is still running, which 
> is only valid during testing. In normal cases, the ReplicationMonitor thread 
> runs all the time once started; in standby or safemode, it just skips 
> calculating DN work. I think NNThroughputBenchmark needs to use ExitUtil to 
> prevent termination, rather than modifying ReplicationMonitor.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4372) Track NameNode startup progress

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705792#comment-13705792
 ] 

Hudson commented on HDFS-4372:
--

Integrated in Hadoop-Hdfs-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1457/])
HDFS-4372. Track NameNode startup progress. Contributed by Chris Nauroth. 
(Revision 1502120)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502120
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/MetricsAsserts.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/AbstractTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Phase.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/PhaseTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgress.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressMetrics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressView.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Status.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Step.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepType.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressTestHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/TestStartupProgress.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/TestStartupProgressMetrics.java


> Track NameNode startup progress
> ---
>
> Key: HDFS-4372
> URL: https://issues.apache.org/jira/browse/HDFS-4372
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HDFS-4372.1.patch, HDFS-4372.2.patch, HDFS-4372.3.patch, 
> HDFS-4372.4.patch, HDFS-4372.4.rebase.patch
>
>
> Track detailed progress information about the steps of NameNode startup to 
> e

[jira] [Commented] (HDFS-4962) Use enum for nfs constants

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705788#comment-13705788
 ] 

Hudson commented on HDFS-4962:
--

Integrated in Hadoop-Hdfs-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1457/])
HDFS-4962. Use enum for nfs constants. Contributed by Tsz Wo (Nicholas) 
SZE. (Revision 1501851)

 Result = FAILURE
jing9 : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501851
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountResponse.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Constant.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcAcceptedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcCall.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcDeniedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcMessage.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcAcceptedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcCall.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcDeniedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcMessage.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcReply.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestOutOfOrderWrite.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestRpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use enum for nfs constants
> --
>
> Key: HDFS-4962
> URL: https://issues.apache.org/jira/browse/HDFS-4962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: h4962_20130709b.patch, h4962_20130709.patch, 
> h4962_20130710b.patch, h4962_20130710.patch
>
>
> The constants defined in MountInterface and some other classes would be 
> better expressed as enum types instead of lists of ints.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4969) WebhdfsFileSystem expects non-standard WEBHDFS Json element

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705793#comment-13705793
 ] 

Hudson commented on HDFS-4969:
--

Integrated in Hadoop-Hdfs-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1457/])
HDFS-4969. WebhdfsFileSystem expects non-standard WEBHDFS Json element. 
(rkanter via tucu) (Revision 1501868)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501868
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java


> WebhdfsFileSystem expects non-standard WEBHDFS Json element
> ---
>
> Key: HDFS-4969
> URL: https://issues.apache.org/jira/browse/HDFS-4969
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test, webhdfs
>Affects Versions: 3.0.0, 2.0.5-alpha
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: HDFS-4969.patch, HDFS-4969.patch
>
>
> org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem has two 
> test failures due to an NPE:
> {noformat}
> testOperation[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
>   Time elapsed: 375 sec  <<< ERROR!
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.web.JsonUtil.toFileStatus(JsonUtil.java:251)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:629)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:639)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testListStatus(BaseTestHttpFSWith.java:292)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.operation(BaseTestHttpFSWith.java:506)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testOperation(BaseTestHttpFSWith.java:558)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:74)
>   at 
> org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
>   at 
> org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
>   at 
> org.apache.hadoop.test.TestJettyHelper$1.evaluate(TestJettyHelper.java:53)
>   at 
> org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:42)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:24)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> s

[jira] [Commented] (HDFS-4979) Implement retry cache on the namenode

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705715#comment-13705715
 ] 

Hadoop QA commented on HDFS-4979:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591817/HDFS-4979.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4631//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4631//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4631//console

This message is automatically generated.

> Implement retry cache on the namenode
> -
>
> Key: HDFS-4979
> URL: https://issues.apache.org/jira/browse/HDFS-4979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
> Attachments: HDFS-4979.1.patch, HDFS-4979.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4887) TestNNThroughputBenchmark exits abruptly

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705701#comment-13705701
 ] 

Hudson commented on HDFS-4887:
--

Integrated in Hadoop-Yarn-trunk #267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/267/])
HDFS-4887. TestNNThroughputBenchmark exits abruptly. Contributed by Kihwal 
Lee. (Revision 1501841)

 Result = SUCCESS
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501841
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


> TestNNThroughputBenchmark exits abruptly
> 
>
> Key: HDFS-4887
> URL: https://issues.apache.org/jira/browse/HDFS-4887
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, test
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: HDFS-4887.patch, HDFS-4887.patch
>
>
> After HDFS-4840, TestNNThroughputBenchmark exits partway through the run. This 
> happens because ReplicationMonitor is stopped while the NN is still running, a 
> situation that only arises during testing. In normal operation, the 
> ReplicationMonitor thread runs for as long as the NN once started; in standby 
> or safe mode it merely skips calculating DN work. I think 
> NNThroughputBenchmark should use ExitUtil to prevent termination, rather than 
> modify ReplicationMonitor.
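
A sketch of the {{ExitUtil}} approach, assuming the benchmark's exit path is 
routed through {{ExitUtil.terminate}} (the {{runBenchmark}} entry point named 
below is illustrative):

{code}
import org.apache.hadoop.util.ExitUtil;

// In test setup: exits routed through ExitUtil throw ExitException
// instead of killing the JVM, so the test survives the termination.
ExitUtil.disableSystemExit();
try {
  NNThroughputBenchmark.runBenchmark(conf, args);  // illustrative entry point
} catch (ExitUtil.ExitException e) {
  // The benchmark attempted to exit; assert on e.status instead of dying.
}
{code}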

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4908) Reduce snapshot inode memory usage

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705695#comment-13705695
 ] 

Hudson commented on HDFS-4908:
--

Integrated in Hadoop-Yarn-trunk #267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/267/])
In CHANGES.txt, move HDFS-4908 and HDFS-4645 to 2.1.0-beta. (Revision 
1502009)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502009
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Reduce snapshot inode memory usage
> --
>
> Key: HDFS-4908
> URL: https://issues.apache.org/jira/browse/HDFS-4908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, snapshots
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 2.1.0-beta
>
> Attachments: editsStored, h4908_20130617b.patch, 
> h4908_20130617c.patch, h4908_20130617.patch, h4908_20130619.patch, 
> h4908_20130620.patch
>
>
> Snapshots currently use INodeFile and INodeDirectory for storing previous 
> file/dir states as snapshot inodes.  However, INodeFile and INodeDirectory 
> carry fields that snapshot inodes never use, such as the parent reference, 
> inode id, and block/children references.  The memory footprint could be 
> reduced by introducing classes specific to snapshot inodes.
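
An illustrative (hypothetical, not taken from the patch) shape of such a 
class: a snapshot copy keeps only the attributes that can differ between 
snapshots.

{code}
// Illustrative only: a trimmed-down record of a file's state in a snapshot.
// Unlike INodeFile it carries no parent reference, inode id, or block list,
// which is where the memory savings would come from.
class FileSnapshotAttributes {
  final byte[] name;
  final long permission;        // packed user/group/mode bits
  final long modificationTime;
  final long accessTime;

  FileSnapshotAttributes(byte[] name, long permission,
      long modificationTime, long accessTime) {
    this.name = name;
    this.permission = permission;
    this.modificationTime = modificationTime;
    this.accessTime = accessTime;
  }
}
{code}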

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4969) WebhdfsFileSystem expects non-standard WEBHDFS Json element

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705693#comment-13705693
 ] 

Hudson commented on HDFS-4969:
--

Integrated in Hadoop-Yarn-trunk #267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/267/])
HDFS-4969. WebhdfsFileSystem expects non-standard WEBHDFS Json element. 
(rkanter via tucu) (Revision 1501868)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501868
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java


> WebhdfsFileSystem expects non-standard WEBHDFS Json element
> ---
>
> Key: HDFS-4969
> URL: https://issues.apache.org/jira/browse/HDFS-4969
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test, webhdfs
>Affects Versions: 3.0.0, 2.0.5-alpha
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: HDFS-4969.patch, HDFS-4969.patch
>
>
> org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem has two 
> test failures due to an NPE:
> {noformat}
> testOperation[7](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem)
>   Time elapsed: 375 sec  <<< ERROR!
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.web.JsonUtil.toFileStatus(JsonUtil.java:251)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:629)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:639)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testListStatus(BaseTestHttpFSWith.java:292)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.operation(BaseTestHttpFSWith.java:506)
>   at 
> org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testOperation(BaseTestHttpFSWith.java:558)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:74)
>   at 
> org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
>   at 
> org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
>   at 
> org.apache.hadoop.test.TestJettyHelper$1.evaluate(TestJettyHelper.java:53)
>   at 
> org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:42)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:24)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun

[jira] [Commented] (HDFS-4797) BlockScanInfo does not override equals(..) and hashCode() consistently

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705698#comment-13705698
 ] 

Hudson commented on HDFS-4797:
--

Integrated in Hadoop-Yarn-trunk #267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/267/])
updating CHANGES.txt after committing 
MAPREDUCE-5333,HADOOP-9661,HADOOP-9355,HADOOP-9673,HADOOP-9414,HADOOP-9416,HDFS-4797,YARN-866,YARN-736,YARN-883
 to 2.1-beta branch (Revision 1502075)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502075
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


> BlockScanInfo does not override equals(..) and hashCode() consistently
> --
>
> Key: HDFS-4797
> URL: https://issues.apache.org/jira/browse/HDFS-4797
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: h4797_20130513b.patch, h4797_20130513.patch
>
>
> In the code below, equals(..) compares lastScanTime but hashCode() is 
> computed from the block ID.  As a result, two BlockScanInfo objects can be 
> equal yet have different hash codes, violating the equals/hashCode contract.
> {code}
> //BlockScanInfo
> @Override
> public int hashCode() {
>   return block.hashCode();
> }
> 
> @Override
> public boolean equals(Object other) {
>   return other instanceof BlockScanInfo &&
>  compareTo((BlockScanInfo)other) == 0;
> }
> {code}
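
A consistent pair derives both methods from the same key. As a sketch, keying 
both on the block (matching the existing {{hashCode()}}):

{code}
@Override
public int hashCode() {
  return block.hashCode();
}

@Override
public boolean equals(Object that) {
  if (this == that) {
    return true;
  }
  // Compare the same key that hashCode() uses, so equal objects are
  // guaranteed to have equal hash codes.
  return that instanceof BlockScanInfo
      && block.equals(((BlockScanInfo) that).block);
}
{code}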

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4645) Move from randomly generated block ID to sequentially generated block ID

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705689#comment-13705689
 ] 

Hudson commented on HDFS-4645:
--

Integrated in Hadoop-Yarn-trunk #267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/267/])
In CHANGES.txt, move HDFS-4908 and HDFS-4645 to 2.1.0-beta. (Revision 
1502009)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502009
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Move from randomly generated block ID to sequentially generated block ID
> 
>
> Key: HDFS-4645
> URL: https://issues.apache.org/jira/browse/HDFS-4645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Arpit Agarwal
> Fix For: 2.1.0-beta
>
> Attachments: editsStored, HDFS-4645.001.patch, HDFS-4645.002.patch, 
> HDFS-4645.003.patch, HDFS-4645.004.patch, HDFS-4645.005.patch, 
> HDFS-4645.006.patch, HDFS-4645.branch-2.patch, SequentialblockIDallocation.pdf
>
>
> Currently block IDs are randomly generated. This means there is no pattern to 
> block ID generation, and no guarantees can be made, such as the uniqueness of 
> a block ID over the lifetime of the system. I propose using SequentialNumber 
> for block ID generation.
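
A sketch of the proposal, assuming {{SequentialNumber}} is the 
{{AtomicLong}}-backed counter in {{org.apache.hadoop.util}} (the subclass and 
its starting value below are illustrative):

{code}
import org.apache.hadoop.util.SequentialNumber;

/** Illustrative generator: block IDs handed out monotonically. */
class SequentialBlockIdGenerator extends SequentialNumber {
  SequentialBlockIdGenerator(long lastAllocatedId) {
    super(lastAllocatedId);  // resume from the last persisted ID
  }
}

// Each nextValue() call then yields a unique, strictly increasing ID for
// the lifetime of the system, which random generation cannot guarantee:
//   long id = generator.nextValue();
{code}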

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4372) Track NameNode startup progress

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705692#comment-13705692
 ] 

Hudson commented on HDFS-4372:
--

Integrated in Hadoop-Yarn-trunk #267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/267/])
HDFS-4372. Track NameNode startup progress. Contributed by Chris Nauroth. 
(Revision 1502120)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1502120
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/MetricsAsserts.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/AbstractTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Phase.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/PhaseTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgress.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressMetrics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressView.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Status.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/Step.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepTracking.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepType.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressTestHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/TestStartupProgress.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/TestStartupProgressMetrics.java


> Track NameNode startup progress
> ---
>
> Key: HDFS-4372
> URL: https://issues.apache.org/jira/browse/HDFS-4372
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HDFS-4372.1.patch, HDFS-4372.2.patch, HDFS-4372.3.patch, 
> HDFS-4372.4.patch, HDFS-4372.4.rebase.patch
>
>
> Track detailed progress information about the steps of NameNode startup to 
> ena

[jira] [Commented] (HDFS-4962) Use enum for nfs constants

2013-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705688#comment-13705688
 ] 

Hudson commented on HDFS-4962:
--

Integrated in Hadoop-Yarn-trunk #267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/267/])
HDFS-4962. Use enum for nfs constants. Contributed by Tsz Wo (Nicholas) 
SZE. (Revision 1501851)

 Result = SUCCESS
jing9 : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1501851
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountResponse.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Constant.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcAcceptedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcCall.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcDeniedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcMessage.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcAcceptedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcCall.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcDeniedReply.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcMessage.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestRpcReply.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestOutOfOrderWrite.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestRpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use enum for nfs constants
> --
>
> Key: HDFS-4962
> URL: https://issues.apache.org/jira/browse/HDFS-4962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: h4962_20130709b.patch, h4962_20130709.patch, 
> h4962_20130710b.patch, h4962_20130710.patch
>
>
> The constants defined in MountInterface and some other classes would be better 
> expressed as enum types rather than as lists of int constants.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4912) Cleanup FSNamesystem#startFileInternal

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705681#comment-13705681
 ] 

Hadoop QA commented on HDFS-4912:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591808/HDFS-4912.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4630//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4630//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4630//console

This message is automatically generated.

> Cleanup FSNamesystem#startFileInternal
> --
>
> Key: HDFS-4912
> URL: https://issues.apache.org/jira/browse/HDFS-4912
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4912.1.patch, HDFS-4912.patch
>
>
> FSNamesystem#startFileInternal is used by both create and append. This 
> results in ugly if-else conditions to handle the append and create scenarios. 
> The method can be refactored and the code cleaned up.
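
The usual shape of such a cleanup, sketched with illustrative method names 
(not the actual patch): split the create and append entry points and push the 
shared checks into a helper, instead of branching on a flag throughout one 
long method.

{code}
// Illustrative refactoring only.
private void startFileForCreate(String src) {
  verifyFileState(src);   // checks shared by both paths
  // create-only logic
}

private void startFileForAppend(String src) {
  verifyFileState(src);   // same shared checks, no append/create flag
  // append-only logic
}

private void verifyFileState(String src) {
  // permissions, safe mode, and lease checks common to create and append
}
{code}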

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4979) Implement retry cache on the namenode

2013-07-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4979:
--

Attachment: HDFS-4979.1.patch

Updated patch. RetryCache should only be used for RPC requests, not for 
requests received on the webhdfs interface.
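
A sketch of the guard, assuming {{Server.isRpcInvocation()}} is the check used 
to tell the two paths apart (the {{retryCacheEnabled}} field is illustrative):

{code}
import org.apache.hadoop.ipc.Server;

// Only RPC calls carry the (clientId, callId) identity that marks a retried
// request; webhdfs calls reach the namesystem without one, so consulting the
// cache for them could wrongly replay one HTTP request as another.
private boolean isRetryCacheEnabledForCall() {
  return retryCacheEnabled && Server.isRpcInvocation();
}
{code}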


> Implement retry cache on the namenode
> -
>
> Key: HDFS-4979
> URL: https://issues.apache.org/jira/browse/HDFS-4979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
> Attachments: HDFS-4979.1.patch, HDFS-4979.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4979) Implement retry cache on the namenode

2013-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705625#comment-13705625
 ] 

Hadoop QA commented on HDFS-4979:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591773/HDFS-4979.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  org.apache.hadoop.hdfs.server.namenode.TestMetaSave
  org.apache.hadoop.hdfs.server.namenode.TestFsck
  
org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
  org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4629//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4629//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4629//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4629//console

This message is automatically generated.

> Implement retry cache on the namenode
> -
>
> Key: HDFS-4979
> URL: https://issues.apache.org/jira/browse/HDFS-4979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
> Attachments: HDFS-4979.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4912) Cleanup FSNamesystem#startFileInternal

2013-07-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4912:
--

Attachment: HDFS-4912.1.patch

Updated patch to fix the unit test failures.

> Cleanup FSNamesystem#startFileInternal
> --
>
> Key: HDFS-4912
> URL: https://issues.apache.org/jira/browse/HDFS-4912
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4912.1.patch, HDFS-4912.patch
>
>
> FSNamesystem#startFileInternal is used by both create and append. This 
> results in ugly if-else conditions to handle the append and create scenarios. 
> The method can be refactored and the code cleaned up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

