[jira] [Commented] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125621#comment-13125621
 ] 

Harsh J commented on HADOOP-7736:
-

Thanks for clearing it up, Jakob and Aaron. Thanks for the review as well!

I've committed this to trunk. 

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-7736:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Resolved in trunk (0.24+)

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125613#comment-13125613
 ] 

Aaron T. Myers commented on HADOOP-7736:


+1, the patch looks good to me.

I believe that according to the Hadoop bylaws, all patches require a +1 from a 
committer before they can be committed. I've seen a few exceptions to that in 
practice, such as for trivial documentation fixes or for back-ports from other 
branches. In general it's safest to just find a committer who's willing to 
review it, though.

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125610#comment-13125610
 ] 

Harsh J commented on HADOOP-7736:
-

Jakob,

Well, yes, that too. Isn't this trivial enough, though? I'm unclear on exactly 
where a +1 is required, so I'll assume it's needed for everything :)

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Jakob Homan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125609#comment-13125609
 ] 

Jakob Homan commented on HADOOP-7736:
-

bq. Will commit once the build's tests all pass and if there are no objections.
and after the patch gets a +1, of course.

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125596#comment-13125596
 ] 

Harsh J commented on HADOOP-7736:
-

No new tests because the existing tests for that particular constructor 
already cover the change, and they pass.

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125593#comment-13125593
 ] 

Hadoop QA commented on HADOOP-7736:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498704/HADOOP-7736.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/288//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/288//console

This message is automatically generated.

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-7736:


Status: Patch Available  (was: Open)

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-7736:


Attachment: HADOOP-7736.patch

{code}

---
 T E S T S
---
Running org.apache.hadoop.fs.TestPath
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.076 sec

Results :

Tests run: 14, Failures: 0, Errors: 0, Skipped: 0
{code}

Patch that removes the redundant call. Will commit once the build's tests all pass 
and if there are no objections.
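
For illustration, a hedged sketch of the redundant call this patch removes. The class below is a simplified stand-in, not the actual {{org.apache.hadoop.fs.Path}} source; names and bodies are illustrative only.

{code}
// Hedged sketch only -- simplified stand-in for org.apache.hadoop.fs.Path.
import java.net.URI;

class PathSketch {
  private URI uri;

  PathSketch(String pathString) {
    initialize(null, null, pathString);
  }

  // Before the patch, the resolved child path was wrapped in normalizePath(...)
  // here even though initialize(...) normalizes whatever it receives; the patch
  // drops that outer call.
  PathSketch(PathSketch parent, PathSketch child) {
    URI resolved = parent.uri.resolve(child.uri);
    initialize(resolved.getScheme(), resolved.getAuthority(),
               /* was: normalizePath( */ resolved.getPath() /* ) */);
  }

  private void initialize(String scheme, String authority, String path) {
    // The single place where normalization happens.
    String prefix = (scheme == null ? "" : scheme + "://")
                  + (authority == null ? "" : authority);
    this.uri = URI.create(prefix + normalizePath(path));
  }

  private static String normalizePath(String path) {
    // Stand-in for the real normalization (duplicate-slash removal, etc.).
    while (path.contains("//")) {
      path = path.replace("//", "/");
    }
    return path;
  }
}
{code}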

> Remove duplicate call of Path#normalizePath during initialization.
> --
>
> Key: HADOOP-7736
> URL: https://issues.apache.org/jira/browse/HADOOP-7736
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: HADOOP-7736.patch
>
>
> Found during code reading on HADOOP-6490: there appears to be an unnecessary 
> call of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
> {{initialize(...)}} already normalizes the path string it receives, it's 
> unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7736) Remove duplicate call of Path#normalizePath during initialization.

2011-10-11 Thread Harsh J (Created) (JIRA)
Remove duplicate call of Path#normalizePath during initialization.
--

 Key: HADOOP-7736
 URL: https://issues.apache.org/jira/browse/HADOOP-7736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0


Found during code reading on HADOOP-6490: there appears to be an unnecessary call 
of {{normalizePath(...)}} in the constructor {{Path(Path, Path)}}. Since 
{{initialize(...)}} already normalizes the path string it receives, it's 
unnecessary to normalize the path parameter again in the constructor's call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6490) Path.normalize should use StringUtils.replace in favor of String.replace

2011-10-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125570#comment-13125570
 ] 

Hudson commented on HADOOP-6490:


Integrated in Hadoop-Mapreduce-trunk-Commit #1079 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1079/])
HADOOP-6490. Use StringUtils over String#replace in Path#normalizePath. 
Contributed by Uma Maheswara Rao G.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1182189
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java


> Path.normalize should use StringUtils.replace in favor of String.replace
> 
>
> Key: HADOOP-6490
> URL: https://issues.apache.org/jira/browse/HADOOP-6490
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.1
>Reporter: Zheng Shao
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Fix For: 0.24.0
>
> Attachments: HADOOP-6490.patch
>
>
> In our environment, we are seeing that the JobClient is running out of memory 
> because Path.normalizePath(String) is called several tens of thousands of 
> times, and each time it calls "String.replace" twice.
> java.lang.String.replace compiles a regex to do the job, which is very costly.
> We should use org.apache.commons.lang.StringUtils.replace, which is much 
> faster and consumes almost no extra memory.
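
For context, a hedged sketch of what the swap quoted above amounts to inside {{Path#normalizePath}} (paraphrased, not the actual Hadoop source): {{String#replace(CharSequence, CharSequence)}} goes through the regex machinery on every call, while commons-lang {{StringUtils.replace}} does a plain scan.

{code}
// Hedged sketch, paraphrasing the change rather than quoting Path.java.
import org.apache.commons.lang.StringUtils;

class NormalizeSketch {
  static String normalizePath(String path) {
    // Before: path = path.replace("//", "/");
    //   String#replace(CharSequence, CharSequence) builds a Pattern and Matcher
    //   on every call -- costly when JobClient hits this tens of thousands of
    //   times.
    // After: commons-lang StringUtils.replace scans the string directly.
    path = StringUtils.replace(path, "//", "/");
    path = StringUtils.replace(path, "\\", "/");

    // Trailing-slash trimming and the rest of the real method are omitted here.
    if (path.length() > 1 && path.endsWith("/")) {
      path = path.substring(0, path.length() - 1);
    }
    return path;
  }
}
{code}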

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6490) Path.normalize should use StringUtils.replace in favor of String.replace

2011-10-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125563#comment-13125563
 ] 

Hudson commented on HADOOP-6490:


Integrated in Hadoop-Hdfs-trunk-Commit #1137 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1137/])
HADOOP-6490. Use StringUtils over String#replace in Path#normalizePath. 
Contributed by Uma Maheswara Rao G.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1182189
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java


> Path.normalize should use StringUtils.replace in favor of String.replace
> 
>
> Key: HADOOP-6490
> URL: https://issues.apache.org/jira/browse/HADOOP-6490
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.1
>Reporter: Zheng Shao
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Fix For: 0.24.0
>
> Attachments: HADOOP-6490.patch
>
>
> In our environment, we are seeing that the JobClient is running out of memory 
> because Path.normalizePath(String) is called several tens of thousands of 
> times, and each time it calls "String.replace" twice.
> java.lang.String.replace compiles a regex to do the job, which is very costly.
> We should use org.apache.commons.lang.StringUtils.replace, which is much 
> faster and consumes almost no extra memory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6490) Path.normalize should use StringUtils.replace in favor of String.replace

2011-10-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125561#comment-13125561
 ] 

Hudson commented on HADOOP-6490:


Integrated in Hadoop-Common-trunk-Commit #1059 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1059/])
HADOOP-6490. Use StringUtils over String#replace in Path#normalizePath. 
Contributed by Uma Maheswara Rao G.

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1182189
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java


> Path.normalize should use StringUtils.replace in favor of String.replace
> 
>
> Key: HADOOP-6490
> URL: https://issues.apache.org/jira/browse/HADOOP-6490
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.1
>Reporter: Zheng Shao
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Fix For: 0.24.0
>
> Attachments: HADOOP-6490.patch
>
>
> In our environment, we are seeing that the JobClient is running out of memory 
> because Path.normalizePath(String) is called several tens of thousands of 
> times, and each time it calls "String.replace" twice.
> java.lang.String.replace compiles a regex to do the job, which is very costly.
> We should use org.apache.commons.lang.StringUtils.replace, which is much 
> faster and consumes almost no extra memory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6490) Path.normalize should use StringUtils.replace in favor of String.replace

2011-10-11 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-6490:


 Tags: path
   Resolution: Fixed
Fix Version/s: 0.24.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Uma!

> Path.normalize should use StringUtils.replace in favor of String.replace
> 
>
> Key: HADOOP-6490
> URL: https://issues.apache.org/jira/browse/HADOOP-6490
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.1
>Reporter: Zheng Shao
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Fix For: 0.24.0
>
> Attachments: HADOOP-6490.patch
>
>
> In our environment, we are seeing that the JobClient is running out of memory 
> because Path.normalizePath(String) is called several tens of thousands of 
> times, and each time it calls "String.replace" twice.
> java.lang.String.replace compiles a regex to do the job, which is very costly.
> We should use org.apache.commons.lang.StringUtils.replace, which is much 
> faster and consumes almost no extra memory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-6490) Path.normalize should use StringUtils.replace in favor of String.replace

2011-10-11 Thread Harsh J (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reassigned HADOOP-6490:
---

Assignee: Uma Maheswara Rao G

+1. TestPath passes with all its existing Normalization tests. Pushing.

> Path.normalize should use StringUtils.replace in favor of String.replace
> 
>
> Key: HADOOP-6490
> URL: https://issues.apache.org/jira/browse/HADOOP-6490
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.1
>Reporter: Zheng Shao
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-6490.patch
>
>
> In our environment, we are seeing that the JobClient is running out of memory 
> because Path.normalizePath(String) is called several tens of thousands of 
> times, and each time it calls "String.replace" twice.
> java.lang.String.replace compiles a regex to do the job, which is very costly.
> We should use org.apache.commons.lang.StringUtils.replace, which is much 
> faster and consumes almost no extra memory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125510#comment-13125510
 ] 

Hudson commented on HADOOP-7642:


Integrated in Hadoop-Mapreduce-trunk-Commit #1078 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1078/])
HADOOP-7642. create hadoop-dist module where TAR stitching would happen. 
Contributed by Thomas White.

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1182151
Files : 
* 
/hadoop/common/trunk/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-dist
* /hadoop/common/trunk/hadoop-dist/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/INSTALL
* /hadoop/common/trunk/hadoop-mapreduce-project/assembly/all.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common&hdfs&mmr, this should 
> be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125505#comment-13125505
 ] 

Hudson commented on HADOOP-7642:


Integrated in Hadoop-Hdfs-trunk-Commit #1136 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1136/])
HADOOP-7642. create hadoop-dist module where TAR stitching would happen. 
Contributed by Thomas White.

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1182151
Files : 
* 
/hadoop/common/trunk/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-dist
* /hadoop/common/trunk/hadoop-dist/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/INSTALL
* /hadoop/common/trunk/hadoop-mapreduce-project/assembly/all.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common&hdfs&mmr, this should 
> be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125504#comment-13125504
 ] 

Todd Lipcon commented on HADOOP-7734:
-

Is it possible to backport "metrics2.1" in such a way that "metrics2.0" isn't 
broken in the process? I don't know the APIs well enough.

> metrics2 class names inconsistent between trunk and 20x
> ---
>
> Key: HADOOP-7734
> URL: https://issues.apache.org/jira/browse/HADOOP-7734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> The new metrics2 framework was backported into the 20x branch, but the class 
> names differ between the two branches. So, if anyone were to build against 
> the metrics2 API in 20x, it would break when they upgraded to 23. We should 
> reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125503#comment-13125503
 ] 

Hudson commented on HADOOP-7642:


Integrated in Hadoop-Common-trunk-Commit #1058 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1058/])
HADOOP-7642. create hadoop-dist module where TAR stitching would happen. 
Contributed by Thomas White.

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1182151
Files : 
* 
/hadoop/common/trunk/hadoop-assemblies/src/main/resources/assemblies/hadoop-mapreduce-dist.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-dist
* /hadoop/common/trunk/hadoop-dist/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/INSTALL
* /hadoop/common/trunk/hadoop-mapreduce-project/assembly/all.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common&hdfs&mmr, this should 
> be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Alejandro Abdelnur (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7642:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed. Thanks Tom.

In a couple of days this patch will be committed to the 0.23 branch.

> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common&hdfs&mmr, this should 
> be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Moved] (HADOOP-7735) webhdfs returns two content-type headers

2011-10-11 Thread Tsz Wo (Nicholas), SZE (Moved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE moved HDFS-2423 to HADOOP-7735:
--

Affects Version/s: 0.20.205.0  (was: 0.20.205.0)
  Key: HADOOP-7735  (was: HDFS-2423)
  Project: Hadoop Common  (was: Hadoop HDFS)

> webhdfs returns two content-type headers
> 
>
> Key: HADOOP-7735
> URL: https://issues.apache.org/jira/browse/HADOOP-7735
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>
> $ curl -i "http://localhost:50070/webhdfs/path?op=GETFILESTATUS";
> HTTP/1.1 200 OK
> Content-Type: text/html; charset=utf-8
> Expires: Thu, 01-Jan-1970 00:00:00 GMT
> 
> Content-Type: application/json
> Transfer-Encoding: chunked
> Server: Jetty(6.1.26)
> It should return only one Content-Type header: application/json

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7735) webhdfs returns two content-type headers

2011-10-11 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125497#comment-13125497
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-7735:


It is because HttpServer.QuotingInputFilter sets the content type for all requests.

Moving this from HDFS to Common.

> webhdfs returns two content-type headers
> 
>
> Key: HADOOP-7735
> URL: https://issues.apache.org/jira/browse/HADOOP-7735
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>
> $ curl -i "http://localhost:50070/webhdfs/path?op=GETFILESTATUS";
> HTTP/1.1 200 OK
> Content-Type: text/html; charset=utf-8
> Expires: Thu, 01-Jan-1970 00:00:00 GMT
> 
> Content-Type: application/json
> Transfer-Encoding: chunked
> Server: Jetty(6.1.26)
> It should return only one Content-Type header: application/json

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Luke Lu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125473#comment-13125473
 ] 

Luke Lu commented on HADOOP-7734:
-

bq. HBase is stuck using metrics1 until all of our users move to 0.23, since we 
need to support both 20 and 23, and the metrics2 API is incompatible.

OK, you can add metrics2.1 support in 0.20x alongside the older metrics2.0 API, 
which would be deprecated. HBase can then use the nice new metrics2.1 API for both 
the 0.20x and 0.23+ lines. I prefer this to the other way around, to keep the 
newer branches clean :)


> metrics2 class names inconsistent between trunk and 20x
> ---
>
> Key: HADOOP-7734
> URL: https://issues.apache.org/jira/browse/HADOOP-7734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> The new metrics2 framework was backported into the 20x branch, but the class 
> names differ between the two branches. So, if anyone were to build against 
> the metrics2 API in 20x, it would break when they upgraded to 23. We should 
> reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7603) Set default hdfs, mapred uid, and hadoop group gid for RPM packages

2011-10-11 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7603:
---

Release Note: Set hdfs uid, mapred uid, and hadoop gid to fixed numbers 
(201, 202, and 123, respectively).  (was: Set hdfs, mapred uid, and hadoop uid 
to fixed numbers. (Eric Yang))

> Set default hdfs, mapred uid, and hadoop group gid for RPM packages
> ---
>
> Key: HADOOP-7603
> URL: https://issues.apache.org/jira/browse/HADOOP-7603
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.23.0
> Environment: Java, Redhat EL, Ubuntu
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7603-trunk.patch, HADOOP-7603.patch
>
>
> The Hadoop RPM package creates hdfs and mapred users and a hadoop group for 
> automatically setting up the pid and log directories with proper 
> permissions.  The default headless users should have fixed uid and gid 
> numbers defined.
> Searching through the standard uids and gids on both the Redhat and Debian 
> distros, it looks like:
> {noformat}
> uid: 201 for hdfs
> uid: 202 for mapred
> gid: 49 for hadoop
> {noformat}
> would be free for use.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7557) Make IPC header be extensible

2011-10-11 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125464#comment-13125464
 ] 

Hadoop QA commented on HADOOP-7557:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498683/HADOOP-7557.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/287//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/287//console

This message is automatically generated.

> Make  IPC  header be extensible
> ---
>
> Key: HADOOP-7557
> URL: https://issues.apache.org/jira/browse/HADOOP-7557
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
> Attachments: HADOOP-7557.patch, IpcHeader.proto, ipcHeader1.patch, 
> ipcHeader2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7557) Make IPC header be extensible

2011-10-11 Thread Doug Cutting (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doug Cutting updated HADOOP-7557:
-

Attachment: HADOOP-7557.patch

Your patch does more than just make the header extensible.  It also adds new 
stuff into the header, etc.

Here's a considerably simpler patch that only changes the wire format of the 
header to something that fields can easily be added to and removed from.

> Make  IPC  header be extensible
> ---
>
> Key: HADOOP-7557
> URL: https://issues.apache.org/jira/browse/HADOOP-7557
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
> Attachments: HADOOP-7557.patch, IpcHeader.proto, ipcHeader1.patch, 
> ipcHeader2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125431#comment-13125431
 ] 

Todd Lipcon commented on HADOOP-7734:
-

bq. The only two API changes that affect plugins are:

It's not just sink plugins that are affected, but any applications that want to 
build against metrics. For example, HBase is stuck using metrics1 until all of 
our users move to 0.23, since we need to support both 20 and 23, and the 
metrics2 API is incompatible.

bq. 203-205 and some previous Y releases
Serves them right for developing a major new subsystem on 20 instead of trunk :P

> metrics2 class names inconsistent between trunk and 20x
> ---
>
> Key: HADOOP-7734
> URL: https://issues.apache.org/jira/browse/HADOOP-7734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> The new metrics2 framework was backported into the 20x branch, but the class 
> names differ between the two branches. So, if anyone were to build against 
> the metrics2 API in 20x, it would break when they upgraded to 23. We should 
> reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Luke Lu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125413#comment-13125413
 ] 

Luke Lu commented on HADOOP-7734:
-

bq. can't we backport the improvements into 20x as well? What's the point of 
having them in 20x if no one can adopt them without breaking in the next 
version?

Well, backporting to 20x, say 206, would break existing plugins in production for 
previous 20x releases (203-205 and some previous Y releases). 0.20 to 0.23 is 
considered a major release change, and people expect things (especially 
non-end-user-facing stuff like metrics) to break a little :) 0.23 already requires 
major deployment/config changes, of which metrics is only a small part.

> metrics2 class names inconsistent between trunk and 20x
> ---
>
> Key: HADOOP-7734
> URL: https://issues.apache.org/jira/browse/HADOOP-7734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> The new metrics2 framework was backported into the 20x branch, but the class 
> names differ between the two branches. So, if anyone were to build against 
> the metrics2 API in 20x, it would break when they upgraded to 23. We should 
> reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Luke Lu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125409#comment-13125409
 ] 

Luke Lu commented on HADOOP-7734:
-

bq. Some examples

For new code (0.23+), the new style (with annotations) API is recommended.

The only two API changes that affect plugins are: 
# Metric -> AbstractMetric, because I decided to use Metric for the annotation.
# MetricsVisitor method signatures because I didn't understand Java generics 
very well :)
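
To make the rename concrete, here is a hedged sketch of a sink plugin written against the 0.23/trunk class names; the packages and method signatures below are illustrative of the metrics2 API rather than authoritative for either branch.

{code}
// Hedged sketch of a metrics2 sink against the 0.23/trunk class names.
// Per the discussion above, the 0.20x backport used different names
// (e.g. Metric rather than AbstractMetric, MetricsBuilder rather than
// MetricsCollector), which is why a sink built against 20x breaks on 23.
import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;   // was Metric in the 0.20x backport
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

public class LoggingSink implements MetricsSink {
  @Override
  public void init(SubsetConfiguration conf) {
    // No configuration needed for this sketch.
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    // record.metrics() yields AbstractMetric instances in 0.23+; the 0.20x
    // backport iterated the older Metric type instead.
    for (AbstractMetric metric : record.metrics()) {
      System.out.println(record.name() + "." + metric.name() + " = " + metric.value());
    }
  }

  @Override
  public void flush() { }
}
{code}

If wired up for real, a sink like this would be registered through the usual hadoop-metrics2.properties sink configuration; the exact property keys are not shown here.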



> metrics2 class names inconsistent between trunk and 20x
> ---
>
> Key: HADOOP-7734
> URL: https://issues.apache.org/jira/browse/HADOOP-7734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> The new metrics2 framework was backported into the 20x branch, but the class 
> names differ between the two branches. So, if anyone were to build against 
> the metrics2 API in 20x, it would break when they upgraded to 23. We should 
> reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7730) Allow TestCLI to be run against a cluster

2011-10-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125407#comment-13125407
 ] 

Hudson commented on HADOOP-7730:


Integrated in Hadoop-Common-22-branch #91 (See 
[https://builds.apache.org/job/Hadoop-Common-22-branch/91/])
HADOOP-7730. Allow TestCLI to be run against a cluster. Contributed by Tom 
White, Konstantin Boudnik.

cos : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1182018
Files : 
* /hadoop/common/branches/branch-0.22/common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.22/common/src/test/core/org/apache/hadoop/cli/CLITestHelper.java
* 
/hadoop/common/branches/branch-0.22/common/src/test/core/org/apache/hadoop/cli/util/CommandExecutor.java


> Allow TestCLI to be run against a cluster
> -
>
> Key: HADOOP-7730
> URL: https://issues.apache.org/jira/browse/HADOOP-7730
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HADOOP-7730.patch, HADOOP-7730.trunk.patch, 
> HADOOP-7730.trunk.patch
>
>
> Use the same CLI test to test cluster bits (see HDFS-1762 for more info)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125394#comment-13125394
 ] 

Todd Lipcon commented on HADOOP-7734:
-

It's unfortunate that such a new API in 20x is already incompatible with 23... 
Can't we backport the improvements into 20x as well? What's the point of having 
them in 20x if no one can adopt them without breaking in the next version?

> metrics2 class names inconsistent between trunk and 20x
> ---
>
> Key: HADOOP-7734
> URL: https://issues.apache.org/jira/browse/HADOOP-7734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> The new metrics2 framework was backported into the 20x branch, but the class 
> names differ between the two branches. So, if anyone were to build against 
> the metrics2 API in 20x, it would break when they upgraded to 23. We should 
> reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Luke Lu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125387#comment-13125387
 ] 

Luke Lu commented on HADOOP-7734:
-

Actually, the metrics2 framework was first checked into the 20x branch, and then 
some of the API evolved for good reasons in trunk and then 0.23; those changes were 
explicitly marked as incompatible. I personally don't want to support 0.20x-style 
plugins and prefer giving a more explicit upgrade message, which is reasonable. 
Supporting both would make the 0.23/trunk code very ugly.

> metrics2 class names inconsistent between trunk and 20x
> ---
>
> Key: HADOOP-7734
> URL: https://issues.apache.org/jira/browse/HADOOP-7734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> The new metrics2 framework was backported into the 20x branch, but the class 
> names differ between the two branches. So, if anyone were to build against 
> the metrics2 API in 20x, it would break when they upgraded to 23. We should 
> reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125386#comment-13125386
 ] 

Todd Lipcon commented on HADOOP-7734:
-

Some examples:
- JvmMetricsSource vs JvmMetrics
- MetricMutableStat* vs MutableStat*
- MetricsBuilder vs MetricsCollector
maybe some others

> metrics2 class names inconsistent between trunk and 20x
> ---
>
> Key: HADOOP-7734
> URL: https://issues.apache.org/jira/browse/HADOOP-7734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> The new metrics2 framework was backported into the 20x branch, but the class 
> names differ between the two branches. So, if anyone were to build against 
> the metrics2 API in 20x, it would break when they upgraded to 23. We should 
> reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Alejandro Abdelnur (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125382#comment-13125382
 ] 

Alejandro Abdelnur commented on HADOOP-7642:


+1. Tested the patch on Mac (no native) and on Ubuntu (both native and non-native); 
the generated TAR seems to have all the pieces.



> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common&hdfs&mmr, this should 
> be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7734) metrics2 class names inconsistent between trunk and 20x

2011-10-11 Thread Todd Lipcon (Created) (JIRA)
metrics2 class names inconsistent between trunk and 20x
---

 Key: HADOOP-7734
 URL: https://issues.apache.org/jira/browse/HADOOP-7734
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.20.205.0, 0.23.0
Reporter: Todd Lipcon
Priority: Critical


The new metrics2 framework was backported into the 20x branch, but the class 
names differ between the two branches. So, if anyone were to build against the 
metrics2 API in 20x, it would break when they upgraded to 23. We should 
reconcile the two so that they are API-compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125361#comment-13125361
 ] 

Aaron T. Myers commented on HADOOP-7642:


I didn't actually look at the code, but I did test the functionality. I ran 
{{`mvn -Pnative -Pdist -Dtar -DskipTests clean package'}} and set 
{{HADOOP_HOME}} to "{{hadoop-dist/target/hadoop-0.24.0-SNAPSHOT}}". I was then 
able to start up an NN and DN using {{`hdfs node'}} and successfully 
run {{`hadoop fs -ls /'}}.

So, the change seems functionally good to me, but some maven guru (wink, Tucu, 
wink) should probably review the code.

> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common, hdfs and mapreduce, 
> this should be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125341#comment-13125341
 ] 

Hadoop QA commented on HADOOP-7642:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498657/HADOOP-7642.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/286//console

This message is automatically generated.

> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common, hdfs and mapreduce, 
> this should be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Tom White (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7642:
--

Attachment: HADOOP-7642.patch

> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common, hdfs and mapreduce, 
> this should be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125334#comment-13125334
 ] 

Hadoop QA commented on HADOOP-7642:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12498655/HADOOP-7642.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/285//console

This message is automatically generated.

> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common, hdfs and mapreduce, 
> this should be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7727) fix some typos and tabs in CHANGES.TXT

2011-10-11 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125323#comment-13125323
 ] 

Suresh Srinivas commented on HADOOP-7727:
-

Steve, if this is just a CHANGES.txt change, there is no need for a jira.

> fix some typos and tabs in CHANGES.TXT
> --
>
> Key: HADOOP-7727
> URL: https://issues.apache.org/jira/browse/HADOOP-7727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.24.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: minor-changes-txt-typos.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> This is a minor edit to CHANGES.txt; I'm giving it a JIRA issue so the release 
> notes are complete (though I'm not going to add it to the CHANGES.txt file, as 
> that would be too recursive). There are a couple of tabs and a mis-spelling of 
> the word "exception" in the trunk CHANGES.txt.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7642) create hadoop-dist module where TAR stitching would happen

2011-10-11 Thread Tom White (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7642:
--

Attachment: HADOOP-7642.patch

There was an issue with environment variable interpolation in some cases, which 
the current patch fixes.

> create hadoop-dist module where TAR stitching would happen
> --
>
> Key: HADOOP-7642
> URL: https://issues.apache.org/jira/browse/HADOOP-7642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0, 0.24.0
>
> Attachments: HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, 
> HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch, HADOOP-7642.patch
>
>
> Instead of having a post-build script that stitches common, hdfs and mapreduce, 
> this should be done as part of the build when running 'mvn package -Pdist -Dtar'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7730) Allow TestCLI to be run against a cluster

2011-10-11 Thread Konstantin Boudnik (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-7730:
---

Attachment: HADOOP-7730.trunk.patch

Wrong base for the trunk patch

> Allow TestCLI to be run against a cluster
> -
>
> Key: HADOOP-7730
> URL: https://issues.apache.org/jira/browse/HADOOP-7730
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HADOOP-7730.patch, HADOOP-7730.trunk.patch, 
> HADOOP-7730.trunk.patch
>
>
> Use the same CLI test to test cluster bits (see HDFS-1762 for more info)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7730) Allow TestCLI to be run against a cluster

2011-10-11 Thread Konstantin Boudnik (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-7730:
---

Attachment: HADOOP-7730.trunk.patch

This patch needs to be reworked a little bit for the trunk

> Allow TestCLI to be run against a cluster
> -
>
> Key: HADOOP-7730
> URL: https://issues.apache.org/jira/browse/HADOOP-7730
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HADOOP-7730.patch, HADOOP-7730.trunk.patch
>
>
> Use the same CLI test to test cluster bits (see HDFS-1762 for more info)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-7730) Allow TestCLI to be run against a cluster

2011-10-11 Thread Konstantin Boudnik (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-7730.


   Resolution: Fixed
Fix Version/s: 0.22.0
 Hadoop Flags: Reviewed

I have just committed it.

> Allow TestCLI to be run against a cluster
> -
>
> Key: HADOOP-7730
> URL: https://issues.apache.org/jira/browse/HADOOP-7730
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HADOOP-7730.patch
>
>
> Use the same CLI test to test cluster bits (see HDFS-1762 for more info)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7729) Send back valid HTTP response if user hits IPC port with HTTP GET

2011-10-11 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125245#comment-13125245
 ] 

Suresh Srinivas commented on HADOOP-7729:
-

+1 for the patch.

> Send back valid HTTP response if user hits IPC port with HTTP GET
> -
>
> Key: HADOOP-7729
> URL: https://issues.apache.org/jira/browse/HADOOP-7729
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-7729.txt
>
>
> Often, I've seen users get confused between the IPC ports and HTTP ports for 
> a daemon. It would be easy for us to detect when an HTTP GET request hits an 
> IPC port, and instead of sending back garbage, we can send back a valid HTTP 
> response explaining their mistake.
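
As a rough illustration of the idea in the quoted description (this is a sketch, 
not the hadoop-7729.txt patch attached to the issue), a server could peek at the 
first bytes read from a connection on the IPC port and, if they spell out an HTTP 
GET, answer with a well-formed HTTP response before closing the connection:

{code:java}
/**
 * Illustrative sketch only -- not the hadoop-7729.txt patch. Shows how an IPC
 * server could recognise an HTTP GET on its port and reply with a valid HTTP
 * response instead of garbage.
 */
public class IpcHttpGetSketch {

  /** True if the first bytes read from the connection look like "GET ". */
  static boolean looksLikeHttpGet(byte[] firstBytes) {
    byte[] get = {'G', 'E', 'T', ' '};
    if (firstBytes.length < get.length) {
      return false;
    }
    for (int i = 0; i < get.length; i++) {
      if (firstBytes[i] != get[i]) {
        return false;
      }
    }
    return true;
  }

  /** A minimal, well-formed HTTP response explaining the mistake. */
  static String httpErrorResponse() {
    String body = "It looks like you are making an HTTP request to an IPC port. "
        + "Please use the daemon's HTTP port instead.\r\n";
    return "HTTP/1.0 404 Not Found\r\n"
        + "Content-Type: text/plain\r\n"
        + "Content-Length: " + body.length() + "\r\n"
        + "\r\n"
        + body;
  }

  public static void main(String[] args) {
    byte[] probe = "GET / HTTP/1.1\r\n".getBytes();
    if (looksLikeHttpGet(probe)) {
      System.out.print(httpErrorResponse());
    }
  }
}
{code}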

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (HADOOP-7733) Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false

2011-10-11 Thread Suresh Srinivas (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125215#comment-13125215
 ] 

Suresh Srinivas edited comment on HADOOP-7733 at 10/11/11 6:11 PM:
---

Rajit, when creating a bug, can you please keep the description short and add 
the long description as a comment later?

When client and server both use use_ip, does this problem happen? If it does 
not, we should fix this in 20-security to be picked up by a later release.

  was (Author: sureshms):
Rajith, when creating a bug, can you please keep the description short and 
add long description as a comment later.

When client and server both use use_ip, does this problem happen. If it does 
not, we should fix this in 20-security to be picked up by a later release.
  
> Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false
> --
>
> Key: HADOOP-7733
> URL: https://issues.apache.org/jira/browse/HADOOP-7733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.20.205.0
>Reporter: Rajit Saha
>Assignee: Daryn Sharp
>
> I have added the following property to core-site.xml on all the nodes in the 
> cluster and restarted:
> <property>
> <name>hadoop.security.token.service.use_ip</name>
> <value>false</value>
> <description>desc</description>
> </property>
> 
> Then I ran randomwriter and distcp jobs; they are all failing: 
> $HADOOP_HOME/bin/hadoop --config $HADOOP_CONFIG_DIR jar 
> $HADOOP_HOME/hadoop-examples.jar randomwriter 
> -Dtest.randomwrite.bytes_per_map=256000 input_1318325953
> Running 140 maps.
> Job started: Tue Oct 11 09:48:09 UTC 2011
> 11/10/11 09:48:09 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 14 
> for <user> on <NN IP>:8020
> 11/10/11 09:48:09 INFO security.TokenCache: Got dt for
> hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001;uri=<NN IP>:8020;t.service=<NN IP>:8020
> 11/10/11 09:48:09 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
> java.io.IOException: Call to
> /:8020 failed on local exception: 
> java.io.IOException:
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided
> (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> Caused by: java.io.IOException: Call to / IP>:8020 failed on local exception:
> java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid
> credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
> at org.apache.hadoop.ipc.Client.call(Client.java:1071)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> at $Proxy7.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
> at 
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:401)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:399)
> at 

[jira] [Commented] (HADOOP-7733) Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false

2011-10-11 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125215#comment-13125215
 ] 

Suresh Srinivas commented on HADOOP-7733:
-

Rajith, when creating a bug, can you please keep the description short and add 
the long description as a comment later?

When client and server both use use_ip, does this problem happen? If it does 
not, we should fix this in 20-security to be picked up by a later release.

> Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false
> --
>
> Key: HADOOP-7733
> URL: https://issues.apache.org/jira/browse/HADOOP-7733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.20.205.0
>Reporter: Rajit Saha
>Assignee: Daryn Sharp
>
> I have added the following property to core-site.xml on all the nodes in the 
> cluster and restarted:
> <property>
> <name>hadoop.security.token.service.use_ip</name>
> <value>false</value>
> <description>desc</description>
> </property>
> 
> Then I ran randomwriter and distcp jobs; they are all failing: 
> $HADOOP_HOME/bin/hadoop --config $HADOOP_CONFIG_DIR jar 
> $HADOOP_HOME/hadoop-examples.jar randomwriter 
> -Dtest.randomwrite.bytes_per_map=256000 input_1318325953
> Running 140 maps.
> Job started: Tue Oct 11 09:48:09 UTC 2011
> 11/10/11 09:48:09 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 14 
> for <user> on <NN IP>:8020
> 11/10/11 09:48:09 INFO security.TokenCache: Got dt for
> hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001;uri=<NN IP>:8020;t.service=<NN IP>:8020
> 11/10/11 09:48:09 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
> java.io.IOException: Call to
> /:8020 failed on local exception: 
> java.io.IOException:
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided
> (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> Caused by: java.io.IOException: Call to / IP>:8020 failed on local exception:
> java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid
> credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
> at org.apache.hadoop.ipc.Client.call(Client.java:1071)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> at $Proxy7.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
> at 
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:401)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:399)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at 
> org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:399)
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.

[jira] [Commented] (HADOOP-7733) Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false

2011-10-11 Thread Daryn Sharp (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13125210#comment-13125210
 ] 

Daryn Sharp commented on HADOOP-7733:
-

This was found to be the result of mismatched use_ip settings on the client and 
the JT.  I'm investigating whether it's feasible to detect a misconfig or for 
the system to adapt the setting to what's in the job conf.
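
One way such a mismatch could be surfaced, sketched here under the assumption that 
both the daemon's configuration and the submitted job's configuration are available 
at submission time; {{UseIpConsistencyCheck}} is a hypothetical helper, not part of 
any patch on this issue:

{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Hypothetical helper (not from a patch on this issue): warns when the
 * token-service use_ip setting seen by the daemon differs from the one in a
 * submitted job's configuration.
 */
public class UseIpConsistencyCheck {
  // Key name as used in core-site.xml in the report below.
  static final String USE_IP_KEY = "hadoop.security.token.service.use_ip";
  // Assumed default; verify against the release you are running.
  static final boolean USE_IP_DEFAULT = true;

  static boolean consistent(Configuration daemonConf, Configuration jobConf) {
    boolean daemonUseIp = daemonConf.getBoolean(USE_IP_KEY, USE_IP_DEFAULT);
    boolean jobUseIp = jobConf.getBoolean(USE_IP_KEY, USE_IP_DEFAULT);
    if (daemonUseIp != jobUseIp) {
      System.err.println("WARN: " + USE_IP_KEY + " mismatch (daemon=" + daemonUseIp
          + ", job=" + jobUseIp + "); delegation token service names will not match.");
    }
    return daemonUseIp == jobUseIp;
  }
}
{code}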

> Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false
> --
>
> Key: HADOOP-7733
> URL: https://issues.apache.org/jira/browse/HADOOP-7733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.20.205.0
>Reporter: Rajit Saha
>Assignee: Daryn Sharp
>
> I have added the following property to core-site.xml on all the nodes in the 
> cluster and restarted:
> <property>
> <name>hadoop.security.token.service.use_ip</name>
> <value>false</value>
> <description>desc</description>
> </property>
> 
> Then I ran randomwriter and distcp jobs; they are all failing: 
> $HADOOP_HOME/bin/hadoop --config $HADOOP_CONFIG_DIR jar 
> $HADOOP_HOME/hadoop-examples.jar randomwriter 
> -Dtest.randomwrite.bytes_per_map=256000 input_1318325953
> Running 140 maps.
> Job started: Tue Oct 11 09:48:09 UTC 2011
> 11/10/11 09:48:09 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 14 
> for <user> on <NN IP>:8020
> 11/10/11 09:48:09 INFO security.TokenCache: Got dt for
> hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001;uri=<NN IP>:8020;t.service=<NN IP>:8020
> 11/10/11 09:48:09 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
> java.io.IOException: Call to
> /:8020 failed on local exception: 
> java.io.IOException:
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided
> (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> Caused by: java.io.IOException: Call to / IP>:8020 failed on local exception:
> java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid
> credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
> at org.apache.hadoop.ipc.Client.call(Client.java:1071)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> at $Proxy7.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
> at 
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:401)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:399)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at 
> org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:399)
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3941)
> ... 11 more
> Caused by: java.io.IOExce

[jira] [Assigned] (HADOOP-7733) Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false

2011-10-11 Thread Daryn Sharp (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp reassigned HADOOP-7733:
---

Assignee: Daryn Sharp

> Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false
> --
>
> Key: HADOOP-7733
> URL: https://issues.apache.org/jira/browse/HADOOP-7733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.20.205.0
>Reporter: Rajit Saha
>Assignee: Daryn Sharp
>
> I have added the following property to core-site.xml on all the nodes in the 
> cluster and restarted:
> <property>
> <name>hadoop.security.token.service.use_ip</name>
> <value>false</value>
> <description>desc</description>
> </property>
> 
> Then I ran randomwriter and distcp jobs; they are all failing: 
> $HADOOP_HOME/bin/hadoop --config $HADOOP_CONFIG_DIR jar 
> $HADOOP_HOME/hadoop-examples.jar randomwriter 
> -Dtest.randomwrite.bytes_per_map=256000 input_1318325953
> Running 140 maps.
> Job started: Tue Oct 11 09:48:09 UTC 2011
> 11/10/11 09:48:09 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 14 
> for <user> on <NN IP>:8020
> 11/10/11 09:48:09 INFO security.TokenCache: Got dt for
> hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001;uri=<NN IP>:8020;t.service=<NN IP>:8020
> 11/10/11 09:48:09 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
> java.io.IOException: Call to
> /:8020 failed on local exception: 
> java.io.IOException:
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided
> (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> Caused by: java.io.IOException: Call to / IP>:8020 failed on local exception:
> java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid
> credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
> at org.apache.hadoop.ipc.Client.call(Client.java:1071)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> at $Proxy7.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
> at 
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:401)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:399)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at 
> org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:399)
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3941)
> ... 11 more
> Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS 
> initiate failed [Caused by GSSException: No
> valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:539)
> 

[jira] [Created] (HADOOP-7733) Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false

2011-10-11 Thread Rajit Saha (Created) (JIRA)
Mapreduce jobs are failing with hadoop.security.token.service.use_ip=false
--

 Key: HADOOP-7733
 URL: https://issues.apache.org/jira/browse/HADOOP-7733
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.20.205.0
Reporter: Rajit Saha


I have added the following property to core-site.xml on all the nodes in the 
cluster and restarted:

<property>
<name>hadoop.security.token.service.use_ip</name>
<value>false</value>
<description>desc</description>
</property>

Then I ran randomwriter and distcp jobs; they are all failing: 
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONFIG_DIR jar 
$HADOOP_HOME/hadoop-examples.jar randomwriter 
-Dtest.randomwrite.bytes_per_map=256000 input_1318325953
Running 140 maps.
Job started: Tue Oct 11 09:48:09 UTC 2011
11/10/11 09:48:09 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 14 
for <user> on <NN IP>:8020
11/10/11 09:48:09 INFO security.TokenCache: Got dt for
hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001;uri=<NN IP>:8020;t.service=<NN IP>:8020
11/10/11 09:48:09 INFO mapred.JobClient: Cleaning up the staging area
hdfs://<NN Hostname>/user/<user>/.staging/job_201110110946_0001
org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
java.io.IOException: Call to
/:8020 failed on local exception: 
java.io.IOException:
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided
(Mechanism level: Failed to find any Kerberos tgt)]
at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Caused by: java.io.IOException: Call to /:8020 
failed on local exception:
java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
[Caused by GSSException: No valid
credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
at org.apache.hadoop.ipc.Client.call(Client.java:1071)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy7.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at 
org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:401)
at org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:399)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:399)
at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3941)
... 11 more
Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No
valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:539)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at 
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:484)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:586)
at org.apache.hadoop.ipc.Client$Connection.acc