[jira] [Updated] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9397:
--

Attachment: HADOOP-9397.1.patch

Here is a patch that switches to using gzip -f.  I tested this successfully on 
Mac and Windows.

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-9397:
-

Assignee: Chris Nauroth

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9397:
--

Status: Patch Available  (was: Open)

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600874#comment-13600874
 ] 

Hadoop QA commented on HADOOP-9397:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573480/HADOOP-9397.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-dist.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2321//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2321//console

This message is automatically generated.

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600877#comment-13600877
 ] 

Chris Nauroth commented on HADOOP-9397:
---

No tests, because it's a build script change only.

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9044) add FindClass main class to provide classpath checking of installations

2013-03-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600885#comment-13600885
 ] 

Suresh Srinivas commented on HADOOP-9044:
-

[~stev...@iseran.com] I will review the patch in a day or two. But this seems 
like a nice tool to have.

 add FindClass main class to provide classpath checking of installations
 ---

 Key: HADOOP-9044
 URL: https://issues.apache.org/jira/browse/HADOOP-9044
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 1.1.0, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9044.patch, HADOOP-9044.patch


 It's useful in postflight checking of a hadoop installation to verify that 
 classes load, especially code with external JARs and native codecs. 
 An entry point designed to load a named class and create an instance of that 
 class can do this - and be invoked from any shell script or tool that does the 
 installation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7101) UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS context

2013-03-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600908#comment-13600908
 ] 

Suresh Srinivas commented on HADOOP-7101:
-

bq. Let's continue the discussion there, instead of on this CLOSED issue.
Sure. Generally, if the port is straightforward, this issue could also be reopened 
to attach the branch-1 patch.

 UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS 
 context
 

 Key: HADOOP-7101
 URL: https://issues.apache.org/jira/browse/HADOOP-7101
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hadoop-7101.txt


 If a Hadoop client is run from inside a container like Tomcat, and the 
 current AccessControlContext has a Subject associated with it that is not 
 created by Hadoop, then UserGroupInformation.getCurrentUser() will throw 
 NoSuchElementException, since it assumes that any Subject will have a hadoop 
 User principal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9280) HADOOP-7101 was never merged from 0.20.x to the 1.x branch

2013-03-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600914#comment-13600914
 ] 

Suresh Srinivas commented on HADOOP-9280:
-

[~cib...@e-ma.net] Can you please point to where in the 0.20 branch this code has 
been merged?

 HADOOP-7101 was never merged from 0.20.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.20 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.20 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9280) HADOOP-7101 was never merged from 0.20.x to the 1.x branch

2013-03-13 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9280:


Assignee: Suresh Srinivas

 HADOOP-7101 was never merged from 0.20.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Assignee: Suresh Srinivas
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.20 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.20 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9280) HADOOP-7101 was never merged from 0.20.x to the 1.x branch

2013-03-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600915#comment-13600915
 ] 

Suresh Srinivas commented on HADOOP-9280:
-

BTW, the port of HADOOP-7101 was straightforward. I am going to just merge 
the change from that issue to branch-1 as part of that jira.

 HADOOP-7101 was never merged from 0.20.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Assignee: Suresh Srinivas
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.20 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.20 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7101) UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS context

2013-03-13 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7101:


Attachment: hadoop-7101.branch-1.patch

Here is a branch-1 patch for this issue.
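
For context, a minimal sketch of the failure mode described in this issue - code that assumes every JAAS Subject carries a Hadoop User principal and calls {{iterator().next()}} on an empty set. This is an illustration under that assumption, not the patch or the actual UserGroupInformation code:

{code}
import java.util.Set;
import javax.security.auth.Subject;

public class PrincipalLookupSketch {
  // Stand-in for Hadoop's internal User principal type (hypothetical here).
  public interface User extends java.security.Principal {}

  static User firstUser(Subject subject) {
    Set<User> users = subject.getPrincipals(User.class);
    // A Subject created by a non-Hadoop login (e.g. Tomcat's container login)
    // has no User principal, so next() throws NoSuchElementException.
    return users.iterator().next();
  }

  public static void main(String[] args) {
    firstUser(new Subject());   // throws NoSuchElementException
  }
}
{code}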

 UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS 
 context
 

 Key: HADOOP-7101
 URL: https://issues.apache.org/jira/browse/HADOOP-7101
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hadoop-7101.branch-1.patch, hadoop-7101.txt


 If a Hadoop client is run from inside a container like Tomcat, and the 
 current AccessControlContext has a Subject associated with it that is not 
 created by Hadoop, then UserGroupInformation.getCurrentUser() will throw 
 NoSuchElementException, since it assumes that any Subject will have a hadoop 
 User principal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7101) UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS context

2013-03-13 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600928#comment-13600928
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-7101:


+1 the branch-1 patch looks good.

 UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS 
 context
 

 Key: HADOOP-7101
 URL: https://issues.apache.org/jira/browse/HADOOP-7101
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hadoop-7101.branch-1.patch, hadoop-7101.txt


 If a Hadoop client is run from inside a container like Tomcat, and the 
 current AccessControlContext has a Subject associated with it that is not 
 created by Hadoop, then UserGroupInformation.getCurrentUser() will throw 
 NoSuchElementException, since it assumes that any Subject will have a hadoop 
 User principal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9387) TestDFVariations fails on Windows after the merge

2013-03-13 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-9387:
---

Attachment: HADOOP-9387.trunk.2.patch

Attaching the new patch.

A few notes:
 - Addressed the problem around DF#getFileSystem() on Windows, added a test 
case for the scenario
 - Since I already touched DF.java, I used the opportunity to clean up the unused 
code around OS type. Please comment if you think it would be more appropriate to have 
this fixed via a separate Jira (it's a minor change, so it should generally be 
fine).

 TestDFVariations fails on Windows after the merge
 -

 Key: HADOOP-9387
 URL: https://issues.apache.org/jira/browse/HADOOP-9387
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-9387.trunk.2.patch, HADOOP-9387.trunk.patch


 Test fails with the following errors:
 {code}
 Running org.apache.hadoop.fs.TestDFVariations
 Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.186 sec <<< FAILURE!
 testOSParsing(org.apache.hadoop.fs.TestDFVariations)  Time elapsed: 109 sec <<< ERROR!
 java.io.IOException: Fewer lines of output than expected
 at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
 at org.apache.hadoop.fs.DF.getMount(DF.java:150)
 at 
 org.apache.hadoop.fs.TestDFVariations.testOSParsing(TestDFVariations.java:59)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 testGetMountCurrentDirectory(org.apache.hadoop.fs.TestDFVariations)  Time elapsed: 1 sec <<< ERROR!
 java.io.IOException: Fewer lines of output than expected
 at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
 at org.apache.hadoop.fs.DF.getMount(DF.java:150)
 at 
 org.apache.hadoop.fs.TestDFVariations.testGetMountCurrentDirectory(TestDFVariations.java:139)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9280) HADOOP-7101 was never merged from 0.20.x to the 1.x branch

2013-03-13 Thread Claus Ibsen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600945#comment-13600945
 ] 

Claus Ibsen commented on HADOOP-9280:
-

Suresh, I am not a Hadoop committer. I am just an end user who suffered from 
this issue and reported / responded on these tickets. I cannot point you to the 
code base / branch; it is the Hadoop team who knows where their code lives.






 HADOOP-7101 was never merged from 0.20.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Assignee: Suresh Srinivas
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.20 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.20 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9280) HADOOP-7101 was never merged from 0.20.x to the 1.x branch

2013-03-13 Thread Claus Ibsen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600946#comment-13600946
 ] 

Claus Ibsen commented on HADOOP-9280:
-

Suresh, see HADOOP-7101: that ticket has its fix version marked as 0.22.0, and I 
take that to mean the fix went into that branch/release.

The problem is that this fix never went into any of the 1.x branches.

See the additional details from Torsten, who commented on HADOOP-7101.

 HADOOP-7101 was never merged from 0.20.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Assignee: Suresh Srinivas
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.20 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.20 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9044) add FindClass main class to provide classpath checking of installations

2013-03-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600988#comment-13600988
 ] 

Steve Loughran commented on HADOOP-9044:


I'm using it for post-installation validation of dependent classes - to make sure 
things are complete before jobs run.
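
A minimal sketch of such an entry point, assuming a simple {{Class.forName}} + instantiate approach; this is illustrative only and not the actual FindClass code from the attached patch:

{code}
// Hypothetical classpath probe in the spirit of the FindClass tool described
// above; not the HADOOP-9044 patch itself.
public class ClassProbe {
  public static void main(String[] args) {
    if (args.length != 1) {
      System.err.println("Usage: ClassProbe <classname>");
      System.exit(2);
    }
    try {
      // Loading and instantiating forces static initializers and
      // constructor-time linkage (e.g. native codec libraries) to run.
      Class<?> clazz = Class.forName(args[0]);
      clazz.getDeclaredConstructor().newInstance();
      System.out.println("OK: " + args[0]);
    } catch (Throwable t) {
      t.printStackTrace();
      System.exit(1);   // non-zero exit code for install scripts to check
    }
  }
}
{code}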

 add FindClass main class to provide classpath checking of installations
 ---

 Key: HADOOP-9044
 URL: https://issues.apache.org/jira/browse/HADOOP-9044
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 1.1.0, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9044.patch, HADOOP-9044.patch


 It's useful in postflight checking of a hadoop installation to verify that 
 classes load, especially code with external JARs and native codecs. 
 An entry point designed to load a named class and create an instance of that 
 class can do this - and be invoked from any shell script or tool that does the 
 installation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9396) library.properties has duplicate (inconsistent) aspectj versions

2013-03-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9396:
---

Attachment: HADOOP-9396.patch

 library.properties has duplicate (inconsistent) aspectj versions
 

 Key: HADOOP-9396
 URL: https://issues.apache.org/jira/browse/HADOOP-9396
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9396.patch


 {{ivy/libraries.properties}} says
 {code}
 aspectj.version=1.6.5
 aspectj.version=1.6.11
 {code}
 Presumably the first one should be deleted

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9396) library.properties has duplicate (inconsistent) aspectj versions

2013-03-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9396:
---

Attachment: (was: HADOOP-9396.patch)

 library.properties has duplicate (inconsistent) aspectj versions
 

 Key: HADOOP-9396
 URL: https://issues.apache.org/jira/browse/HADOOP-9396
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9396.patch


 {{ivy/libraries.properties}} says
 {code}
 aspectj.version=1.6.5
 aspectj.version=1.6.11
 {code}
 Presumably the first one should be deleted

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9396) library.properties has duplicate (inconsistent) aspectj versions

2013-03-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9396:
---

Attachment: HADOOP-9396.patch

patch leaves only the latest aspectj version, 1.6.11 

 library.properties has duplicate (inconsistent) aspectj versions
 

 Key: HADOOP-9396
 URL: https://issues.apache.org/jira/browse/HADOOP-9396
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9396.patch


 {{ivy/libraries.properties}} says
 {code}
 aspectj.version=1.6.5
 aspectj.version=1.6.11
 {code}
 Presumably the first one should be deleted

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9396) library.properties has duplicate (inconsistent) aspectj versions

2013-03-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9396:
---

Status: Patch Available  (was: Open)

 library.properties has duplicate (inconsistent) aspectj versions
 

 Key: HADOOP-9396
 URL: https://issues.apache.org/jira/browse/HADOOP-9396
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9396.patch


 {{ivy/libraries.properties}} says
 {code}
 aspectj.version=1.6.5
 aspectj.version=1.6.11
 {code}
 Presumably the first one should be deleted

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9396) library.properties has duplicate (inconsistent) aspectj versions

2013-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601005#comment-13601005
 ] 

Hadoop QA commented on HADOOP-9396:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573503/HADOOP-9396.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2322//console

This message is automatically generated.

 library.properties has duplicate (inconsistent) aspectj versions
 

 Key: HADOOP-9396
 URL: https://issues.apache.org/jira/browse/HADOOP-9396
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9396.patch


 {{ivy/libraries.properties}} says
 {code}
 aspectj.version=1.6.5
 aspectj.version=1.6.11
 {code}
 Presumably the first one should be deleted

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9402) Ivy dependencies don't declare that commons-net is a core dependency

2013-03-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9402:
--

 Summary: Ivy dependencies don't declare that commons-net is a core 
dependency
 Key: HADOOP-9402
 URL: https://issues.apache.org/jira/browse/HADOOP-9402
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Minor


HDFS-3148 changed {{NetUtils}} to use commons-net components, and in doing so
added a dependency on commons-net into the core JARs. Previously it was only 
declared as a dependency for the {{ftp}} and {{s3-client}} configs. 

This impacts local ivy builds more than anything else. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9402) Ivy dependencies don't declare that commons-net is a core dependency

2013-03-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9402:
---

Attachment: HADOOP-9402.patch

Make commons-net a dependency of the {{client}} config and remove the superfluous 
declarations in {{ftp}} and {{s3-client}}.

 Ivy dependencies don't declare that commons-net is a core dependency
 

 Key: HADOOP-9402
 URL: https://issues.apache.org/jira/browse/HADOOP-9402
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9402.patch


 HDFS-3148 changed {{NetUtils}} to use commons-net components, and in doing so
 added a dependency on commons-net into the core JARs. Previously it was only 
 declared as a dependency for the {{ftp}} and {{s3-client}} configs. 
 This impacts local ivy builds more than anything else. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9402) Ivy dependencies don't declare that commons-net is a core dependency

2013-03-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9402:
---

Status: Patch Available  (was: Open)

 Ivy dependencies don't declare that commons-net is a core dependency
 

 Key: HADOOP-9402
 URL: https://issues.apache.org/jira/browse/HADOOP-9402
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9402.patch


 HDFS-3148 changed {{NetUtils}} to use commons-net components, and in doing so
 added a dependency on commons-net into the core JARs. Previously it was only 
 declared as a dependency for the {{ftp}} and {{s3-client}} configs. 
 This impacts local ivy builds more than anything else. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9402) Ivy dependencies don't declare that commons-net is a core dependency

2013-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601010#comment-13601010
 ] 

Hadoop QA commented on HADOOP-9402:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573504/HADOOP-9402.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2323//console

This message is automatically generated.

 Ivy dependencies don't declare that commons-net is a core dependency
 

 Key: HADOOP-9402
 URL: https://issues.apache.org/jira/browse/HADOOP-9402
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9402.patch


 HDFS-3148 changed {{NetUtils}} to use commons-net components, and in doing so
 added a dependency on commons-net into the core JARs. Previously it was only 
 declared as a dependency for the {{ftp}} and {{s3-client}} configs. 
 This impacts local ivy builds more than anything else. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9388) TestFsShellCopy fails on Windows

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601022#comment-13601022
 ] 

Hudson commented on HADOOP-9388:


Integrated in Hadoop-Yarn-trunk #154 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/154/])
HADOOP-9388. TestFsShellCopy fails on Windows. Contributed by Ivan Mitic. 
(Revision 1455637)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455637
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java


 TestFsShellCopy fails on Windows
 

 Key: HADOOP-9388
 URL: https://issues.apache.org/jira/browse/HADOOP-9388
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9388.trunk.patch


 Test fails on below test cases:
 {code}
 Tests run: 11, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 4.343 sec <<< FAILURE!
 testMoveDirFromLocal(org.apache.hadoop.fs.TestFsShellCopy)  Time elapsed: 29 sec <<< FAILURE!
 java.lang.AssertionError: expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:91)
 at org.junit.Assert.failNotEquals(Assert.java:645)
 at org.junit.Assert.assertEquals(Assert.java:126)
 at org.junit.Assert.assertEquals(Assert.java:470)
 at org.junit.Assert.assertEquals(Assert.java:454)
 at 
 org.apache.hadoop.fs.TestFsShellCopy.testMoveDirFromLocal(TestFsShellCopy.java:392)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
 at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
 at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 testMoveDirFromLocalDestExists(org.apache.hadoop.fs.TestFsShellCopy)  Time elapsed: 25 sec <<< FAILURE!
 java.lang.AssertionError: expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:91)
 at org.junit.Assert.failNotEquals(Assert.java:645)
 at org.junit.Assert.assertEquals(Assert.java:126)
 at org.junit.Assert.assertEquals(Assert.java:470)
 at 

[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601023#comment-13601023
 ] 

Hudson commented on HADOOP-9099:


Integrated in Hadoop-Yarn-trunk #154 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/154/])
HADOOP-9099. NetUtils.normalizeHostName fails on domains where UnknownHost 
resolves to an IP address. Contributed by Ivan Mitic. (Revision 1455629)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455629
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


 NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an 
 IP address
 ---

 Key: HADOOP-9099
 URL: https://issues.apache.org/jira/browse/HADOOP-9099
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Fix For: 1.2.0, 3.0.0, 1-win

 Attachments: HADOOP-9099.branch-1-win.patch, HADOOP-9099.trunk.patch


 I just hit this failure. We should use some more unique string for 
 UnknownHost:
 Testcase: testNormalizeHostName took 0.007 sec
   FAILED
 expected:<[65.53.5.181]> but was:<[UnknownHost]>
 junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but was:<[UnknownHost]>
   at 
 org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
 Will post a patch in a bit.
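
A hedged sketch of the environment dependence behind this failure, assuming a normalize-style helper that resolves the name and falls back to the input on {{UnknownHostException}} (not the actual NetUtils code): on networks with wildcard DNS or ISP NXDOMAIN redirection, a placeholder name like "UnknownHost" can actually resolve, so the helper returns an IP address instead of echoing the input back.

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class NormalizeSketch {
  static String normalize(String name) {
    try {
      return InetAddress.getByName(name).getHostAddress();
    } catch (UnknownHostException e) {
      return name;   // fall back to the original string when resolution fails
    }
  }

  public static void main(String[] args) {
    // On a wildcard-DNS network this prints an IP, not "UnknownHost",
    // which is the mismatch reported by the test above.
    System.out.println(normalize("UnknownHost"));
  }
}
{code}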

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9280) HADOOP-7101 was never merged from 0.23.x to the 1.x branch

2013-03-13 Thread Matthew Farrellee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Farrellee updated HADOOP-9280:
--

Description: 
See HADOOP-7101

This code fix went into the 0.20 branch.
But was never merged into the 1.x branch, which is causing problems for people 
upgrading from 0.23 to 1.0.


  was:
See HADOOP-7101

This code fix went into the 0.20 branch.
But was never merged into the 1.x branch, which is causing problems for people 
upgrading from 0.20 to 1.0.



 HADOOP-7101 was never merged from 0.23.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Assignee: Suresh Srinivas
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.20 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.23 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9280) HADOOP-7101 was never merged from 0.23.x to the 1.x branch

2013-03-13 Thread Matthew Farrellee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Farrellee updated HADOOP-9280:
--

Description: 
See HADOOP-7101

This code fix went into the 0.23 branch.
But was never merged into the 1.x branch, which is causing problems for people 
upgrading from 0.23 to 1.0.


  was:
See HADOOP-7101

This code fix went into the 0.20 branch.
But was never merged into the 1.x branch, which is causing problems for people 
upgrading from 0.23 to 1.0.



 HADOOP-7101 was never merged from 0.23.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Assignee: Suresh Srinivas
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.23 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.23 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9280) HADOOP-7101 was never merged from 0.23.x to the 1.x branch

2013-03-13 Thread Matthew Farrellee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601050#comment-13601050
 ] 

Matthew Farrellee commented on HADOOP-9280:
---

I only found the HADOOP-7101 commit on the 0.23.0 and later branches/tags:

$ git tag --contains 5cbfd7c039a4810fae58f2d53b0e9cf0cd307fcf | head -n3
release-0.23.0
release-0.23.0-rc0
release-0.23.0-rc1

The fix version, marked 0.22, may be in error.

 HADOOP-7101 was never merged from 0.23.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Assignee: Suresh Srinivas
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.23 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.23 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601075#comment-13601075
 ] 

Hudson commented on HADOOP-9379:


Integrated in Hadoop-Hdfs-0.23-Build #552 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/552/])
HADOOP-9379. capture the ulimit info after printing the log to the console. 
(Arpit Gupta via suresh) (Revision 1455519)

 Result = UNSTABLE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455519
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh


 capture the ulimit info after printing the log to the console
 -

 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial
 Fix For: 1.2.0, 0.23.7, 2.0.5-beta

 Attachments: HADOOP-9379.branch-1.patch, HADOOP-9379.patch


 Based on the discussions in HADOOP-9253, people prefer that we don't print the 
 ulimit info to the console but still have it in the logs.
 Just need to move the head statement to before the capture of the ulimit code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8515) Upgrade Jetty to the current Jetty 7 release

2013-03-13 Thread Alexey Babutin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Babutin updated HADOOP-8515:
---

Attachment: hadoop_jetty.patch.v2

 Upgrade Jetty to the current Jetty 7 release
 

 Key: HADOOP-8515
 URL: https://issues.apache.org/jira/browse/HADOOP-8515
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Luke Lu
  Labels: jetty
 Attachments: hadoop_jetty.patch.v2


 According to 
 http://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00026.html, jetty-6 
 has been effectively EOL since January. Let's bump jetty to the 7 series. The 
 current jetty 6.1.26 contains at least one known vulnerability: CVE-2011-4461.
 Note this can be an incompatible change if you reference jetty-6 packages 
 (org.mortbay.*).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9388) TestFsShellCopy fails on Windows

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601095#comment-13601095
 ] 

Hudson commented on HADOOP-9388:


Integrated in Hadoop-Hdfs-trunk #1343 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1343/])
HADOOP-9388. TestFsShellCopy fails on Windows. Contributed by Ivan Mitic. 
(Revision 1455637)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455637
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java


 TestFsShellCopy fails on Windows
 

 Key: HADOOP-9388
 URL: https://issues.apache.org/jira/browse/HADOOP-9388
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9388.trunk.patch


 Test fails on below test cases:
 {code}
 Tests run: 11, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 4.343 sec <<< FAILURE!
 testMoveDirFromLocal(org.apache.hadoop.fs.TestFsShellCopy)  Time elapsed: 29 sec <<< FAILURE!
 java.lang.AssertionError: expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:91)
 at org.junit.Assert.failNotEquals(Assert.java:645)
 at org.junit.Assert.assertEquals(Assert.java:126)
 at org.junit.Assert.assertEquals(Assert.java:470)
 at org.junit.Assert.assertEquals(Assert.java:454)
 at 
 org.apache.hadoop.fs.TestFsShellCopy.testMoveDirFromLocal(TestFsShellCopy.java:392)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
 at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
 at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 testMoveDirFromLocalDestExists(org.apache.hadoop.fs.TestFsShellCopy)  Time elapsed: 25 sec <<< FAILURE!
 java.lang.AssertionError: expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:91)
 at org.junit.Assert.failNotEquals(Assert.java:645)
 at org.junit.Assert.assertEquals(Assert.java:126)
 at org.junit.Assert.assertEquals(Assert.java:470)
 at 

[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601096#comment-13601096
 ] 

Hudson commented on HADOOP-9099:


Integrated in Hadoop-Hdfs-trunk #1343 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1343/])
HADOOP-9099. NetUtils.normalizeHostName fails on domains where UnknownHost 
resolves to an IP address. Contributed by Ivan Mitic. (Revision 1455629)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455629
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


 NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an 
 IP address
 ---

 Key: HADOOP-9099
 URL: https://issues.apache.org/jira/browse/HADOOP-9099
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Fix For: 1.2.0, 3.0.0, 1-win

 Attachments: HADOOP-9099.branch-1-win.patch, HADOOP-9099.trunk.patch


 I just hit this failure. We should use some more unique string for 
 UnknownHost:
 Testcase: testNormalizeHostName took 0.007 sec
   FAILED
 expected:<[65.53.5.181]> but was:<[UnknownHost]>
 junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but was:<[UnknownHost]>
   at 
 org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
 Will post a patch in a bit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9399) protoc maven plugin doesn't work on mvn 3.0.2

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601097#comment-13601097
 ] 

Hudson commented on HADOOP-9399:


Integrated in Hadoop-Hdfs-trunk #1343 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1343/])
HADOOP-9399. protoc maven plugin doesn't work on mvn 3.0.2. Contributed by 
Todd Lipcon. (Revision 1455771)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455771
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java


 protoc maven plugin doesn't work on mvn 3.0.2
 -

 Key: HADOOP-9399
 URL: https://issues.apache.org/jira/browse/HADOOP-9399
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: hadoop-9399.txt


 On my machine with mvn 3.0.2, I get a ClassCastException trying to use the 
 maven protoc plugin. The issue seems to be that mvn 3.0.2 sees the List<File> 
 parameter, and doesn't see the generic type argument, and stuffs Strings 
 inside instead. So, we get ClassCastException trying to use the objects as 
 Files.
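
A minimal illustration of that failure mode under the same assumption (a raw, un-parameterized List being populated with Strings and later used as a List<File>); this is not the plugin's actual code:

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class RawListInjection {
  @SuppressWarnings({"rawtypes", "unchecked"})
  public static void main(String[] args) {
    List files = new ArrayList();        // the injector only sees a raw List
    files.add("src/main/proto");         // so it stuffs the configured Strings in

    List<File> typed = (List<File>) files;   // unchecked cast succeeds at runtime
    File first = typed.get(0);               // ClassCastException: String cannot be cast to File
    System.out.println(first);
  }
}
{code}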

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9280) HADOOP-7101 was never merged from 0.23.x to the 1.x branch

2013-03-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601140#comment-13601140
 ] 

Suresh Srinivas commented on HADOOP-9280:
-

[~farrellee] Thank you for pointing out the releases and clearing things up.

bq. Suresh, I am not a Hadoop committer. I am just an end user who suffered 
from this issue and reported / responded on these tickets. I cannot point you 
to the code base / branch. That is the Hadoop team who knows where their code 
lives.
That is reasonable. However, in several comments you indicated the release as 
0.2. This is indeed a release from maybe 5-6 years ago and is not in use any 
more. My reason for asking you to point out the release is to understand the 
exact release you are talking about, so that I can figure out the release you 
were referring to. When pointing out issues, using the right release 
numbers would help.

I also think there may be some confusion around release numbering and upgrade 
paths. It indeed is confusing.
* The Hadoop community decided to call the 0.20.205 release 1.0 due to its 
stability. It is off of the 0.20 branches.
* 0.22, 0.23 and 2.x are later releases. These are off of the branches that 
come after 0.20. Hence an upgrade from 0.22 or 0.23 to 1.0 is going from a later 
release to a release based on an older branch. The right upgrade path is to go 
from 0.23 to 2.0. Going from 0.23 to 1.0 is not a supported upgrade path.

bq. The problem is that this fix never went into any of the 1.x branches.
Based on the above explanation, not all major fixes go from the 0.23 and 2.0 
releases to the 1.x releases. Only the issues that are flagged by the community as 
important go to 1.x releases. 

 HADOOP-7101 was never merged from 0.23.x to the 1.x branch
 --

 Key: HADOOP-9280
 URL: https://issues.apache.org/jira/browse/HADOOP-9280
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.4
Reporter: Claus Ibsen
Assignee: Suresh Srinivas
Priority: Critical

 See HADOOP-7101
 This code fix went into the 0.23 branch.
 But was never merged into the 1.x branch, which is causing problems for people 
 upgrading from 0.23 to 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601141#comment-13601141
 ] 

Jason Lowe commented on HADOOP-9397:


Thanks for looking into this, Chris.  I'm assuming this was changed because 
some platform doesn't support the 'z' flag for {{tar}}.  Curious though, why 
does hadoop-dist invoke tar and gzip separately, while other projects pipe the 
output of tar to gzip (e.g.: hadoop-mapreduce-project, hadoop-yarn-project)?  
The latter doesn't have this issue because the shell redirect clobbers the 
output file.  Do we really need the intermediate .tar file kept around?

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9388) TestFsShellCopy fails on Windows

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601154#comment-13601154
 ] 

Hudson commented on HADOOP-9388:


Integrated in Hadoop-Mapreduce-trunk #1371 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1371/])
HADOOP-9388. TestFsShellCopy fails on Windows. Contributed by Ivan Mitic. 
(Revision 1455637)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455637
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java


 TestFsShellCopy fails on Windows
 

 Key: HADOOP-9388
 URL: https://issues.apache.org/jira/browse/HADOOP-9388
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9388.trunk.patch


 Test fails on below test cases:
 {code}
 Tests run: 11, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 4.343 sec 
  <<< FAILURE!
 testMoveDirFromLocal(org.apache.hadoop.fs.TestFsShellCopy)  Time elapsed: 29 
 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:91)
 at org.junit.Assert.failNotEquals(Assert.java:645)
 at org.junit.Assert.assertEquals(Assert.java:126)
 at org.junit.Assert.assertEquals(Assert.java:470)
 at org.junit.Assert.assertEquals(Assert.java:454)
 at 
 org.apache.hadoop.fs.TestFsShellCopy.testMoveDirFromLocal(TestFsShellCopy.java:392)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
 at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
 at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 testMoveDirFromLocalDestExists(org.apache.hadoop.fs.TestFsShellCopy)  Time 
 elapsed: 25 sec  <<< FAILURE!
 java.lang.AssertionError: expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:91)
 at org.junit.Assert.failNotEquals(Assert.java:645)
 at org.junit.Assert.assertEquals(Assert.java:126)
 at org.junit.Assert.assertEquals(Assert.java:470)
 

[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601155#comment-13601155
 ] 

Hudson commented on HADOOP-9099:


Integrated in Hadoop-Mapreduce-trunk #1371 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1371/])
HADOOP-9099. NetUtils.normalizeHostName fails on domains where UnknownHost 
resolves to an IP address. Contributed by Ivan Mitic. (Revision 1455629)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455629
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


 NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an 
 IP address
 ---

 Key: HADOOP-9099
 URL: https://issues.apache.org/jira/browse/HADOOP-9099
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Fix For: 1.2.0, 3.0.0, 1-win

 Attachments: HADOOP-9099.branch-1-win.patch, HADOOP-9099.trunk.patch


 I just hit this failure. We should use some more unique string for 
 UnknownHost:
 Testcase: testNormalizeHostName took 0.007 sec
   FAILED
 expected:<[65.53.5.181]> but was:<[UnknownHost]>
 junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but 
 was:<[UnknownHost]>
   at 
 org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
 Will post a patch in a bit.
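
 For context, a minimal sketch of the assumption the test makes (illustration 
 only; the actual patch presumably just picks a more unique name, as the 
 description suggests):
 {code}
 import org.apache.hadoop.net.NetUtils;

 public class NormalizeHostNameExample {
   public static void main(String[] args) {
     // The test assumes this name cannot resolve, so normalizeHostName returns
     // it unchanged. On domains with wildcard DNS, "UnknownHost" resolves and
     // an IP address comes back instead, which is the mismatch shown above.
     System.out.println(NetUtils.normalizeHostName("UnknownHost"));
   }
 }
 {code}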

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9399) protoc maven plugin doesn't work on mvn 3.0.2

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601156#comment-13601156
 ] 

Hudson commented on HADOOP-9399:


Integrated in Hadoop-Mapreduce-trunk #1371 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1371/])
HADOOP-9399. protoc maven plugin doesn't work on mvn 3.0.2. Contributed by 
Todd Lipcon. (Revision 1455771)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455771
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java


 protoc maven plugin doesn't work on mvn 3.0.2
 -

 Key: HADOOP-9399
 URL: https://issues.apache.org/jira/browse/HADOOP-9399
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: hadoop-9399.txt


 On my machine with mvn 3.0.2, I get a ClassCastException trying to use the 
 maven protoc plugin. The issue seems to be that mvn 3.0.2 sees the List<File> 
 parameter, and doesn't see the generic type argument, and stuffs Strings 
 inside instead. So, we get ClassCastException trying to use the objects as 
 Files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7101) UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS context

2013-03-13 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7101:


Fix Version/s: (was: 0.22.0)
   0.23.0

Changing the fix version from 0.22 to 0.23. I also changed CHANGES.txt in the 
current branches of development to reflect this.

 UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS 
 context
 

 Key: HADOOP-7101
 URL: https://issues.apache.org/jira/browse/HADOOP-7101
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.23.0

 Attachments: hadoop-7101.branch-1.patch, hadoop-7101.txt


 If a Hadoop client is run from inside a container like Tomcat, and the 
 current AccessControlContext has a Subject associated with it that is not 
 created by Hadoop, then UserGroupInformation.getCurrentUser() will throw 
 NoSuchElementException, since it assumes that any Subject will have a hadoop 
 User principal.
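
 For illustration, a minimal sketch of the failure mode described above 
 (assumed setup; a container like Tomcat is not required to reproduce the idea):
 {code}
 import java.security.PrivilegedExceptionAction;
 import javax.security.auth.Subject;
 import org.apache.hadoop.security.UserGroupInformation;

 public class NonHadoopSubjectExample {
   public static void main(String[] args) throws Exception {
     // A Subject established by a non-Hadoop login, with no Hadoop User principal.
     Subject containerSubject = new Subject();
     Subject.doAs(containerSubject, new PrivilegedExceptionAction<Void>() {
       public Void run() throws Exception {
         // Before the fix this threw NoSuchElementException, because the code
         // assumed every Subject carries a Hadoop User principal.
         UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
         System.out.println("current user: " + ugi.getUserName());
         return null;
       }
     });
   }
 }
 {code}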

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7101) UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS context

2013-03-13 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7101:


Target Version/s: 1.2.0
   Fix Version/s: 1.2.0

I committed the patch to branch-1 and branch-1.2.

 UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS 
 context
 

 Key: HADOOP-7101
 URL: https://issues.apache.org/jira/browse/HADOOP-7101
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 1.2.0, 0.23.0

 Attachments: hadoop-7101.branch-1.patch, hadoop-7101.txt


 If a Hadoop client is run from inside a container like Tomcat, and the 
 current AccessControlContext has a Subject associated with it that is not 
 created by Hadoop, then UserGroupInformation.getCurrentUser() will throw 
 NoSuchElementException, since it assumes that any Subject will have a hadoop 
 User principal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-03-13 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8816:


Fix Version/s: 0.23.7

I pulled this back into 23.  We may want to consider marking this as 
incompatible or making it a config option, because the new 64K buffer causes 
the tests to hang on OS X, whereas it's fine up to 32K.  I haven't debugged why 
yet.

 HTTP Error 413 full HEAD if using kerberos authentication
 -

 Key: HADOOP-8816
 URL: https://issues.apache.org/jira/browse/HADOOP-8816
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.0.1-alpha
 Environment: ubuntu linux with active directory kerberos.
Reporter: Moritz Moeller
Assignee: Moritz Moeller
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-8816.patch, 
 hadoop-common-kerberos-increase-http-header-buffer-size.patch


 The HTTP Authentication: header is too large when using Kerberos, and the 
 request is rejected by Jetty because Jetty's default header size limit is too 
 low.
 Can be fixed by adding ret.setHeaderBufferSize(1024*128); in 
 org.apache.hadoop.http.HttpServer.createDefaultChannelConnector
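
 For illustration, a minimal sketch of the suggested change (assuming the Jetty 
 6 connector API that Hadoop's HttpServer used at the time; not the exact 
 committed patch):
 {code}
 import org.mortbay.jetty.Connector;
 import org.mortbay.jetty.nio.SelectChannelConnector;

 public class LargeHeaderConnector {
   // Hypothetical factory mirroring the suggestion above: raise the header
   // buffer so large Kerberos/SPNEGO authentication headers are not rejected
   // with HTTP 413.
   public static Connector create() {
     SelectChannelConnector ret = new SelectChannelConnector();
     ret.setHeaderBufferSize(1024 * 128);
     return ret;
   }
 }
 {code}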

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9387) TestDFVariations fails on Windows after the merge

2013-03-13 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-9387:
---

Status: Patch Available  (was: Open)

 TestDFVariations fails on Windows after the merge
 -

 Key: HADOOP-9387
 URL: https://issues.apache.org/jira/browse/HADOOP-9387
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-9387.trunk.2.patch, HADOOP-9387.trunk.patch


 Test fails with the following errors:
 {code}
 Running org.apache.hadoop.fs.TestDFVariations
 Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.186 sec  
 <<< FAILURE!
 testOSParsing(org.apache.hadoop.fs.TestDFVariations)  Time elapsed: 109 sec  
  <<< ERROR!
 java.io.IOException: Fewer lines of output than expected
 at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
 at org.apache.hadoop.fs.DF.getMount(DF.java:150)
 at 
 org.apache.hadoop.fs.TestDFVariations.testOSParsing(TestDFVariations.java:59)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 testGetMountCurrentDirectory(org.apache.hadoop.fs.TestDFVariations)  Time 
 elapsed: 1 sec  <<< ERROR!
 java.io.IOException: Fewer lines of output than expected
 at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
 at org.apache.hadoop.fs.DF.getMount(DF.java:150)
 at 
 org.apache.hadoop.fs.TestDFVariations.testGetMountCurrentDirectory(TestDFVariations.java:139)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9318) when exiting on a signal, print the signal name first

2013-03-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601384#comment-13601384
 ] 

Colin Patrick McCabe commented on HADOOP-9318:
--

bq. unless different actions need to be taken for each signal

At the end of our signal handler, we have to call the previous signal handler 
for that signal.  This will be different for each signal.
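
For illustration, a minimal sketch of the chaining described above (assumes 
sun.misc.Signal is available; this is not the attached patch):
{code}
import sun.misc.Signal;
import sun.misc.SignalHandler;

public class SignalChainingExample {
  public static void install(String name) {  // e.g. "TERM", "HUP", "INT"
    final SignalHandler[] previous = new SignalHandler[1];
    previous[0] = Signal.handle(new Signal(name), new SignalHandler() {
      public void handle(Signal signal) {
        // Print the signal name first, as the issue proposes.
        System.err.println("RECEIVED SIGNAL " + signal.getNumber()
            + ": SIG" + signal.getName());
        // Chain to whatever handler was registered before us; it differs per signal.
        SignalHandler old = previous[0];
        if (old != null && old != SignalHandler.SIG_DFL && old != SignalHandler.SIG_IGN) {
          old.handle(signal);
        }
      }
    });
  }
}
{code}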

 when exiting on a signal, print the signal name first
 -

 Key: HADOOP-9318
 URL: https://issues.apache.org/jira/browse/HADOOP-9318
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.5-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9318.001.patch, HADOOP-9318.002.patch, 
 HADOOP-9318.003.patch


 On UNIX, it would be nice to know when a Hadoop daemon had exited on a 
 signal.  For example, if a daemon exited because the system administrator 
 sent SIGTERM (i.e. {{killall java}}), it would be nice to know that.  
 Although some of this can be deduced from context and {{SHUTDOWN_MSG}}, it 
 would be nice to have it be explicit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9387) TestDFVariations fails on Windows after the merge

2013-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601399#comment-13601399
 ] 

Hadoop QA commented on HADOOP-9387:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12573491/HADOOP-9387.trunk.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2324//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2324//console

This message is automatically generated.

 TestDFVariations fails on Windows after the merge
 -

 Key: HADOOP-9387
 URL: https://issues.apache.org/jira/browse/HADOOP-9387
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-9387.trunk.2.patch, HADOOP-9387.trunk.patch


 Test fails with the following errors:
 {code}
 Running org.apache.hadoop.fs.TestDFVariations
 Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.186 sec  
 <<< FAILURE!
 testOSParsing(org.apache.hadoop.fs.TestDFVariations)  Time elapsed: 109 sec  
  <<< ERROR!
 java.io.IOException: Fewer lines of output than expected
 at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
 at org.apache.hadoop.fs.DF.getMount(DF.java:150)
 at 
 org.apache.hadoop.fs.TestDFVariations.testOSParsing(TestDFVariations.java:59)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 testGetMountCurrentDirectory(org.apache.hadoop.fs.TestDFVariations)  Time 
 elapsed: 1 sec  <<< ERROR!
 java.io.IOException: Fewer lines of output than expected
 at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
 at org.apache.hadoop.fs.DF.getMount(DF.java:150)
 at 
 org.apache.hadoop.fs.TestDFVariations.testGetMountCurrentDirectory(TestDFVariations.java:139)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9403) in case of zero map jobs map completion graph is broken

2013-03-13 Thread Abhishek Gayakwad (JIRA)
Abhishek Gayakwad created HADOOP-9403:
-

 Summary: in case of zero map jobs map completion graph is broken
 Key: HADOOP-9403
 URL: https://issues.apache.org/jira/browse/HADOOP-9403
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Abhishek Gayakwad
Priority: Minor


In case of zero map jobs (normal case in hive MR jobs) jobs completion map is 
broken on jobDetails.jsp. 
THis doesn't happen in case of reduce because we have a check saying if 
job.getTasks(TaskType.REDUCE).length > 0 then only show reduce completion graph

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9403) in case of zero map jobs map completion graph is broken

2013-03-13 Thread Abhishek Gayakwad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Gayakwad updated HADOOP-9403:
--

Attachment: map-completion-graph-broken.jpg

 in case of zero map jobs map completion graph is broken
 ---

 Key: HADOOP-9403
 URL: https://issues.apache.org/jira/browse/HADOOP-9403
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Abhishek Gayakwad
Priority: Minor
 Attachments: map-completion-graph-broken.jpg


 In case of zero map jobs (normal case in hive MR jobs) jobs completion map is 
 broken on jobDetails.jsp. 
 THis doesn't happen in case of reduce because we have a check saying if 
 job.getTasks(TaskType.REDUCE).length > 0 then only show reduce completion 
 graph

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9403) in case of zero map jobs map completion graph is broken

2013-03-13 Thread Abhishek Gayakwad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Gayakwad updated HADOOP-9403:
--

Description: 
In case of zero map jobs (normal case in hive MR jobs) jobs completion map is 
broken on jobDetails.jsp. 
This doesn't happen in case of reduce because we have a check saying if 
job.getTasks(TaskType.REDUCE).length > 0 then only show reduce completion graph

  was:
In case of zero map jobs (normal case in hive MR jobs) jobs completion map is 
broken on jobDetails.jsp. 
THis doesn't happen in case of reduce because we have a check saying if 
job.getTasks(TaskType.REDUCE).length > 0 then only show reduce completion graph


 in case of zero map jobs map completion graph is broken
 ---

 Key: HADOOP-9403
 URL: https://issues.apache.org/jira/browse/HADOOP-9403
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Abhishek Gayakwad
Priority: Minor
 Attachments: map-completion-graph-broken.jpg


 In case of zero map jobs (normal case in hive MR jobs) jobs completion map is 
 broken on jobDetails.jsp. 
 This doesn't happen in case of reduce because we have a check saying if 
 job.getTasks(TaskType.REDUCE).length > 0 then only show reduce completion 
 graph

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9398) Fix TestDFSShell failures on Windows

2013-03-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9398:
--

Component/s: test

 Fix TestDFSShell failures on Windows
 

 Key: HADOOP-9398
 URL: https://issues.apache.org/jira/browse/HADOOP-9398
 Project: Hadoop Common
  Issue Type: Bug
  Components: test, tools
 Environment: Windows
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: TestDFSShell-Failed-testcases-Windows.txt

   Original Estimate: 8h
  Remaining Estimate: 8h

 List of failed tests with exceptions attached. Filing under Hadoop since some 
 of the fixes will need to be in Hadoop common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9398) Fix TestDFSShell failures on Windows

2013-03-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9398:
--

Labels: windows  (was: )

 Fix TestDFSShell failures on Windows
 

 Key: HADOOP-9398
 URL: https://issues.apache.org/jira/browse/HADOOP-9398
 Project: Hadoop Common
  Issue Type: Bug
  Components: test, tools
 Environment: Windows
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
  Labels: windows
 Attachments: TestDFSShell-Failed-testcases-Windows.txt

   Original Estimate: 8h
  Remaining Estimate: 8h

 List of failed tests with exceptions attached. Filing under Hadoop since some 
 of the fixes will need to be in Hadoop common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601591#comment-13601591
 ] 

Chris Nauroth commented on HADOOP-9397:
---

Thanks, Jason.

{quote}
Curious though, why does hadoop-dist invoke tar and gzip separately, while 
other projects pipe the output of tar to gzip (e.g.: hadoop-mapreduce-project, 
hadoop-yarn-project)?
{quote}

The changes to the distribution scripts were originally submitted in 
HADOOP-9271.  I left detailed comments explaining all of the changes there.  
I'm pasting the most relevant part here:

{code}
-  run tar czf hadoop-${project.version}.tar.gz 
hadoop-${project.version}
+  run tar cf hadoop-${project.version}.tar 
hadoop-${project.version}
+  run gzip hadoop-${project.version}.tar
{code}

The 'z' flag for compression causes tar to fork a separate process for gzip. 
GnuWin32 tar has a limitation in that fork was never implemented, so this would 
fail on Windows with "Cannot fork: Function not implemented". Splitting this 
into separate tar and gzip commands works cross-platform.

Another option here would have been to control the pipeline explicitly using a 
shell pipeline (tar | gzip), but the run helper function used here isn't 
compatible with passing a command that has a pipe.

{quote}
Do we really need the intermediate .tar file kept around?
{quote}

No, and gzip actually replaces the original file, so we don't have this 
problem.  I just ran it again and confirmed that the end result was a .tar.gz 
file (and no separate .tar file).


 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9400) Investigate emulating sticky bit directory permissions on Windows

2013-03-13 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601607#comment-13601607
 ] 

Bikas Saha commented on HADOOP-9400:


Which use case does this target?

 Investigate emulating sticky bit directory permissions on Windows
 -

 Key: HADOOP-9400
 URL: https://issues.apache.org/jira/browse/HADOOP-9400
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
 Environment: Windows
Reporter: Arpit Agarwal
  Labels: windows
 Fix For: 3.0.0


 It should be possible to emulate sticky bit permissions on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601628#comment-13601628
 ] 

Jason Lowe commented on HADOOP-9397:


Thanks for the clarification, but I'm still wondering why this isn't consistent 
with the tar scripts used by hadoop-yarn-project, hadoop-mapreduce-project, 
etc.  Any legitimate reason this is different?

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601644#comment-13601644
 ] 

Siddharth Seth commented on HADOOP-9299:


bq. I might know why this wasn't seen before. Sometime back the job client was 
modified, for oozie, to get a HS token if the conf 
mapreduce.history.server.delegationtoken.required is defined. Acquiring a RM 
token will implicitly set the conf value. If set, job submission will 
automatically get a HS token with a renewer of Master.getMasterPrincipal(conf) 
which returns yarn.resourcemanager.principal. Perhaps the HS token fetching was 
added post-2.0.2?
I think the change in MAPREDUCE-4921 (using Master.getMasterPrincipal(conf) as 
the renewer) is what exposed this. Doesn't affect the DFS tokens, since there's 
an explicit isSecurityEnabled() check before attempting to get HDFS delegation 
tokens. 
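
For illustration, a minimal sketch of the kind of guard referred to on the DFS 
side (an assumed pattern, not a quote of the actual job client code):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class TokenFetchGuard {
  // Hypothetical helper: only ask the filesystem for a delegation token when
  // security is actually enabled, so SIMPLE-auth clusters never exercise the
  // kerberos auth_to_local rules.
  public static void maybeCollectToken(FileSystem fs, String renewer, Credentials creds)
      throws IOException {
    if (!UserGroupInformation.isSecurityEnabled()) {
      return;
    }
    Token<?> token = fs.getDelegationToken(renewer);
    if (token != null) {
      creds.addToken(token.getService(), token);
    }
  }
}
{code}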

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Blocker

 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than the oozie one is because when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8989) hadoop dfs -find feature

2013-03-13 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-8989:
---

Status: Open  (was: Patch Available)

 hadoop dfs -find feature
 

 Key: HADOOP-8989
 URL: https://issues.apache.org/jira/browse/HADOOP-8989
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Marco Nicosia
Assignee: Jonathan Allen
 Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch


 Both sysadmins and users make frequent use of the unix 'find' command, but 
 Hadoop has no correlate. Without this, users are writing scripts which make 
 heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
 -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
 client side. Possibly an in-NameNode find operation would be only a bit more 
 taxing on the NameNode, but significantly faster from the client's point of 
 view?
 The minimum set of options I can think of which would make a Hadoop find 
 command generally useful is (in priority order):
 * -type (file or directory, for now)
 * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
 * -print0 (for piping to xargs -0)
 * -depth
 * -owner/-group (and -nouser/-nogroup)
 * -name (allowing for shell pattern, or even regex?)
 * -perm
 * -size
 One possible special case, but could possibly be really cool if it ran from 
 within the NameNode:
 * -delete
 The hadoop dfs -lsr | hadoop dfs -rm cycle is really, really slow.
 Lower priority, some people do use operators, mostly to execute -or searches 
 such as:
 * find / \(-nouser -or -nogroup\)
 Finally, I thought I'd include a link to the [Posix spec for 
 find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
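
 For a sense of what users script today, here is a minimal client-side sketch 
 covering just a -name style match (illustration only; it is neither the 
 proposed in-NameNode implementation nor the attached patches):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class ClientSideFind {
   // Recursively walk the namespace from the client, printing paths whose final
   // component matches a simple glob. This is the slow, NameNode-heavy approach
   // the description argues a real -find command should replace.
   static void find(FileSystem fs, Path dir, String glob) throws IOException {
     String regex = glob.replace(".", "\\.").replace("*", ".*").replace("?", ".");
     for (FileStatus status : fs.listStatus(dir)) {
       if (status.getPath().getName().matches(regex)) {
         System.out.println(status.getPath());
       }
       if (status.isDirectory()) {
         find(fs, status.getPath(), glob);
       }
     }
   }

   public static void main(String[] args) throws IOException {
     FileSystem fs = FileSystem.get(new Configuration());
     find(fs, new Path(args[0]), args.length > 1 ? args[1] : "*");
   }
 }
 {code}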

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8989) hadoop dfs -find feature

2013-03-13 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-8989:
---

Status: Patch Available  (was: Open)

 hadoop dfs -find feature
 

 Key: HADOOP-8989
 URL: https://issues.apache.org/jira/browse/HADOOP-8989
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Marco Nicosia
Assignee: Jonathan Allen
 Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch


 Both sysadmins and users make frequent use of the unix 'find' command, but 
 Hadoop has no correlate. Without this, users are writing scripts which make 
 heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
 -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
 client side. Possibly an in-NameNode find operation would be only a bit more 
 taxing on the NameNode, but significantly faster from the client's point of 
 view?
 The minimum set of options I can think of which would make a Hadoop find 
 command generally useful is (in priority order):
 * -type (file or directory, for now)
 * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
 * -print0 (for piping to xargs -0)
 * -depth
 * -owner/-group (and -nouser/-nogroup)
 * -name (allowing for shell pattern, or even regex?)
 * -perm
 * -size
 One possible special case, but could possibly be really cool if it ran from 
 within the NameNode:
 * -delete
 The hadoop dfs -lsr | hadoop dfs -rm cycle is really, really slow.
 Lower priority, some people do use operators, mostly to execute -or searches 
 such as:
 * find / \(-nouser -or -nogroup\)
 Finally, I thought I'd include a link to the [Posix spec for 
 find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9403) in case of zero map jobs map completion graph is broken

2013-03-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601679#comment-13601679
 ] 

Suresh Srinivas commented on HADOOP-9403:
-

This should be a MapReduce project jira, right? Also, is the Affects Version/s 
really 0.20.2?

 in case of zero map jobs map completion graph is broken
 ---

 Key: HADOOP-9403
 URL: https://issues.apache.org/jira/browse/HADOOP-9403
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Abhishek Gayakwad
Priority: Minor
 Attachments: map-completion-graph-broken.jpg


 In case of zero map jobs (normal case in hive MR jobs) jobs completion map is 
 broken on jobDetails.jsp. 
 This doesn't happen in case of reduce because we have a check saying if 
 job.getTasks(TaskType.REDUCE).length > 0 then only show reduce completion 
 graph

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601680#comment-13601680
 ] 

Chris Nauroth commented on HADOOP-9397:
---

The difference is the use of the helper run method for error reporting 
(hadoop-dist) vs. not explicitly reporting errors (hadoop-yarn-project, 
hadoop-mapreduce-project, etc.).  As far as I can tell from revision history, 
this difference has always been there, dating back to initial commit of the 
code in HADOOP-7737 in 2011.  I don't know that there is a legitimate reason 
for it though.  You could argue that hadoop-yarn-project, 
hadoop-mapreduce-project, etc. should be doing something to check for errors 
after each command and report it, more like what hadoop-dist does.


 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601699#comment-13601699
 ] 

Daryn Sharp commented on HADOOP-9299:
-

That might be it.  On a tangent, there shouldn't be an explicit check because 
the remote server might have security enabled, but that's a separate issue...

I've got a patch I'm almost done testing that will reduce principals to simple 
names if security is off.  I can also finally run all but a few tests when my 
machine is kinit-ed!

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Blocker

 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than the oozie one is because when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601712#comment-13601712
 ] 

Roman Shaposhnik commented on HADOOP-9299:
--

I'd be more than happy to test any patches in Bigtop

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Blocker

 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than the oozie one is because when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601736#comment-13601736
 ] 

Jason Lowe commented on HADOOP-9397:


bq. You could argue that hadoop-yarn-project, hadoop-mapreduce-project, etc. 
should be doing something to check for errors after each command and report it, 
more like what hadoop-dist does.

That's basically what I'm getting at.  They probably shouldn't be different, so 
I was wondering which one is the right one.  Thanks for digging up the 
history.

+1, will commit shortly.  I'll also file followup JIRAs to make the tar scripts 
in the other projects consistent with the one in hadoop-dist.


 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-9397:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks, Chris!  I committed this to trunk.

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601747#comment-13601747
 ] 

Hudson commented on HADOOP-9397:


Integrated in Hadoop-trunk-Commit #3468 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3468/])
HADOOP-9397. Incremental dist tar build fails. Contributed by Chris Nauroth 
(Revision 1456212)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456212
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-dist/pom.xml


 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601753#comment-13601753
 ] 

Chris Nauroth commented on HADOOP-9397:
---

Thanks for the review and commit, Jason!  I'll keep an eye out for the 
follow-ups.

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9299:


Attachment: HADOOP-9299.patch

This should allow kinit-ed users, or full principals, to be reduced to simple 
user names when the client is using SIMPLE auth.  I did this by changing the 
default auth_to_local value when it's not set.  Kerberos does what it did 
before; other auths now default to simple reduction unless explicit rules are 
defined.

Seems to pass all the auth/token related tests.

[~tucu00], please take a look at my removal and simplification of the seemingly 
awkward rules loading logic.  I traced it down to an old jira of yours that 
appears to have worked around a manifestation of this bug.

[~rvs], sure, please give it a whirl!
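
For reference, a minimal sketch of how the hadoop-auth {{KerberosName}} API 
behaves with an explicit rule versus only {{DEFAULT}}.  The rule string and 
principal below are purely illustrative; they are not the new default this 
patch installs:

{code}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class ShortNameDemo {
  public static void main(String[] args) throws Exception {
    // Illustrative rule: reduce any two-component principal to its first
    // component, then fall back to DEFAULT for the local default realm.
    KerberosName.setRules("RULE:[2:$1]\nDEFAULT");
    KerberosName kn = new KerberosName("yarn/localhost@LOCALREALM");
    System.out.println(kn.getShortName());   // prints "yarn"

    // With only "DEFAULT" and a realm that is not the machine's default
    // realm, getShortName() throws KerberosName$NoMatchingRule -- the
    // failure reported in this issue.
  }
}
{code}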

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Blocker
 Attachments: HADOOP-9299.patch


 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than the oozie one is because when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9299:


Assignee: Daryn Sharp
  Status: Patch Available  (was: Open)

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9299.patch


 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than the oozie one is because when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9404) Reconcile dist-maketar.sh and dist-tar-stitching.sh

2013-03-13 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9404:
--

 Summary: Reconcile dist-maketar.sh and dist-tar-stitching.sh
 Key: HADOOP-9404
 URL: https://issues.apache.org/jira/browse/HADOOP-9404
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Priority: Trivial


Per a discussion in HADOOP-9397, there are a couple of different ways 
compressed tarballs are generated during the build.  Some projects create a 
{{dist-maketar.sh}} script that pipes the output of tar through gzip, while 
hadoop-dist creates a {{dist-tar-stitching.sh}} script which runs the commands 
separately.  Ideally these should be made consistent.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601806#comment-13601806
 ] 

Hadoop QA commented on HADOOP-9299:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573606/HADOOP-9299.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2325//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2325//console

This message is automatically generated.

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9299.patch


 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than the oozie one is because when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again -- the kerberos 

[jira] [Updated] (HADOOP-9380) Add totalLength to rpc response

2013-03-13 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9380:
-

Attachment: HADOOP-9380-2.patch

Updated patch.

 Add totalLength to rpc response
 ---

 Key: HADOOP-9380
 URL: https://issues.apache.org/jira/browse/HADOOP-9380
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9380-2.patch, HADOOP-9380.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9387) TestDFVariations fails on Windows after the merge

2013-03-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601817#comment-13601817
 ] 

Chris Nauroth commented on HADOOP-9387:
---

Hi Ivan,

Thanks for working on cleaning it up.  This code has tripped me up in the past.

{code}
  public String getFilesystem() throws IOException {
if (Shell.WINDOWS) {
  this.filesystem = dirFile.getCanonicalPath().substring(0, 2);
  return this.filesystem;
} else {
  run();
  return filesystem;
}
  }
{code}

It appears that there is a pitfall if running on non-Windows.  If the user of 
this class calls {{getFilesystem}} before calling 
{{getMount}}, then {{filesystem}} will be uninitialized.  This is because on 
non-Windows, we assign to the {{filesystem}} member variable inside 
{{parseOutput}}, which gets called from {{getMount}}, but not 
{{getFilesystem}}.  Perhaps it's safer to move all of the parsing logic back 
into {{parseExecResult}}.
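
If it helps, here is a rough, untested sketch of that idea, assuming the usual 
{{Shell#parseExecResult(BufferedReader)}} hook and the existing {{filesystem}} 
and {{mount}} fields (the elided comment stands in for the existing column 
parsing):

{code}
@Override
protected void parseExecResult(BufferedReader lines) throws IOException {
  lines.readLine();                       // skip the df header line
  String line = lines.readLine();
  if (line == null) {
    throw new IOException("Fewer lines of output than expected");
  }
  StringTokenizer tokens = new StringTokenizer(line, " \t\n\r\f%");
  this.filesystem = tokens.nextToken();   // set even if getMount() is never called
  // ... existing getOSType() switch for the remaining columns (capacity,
  // used, available, pct used, mount) would go here ...
}
{code}

That way both fields are populated no matter which getter the caller invokes 
first.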

{code}
switch(getOSType()) {
  case OS_TYPE_AIX:
Long.parseLong(tokens.nextToken()); // capacity
Long.parseLong(tokens.nextToken()); // available
Integer.parseInt(tokens.nextToken()); // pct used
tokens.nextToken();
tokens.nextToken();
this.mount = tokens.nextToken();
break;

  case OS_TYPE_WIN:
  case OS_TYPE_SOLARIS:
  case OS_TYPE_MAC:
  case OS_TYPE_UNIX:
  default:
Long.parseLong(tokens.nextToken()); // capacity
Long.parseLong(tokens.nextToken()); // used
Long.parseLong(tokens.nextToken()); // available
Integer.parseInt(tokens.nextToken()); // pct used
this.mount = tokens.nextToken();
break;
   }
{code}

The patch removes the special handling for AIX, so I think this would cause a 
regression if running on that platform.  I've never used AIX, but what I infer 
from the old code is that the output of df -k on Linux places mount in column 
6, whereas on AIX it goes in column 7.  Therefore, we need an extra call to 
{{StringTokenizer#nextToken}} if running on AIX.  Unfortunately, I don't have 
access to an AIX machine to confirm.  Perhaps this code could be simplified to 
always look at the last column without platform checks and special cases, i.e.:

{code}
String[] columns = line.split("\\s+");
this.mount = columns[columns.length - 1];
{code}

However, that would be dependent on AIX printing mount in the last column, and 
without an AIX machine, I can't confirm.
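
To illustrate the last-column idea (with made-up sample lines, since I can't 
verify real AIX output), something like this would print the mount point for 
both layouts:

{code}
public class LastColumnDemo {
  public static void main(String[] args) {
    // Made-up df -k sample lines, purely illustrative: one Linux-style line
    // (mount in column 6) and one AIX-style line (mount in column 7).
    String[] samples = {
      "/dev/sda1      103081248  7295176  90496072   8% /data",
      "/dev/hd4           65536    29380     56%  3012    19% /"
    };
    for (String line : samples) {
      String[] columns = line.trim().split("\\s+");
      System.out.println(columns[columns.length - 1]);  // prints the mount point
    }
  }
}
{code}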

I did confirm that the patch works for Windows, so maybe the simplest path 
forward is to prepare a smaller patch just for fixing Windows, and then file a 
follow-up jira for future refactoring work.  That would give more time to track 
down someone with access to AIX to help with testing.


 TestDFVariations fails on Windows after the merge
 -

 Key: HADOOP-9387
 URL: https://issues.apache.org/jira/browse/HADOOP-9387
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-9387.trunk.2.patch, HADOOP-9387.trunk.patch


 Test fails with the following errors:
 {code}
 Running org.apache.hadoop.fs.TestDFVariations
 Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.186 sec  
 FAILURE!
 testOSParsing(org.apache.hadoop.fs.TestDFVariations)  Time elapsed: 109 sec  
  ERROR!
 java.io.IOException: Fewer lines of output than expected
 at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
 at org.apache.hadoop.fs.DF.getMount(DF.java:150)
 at 
 org.apache.hadoop.fs.TestDFVariations.testOSParsing(TestDFVariations.java:59)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 testGetMountCurrentDirectory(org.apache.hadoop.fs.TestDFVariations)  Time 
 elapsed: 1 sec   ERROR!
 java.io.IOException: Fewer lines of output than expected
 at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
 at org.apache.hadoop.fs.DF.getMount(DF.java:150)
 at 
 

[jira] [Assigned] (HADOOP-8990) Some minor issus in protobuf based ipc

2013-03-13 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia reassigned HADOOP-8990:


Assignee: Sanjay Radia

 Some minor issus in protobuf based ipc
 --

 Key: HADOOP-8990
 URL: https://issues.apache.org/jira/browse/HADOOP-8990
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Sanjay Radia
Priority: Minor

 1. proto file naming
 RpcPayloadHeader.proto includes not only RpcPayLoadHeaderProto but also 
 RpcResponseHeaderProto, which does not match the file name.
 hadoop_rpc.proto only includes HadoopRpcRequestProto, and the filename 
 hadoop_rpc is odd compared to the other .proto file names.
 How about merging those two files into HadoopRpc.proto?
 2. proto class naming
 In the rpc request, RpcPayloadHeaderProto includes the callId, but in the rpc 
 response the callId is included in RpcResponseHeaderProto, and there is also 
 HadoopRpcRequestProto; this is confusing.
 3. The rpc system is not fully protobuf based; there are still some Writables: 
 RpcRequestWritable and RpcResponseWritable, and the rpc response exception 
 name and stack trace are plain strings.
 Also, RpcRequestWritable uses a protobuf-style varint32 prefix, but 
 RpcResponseWritable uses an int32 prefix; why this inconsistency?
 Currently the rpc request is split into length, PayLoadHeader and PayLoad, and 
 the response into RpcResponseHeader, response and error message. 
 I think wrapping the request and response into a single RequestProto and 
 ResponseProto is better, because this gives a formal, complete wire format 
 definition; otherwise developers need to read the source code and hard-code 
 the communication format.
 These issues make it confusing and hard for developers to use these rpc 
 interfaces.
 Some of them can be solved without breaking compatibility and some cannot, but 
 at least we need to know what will change and what will stay stable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9380) Add totalLength to rpc response

2013-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601861#comment-13601861
 ] 

Hadoop QA commented on HADOOP-9380:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573620/HADOOP-9380-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2326//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2326//console

This message is automatically generated.

 Add totalLength to rpc response
 ---

 Key: HADOOP-9380
 URL: https://issues.apache.org/jira/browse/HADOOP-9380
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9380-2.patch, HADOOP-9380.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601876#comment-13601876
 ] 

Alejandro Abdelnur commented on HADOOP-9299:


In UGI, the skip check should be kept; otherwise it will break components that 
use hadoop-auth but don't have hadoop config files in their classpath 
(Oozie, HttpFS).

{code}
-  if (!skipRulesSetting) {
-HadoopKerberosName.setConfiguration(conf);
-  }
+  HadoopKerberosName.setConfiguration(conf);
{code}


 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9299.patch


 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than the oozie one is because when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601878#comment-13601878
 ] 

Alejandro Abdelnur commented on HADOOP-9299:


Unless I'm missing something, we are using the Kerberos principal short name 
when interacting with an insecure cluster; that seems wrong, no?

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9299.patch


 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than the oozie one is because when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9371) Define Semantics of FileSystem and FileContext more rigorously

2013-03-13 Thread Mike Liddell (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601901#comment-13601901
 ] 

Mike Liddell commented on HADOOP-9371:
--

A few items for consideration:

Possible additions to 'implicit assumption': 
 - paths are represented as Unicode strings
 - equality/comparison of paths is based on binary content. This implies 
case-sensitivity and no locale-specific comparison rules.

The data added to a file during a write or append MAY be visible while the 
write operation is in progress.
- Allowing read(s) during write seems to break the subsequent rule that 
readers always see consistent data.

 Deleting the root path, /, MUST fail iff recursive==false.
- If the root path is empty, it seems reasonable for delete(/,false) to 
succeed but to have no effect.

 After a file is created, all ls operations on the file and parent directory 
 MUST not find the file
- copy-paste error - after a file is deleted ...

 Security: if a caller has the rights to list a directory, it has the rights 
 to list directories all the way up the tree.
- This point raises lots of interesting questions and requirements for 
individual methods.  A section on security assumptions/rules would be great.




 Define Semantics of FileSystem and FileContext more rigorously
 --

 Key: HADOOP-9371
 URL: https://issues.apache.org/jira/browse/HADOOP-9371
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 1.2.0, 3.0.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361.patch, HadoopFilesystemContract.pdf

   Original Estimate: 48h
  Remaining Estimate: 48h

 The semantics of {{FileSystem}} and {{FileContext}} are not completely 
 defined in terms of 
 # core expectations of a filesystem
 # consistency requirements.
 # concurrency requirements.
 # minimum scale limits
 Furthermore, methods are not defined strictly enough in terms of their 
 outcomes and failure modes.
 The requirements and method semantics should be defined more strictly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9390) Tests fail when run as root/Administrator.

2013-03-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601915#comment-13601915
 ] 

Chris Nauroth commented on HADOOP-9390:
---

I am planning on closing this as Won't Fix on Friday, 3/15, unless I hear 
feedback otherwise.

 Tests fail when run as root/Administrator.
 --

 Key: HADOOP-9390
 URL: https://issues.apache.org/jira/browse/HADOOP-9390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
 Fix For: 3.0.0


 There is at least one test, {{TestDiskChecker}}, that fails when running as 
 root on Linux or Administrator on Windows.  The test assumes that setting 
 file permissions can make a file inaccessible, without considering the 
 possibility that root can access everything.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira