[jira] [Commented] (HADOOP-9387) TestDFVariations fails on Windows after the merge

2013-03-14 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602103#comment-13602103
 ] 

Ivan Mitic commented on HADOOP-9387:


Thanks for reviewing, Chris!

Hmm, something does not add up; it looks like you are looking at an older 
version of the source code :) Can you please double-check? I looked up the 
history, and HADOOP-9337 removed the special handling for AIX mentioned above.

> TestDFVariations fails on Windows after the merge
> -
>
> Key: HADOOP-9387
> URL: https://issues.apache.org/jira/browse/HADOOP-9387
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-9387.trunk.2.patch, HADOOP-9387.trunk.patch
>
>
> Test fails with the following errors:
> {code}
> Running org.apache.hadoop.fs.TestDFVariations
> Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.186 sec <<< 
> FAILURE!
> testOSParsing(org.apache.hadoop.fs.TestDFVariations)  Time elapsed: 109 sec  
> <<< ERROR!
> java.io.IOException: Fewer lines of output than expected
> at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
> at org.apache.hadoop.fs.DF.getMount(DF.java:150)
> at 
> org.apache.hadoop.fs.TestDFVariations.testOSParsing(TestDFVariations.java:59)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> testGetMountCurrentDirectory(org.apache.hadoop.fs.TestDFVariations)  Time 
> elapsed: 1 sec  <<< ERROR!
> java.io.IOException: Fewer lines of output than expected
> at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
> at org.apache.hadoop.fs.DF.getMount(DF.java:150)
> at 
> org.apache.hadoop.fs.TestDFVariations.testGetMountCurrentDirectory(TestDFVariations.java:139)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9371) Define Semantics of FileSystem and FileContext more rigorously

2013-03-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602166#comment-13602166
 ] 

Steve Loughran commented on HADOOP-9371:


[~mikelid] all good points.

How about you submit a patch to the .md file for the implicit assumptions, the 
copy-paste behaviour, and the root dir? That last one is easy to test on 
everything but localfs.

The question of what happens to a read during a write or append is a tough one. 
HDFS silently serves up new data when the read crosses a block boundary, which 
I'm not convinced is what anyone expects to happen.

We could rephrase consistency as "after any update operation has completed, 
read operations initiated afterwards see a consistent view of the latest data"?

Even there, the ambiguity of what happens on a read-during-write is something 
we should pull out, as it may be where user expectations != HDFS behaviour.
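The read-during-write ambiguity is easy to demonstrate even on a local filesystem with plain JDK streams (a sketch using java.nio, not Hadoop's FileSystem API): a reader that opened the file before an append may observe the new bytes immediately, whereas HDFS only promises visibility after hflush/close.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadDuringWrite {

    // Returns what a reader sees before and after a concurrent append.
    static String[] readAroundWrite() {
        try {
            Path f = Files.createTempFile("rdw", ".dat");
            String[] seen = new String[2];
            try (OutputStream out = Files.newOutputStream(f)) {
                out.write("old".getBytes(StandardCharsets.UTF_8));
                out.flush();
                // Open a reader while the writer still holds the file open.
                try (InputStream in = Files.newInputStream(f)) {
                    byte[] buf = new byte[16];
                    int n = in.read(buf);
                    seen[0] = new String(buf, 0, n, StandardCharsets.UTF_8);
                    // Append while the reader is still open.
                    out.write("new".getBytes(StandardCharsets.UTF_8));
                    out.flush();
                    n = in.read(buf);
                    // Local filesystems typically expose the appended bytes
                    // right away; HDFS makes no such promise before
                    // hflush()/close(), which is exactly the ambiguity above.
                    seen[1] = n > 0
                            ? new String(buf, 0, n, StandardCharsets.UTF_8)
                            : "<eof>";
                }
            } finally {
                Files.deleteIfExists(f);
            }
            return seen;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        String[] seen = readAroundWrite();
        System.out.println("before append: " + seen[0]);
        System.out.println("after append:  " + seen[1]);
    }
}
```

On ext4 or APFS the second read usually returns the appended bytes; the point is that nothing in the current contract says which behaviour is correct.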

 

> Define Semantics of FileSystem and FileContext more rigorously
> --
>
> Key: HADOOP-9371
> URL: https://issues.apache.org/jira/browse/HADOOP-9371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 1.2.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361.patch, HadoopFilesystemContract.pdf
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The semantics of {{FileSystem}} and {{FileContext}} are not completely 
> defined in terms of 
> # core expectations of a filesystem
> # consistency requirements.
> # concurrency requirements.
> # minimum scale limits
> Furthermore, methods are not defined strictly enough in terms of their 
> outcomes and failure modes.
> The requirements and method semantics should be defined more strictly.



[jira] [Commented] (HADOOP-9370) Write FSWrapper class to wrap FileSystem and FileContext for better test coverage

2013-03-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602167#comment-13602167
 ] 

Steve Loughran commented on HADOOP-9370:


Andrew, I'll look at this, but probably not for a few days (work + a 
conference). Ping me if I don't.

I think we may want to consider doing a branch here to get the dev cycle 
through fast, with the spec handled as review-then-commit (RTC) and test 
evolution done as commit-then-review (CTR). Otherwise we could work via git.

> Write FSWrapper class to wrap FileSystem and FileContext for better test 
> coverage
> -
>
> Key: HADOOP-9370
> URL: https://issues.apache.org/jira/browse/HADOOP-9370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 1.2.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Andrew Wang
> Attachments: hadoop-9370-1.patch, hadoop-9370-2.patch
>
>
> Take the FileSystem and FileContext from HADOOP-9355 and use it for tests 
> against both FS APIs



[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602177#comment-13602177
 ] 

Hudson commented on HADOOP-9397:


Integrated in Hadoop-Yarn-trunk #155 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/155/])
HADOOP-9397. Incremental dist tar build fails. Contributed by Chris Nauroth 
(Revision 1456212)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456212
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-dist/pom.xml


> Incremental dist tar build fails
> 
>
> Key: HADOOP-9397
> URL: https://issues.apache.org/jira/browse/HADOOP-9397
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-9397.1.patch
>
>
> Building a dist tar build when the dist tarball already exists from a 
> previous build fails.



[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-03-14 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602210#comment-13602210
 ] 

Tom White commented on HADOOP-9258:
---

I tried running Jets3tS3FileSystemContractTest by setting the following 
properties in src/test/resources/core-site.xml:

{noformat}
<property>
  <name>test.fs.s3.name</name>
  <value>s3://mytestbucket</value>
  <description>The name of the s3 file system for testing.</description>
</property>

<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>xxx</value>
</property>

<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>xxx</value>
</property>
{noformat}

However, testLSRootDir is consistently hanging with the following stacktrace.

Steve, did you manage to run against S3 yet?

{noformat}
"main" prio=5 tid=10500 nid=0x100601000 runnable [1005fd000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at 
com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
at 
com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:863)
- locked <7bb659288> (a java.lang.Object)
at 
com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:820)
at 
com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
- locked <7bb659338> (a com.sun.net.ssl.internal.ssl.AppInputStream)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
- locked <7bb66ad28> (a java.io.BufferedInputStream)
at 
org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
at 
org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
at 
org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
at 
org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413)
at 
org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973)
at 
org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
at 
org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098)
at 
org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at 
org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at 
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at 
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at 
org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:357)
at 
org.jets3t.service.impl.rest.httpclient.RestS3Service.performRestGet(RestS3Service.java:686)
at 
org.jets3t.service.impl.rest.httpclient.RestS3Service.listObjectsInternal(RestS3Service.java:1083)
at 
org.jets3t.service.impl.rest.httpclient.RestS3Service.listObjectsImpl(RestS3Service.java:1046)
at org.jets3t.service.S3Service.listObjects(S3Service.java:1299)
at org.jets3t.service.S3Service.listObjects(S3Service.java:1271)
at org.jets3t.service.S3Service.listObjects(S3Service.java:1137)
at 
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.listSubPaths(Jets3tFileSystemStore.java:279)
at 
org.apache.hadoop.fs.s3.S3FileSystem.listStatus(S3FileSystem.java:202)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1430)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1470)
at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1745)
at 
org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1744)
at 
org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1727)
at 
org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:1820)
at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:1797)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.assertListFilesFinds(FileSystemContractBaseTest.java:719)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.testLSRootDir(FileSystemContractBaseTest.java:704)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.Te

[jira] [Commented] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1360#comment-1360
 ] 

Hudson commented on HADOOP-8816:


Integrated in Hadoop-Hdfs-0.23-Build #553 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/553/])
HADOOP-8816.  HTTP Error 413 full HEAD if using kerberos authentication 
(daryn) (Revision 1455974)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455974
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


> HTTP Error 413 full HEAD if using kerberos authentication
> -
>
> Key: HADOOP-8816
> URL: https://issues.apache.org/jira/browse/HADOOP-8816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.0.1-alpha
> Environment: ubuntu linux with active directory kerberos.
>Reporter: Moritz Moeller
>Assignee: Moritz Moeller
> Fix For: 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-8816.patch, 
> hadoop-common-kerberos-increase-http-header-buffer-size.patch
>
>
> The HTTP Authentication: header is too large if using kerberos and the 
> request is rejected by Jetty because Jetty has a too low default header size 
> limit.
> Can be fixed by adding ret.setHeaderBufferSize(1024*128); in 
> org.apache.hadoop.http.HttpServer.createDefaultChannelConnector
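To see why the default buffer overflows, here is a back-of-the-envelope sketch (the 4KB default and the ~12KB ticket size are assumed typical values, not figures from the report): a Kerberos service ticket carrying an Active Directory PAC grows by about a third under base64 encoding and easily exceeds a 4KB header buffer, while fitting comfortably in the suggested 128KB.

```java
import java.util.Base64;

public class HeaderSizeDemo {
    // Assumed values: a ~4KB default header buffer vs. the 128KB value
    // suggested in the issue description.
    static final int DEFAULT_BUFFER = 4 * 1024;
    static final int SUGGESTED_BUFFER = 1024 * 128;

    // Length of "Authorization: Negotiate <base64 token>\r\n" for a raw
    // ticket of the given size.
    static int headerLength(int ticketBytes) {
        String b64 = Base64.getEncoder().encodeToString(new byte[ticketBytes]);
        return ("Authorization: Negotiate " + b64 + "\r\n").length();
    }

    public static void main(String[] args) {
        int len = headerLength(12 * 1024); // ~12KB ticket with an AD PAC
        System.out.println("header bytes: " + len);
        System.out.println("fits 4KB default? " + (len <= DEFAULT_BUFFER));
        System.out.println("fits 128KB?       " + (len <= SUGGESTED_BUFFER));
    }
}
```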



[jira] [Commented] (HADOOP-9403) in case of zero map jobs map completion graph is broken

2013-03-14 Thread Abhishek Gayakwad (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602238#comment-13602238
 ] 

Abhishek Gayakwad commented on HADOOP-9403:
---

The version is correct, and as you mentioned, this issue belongs to the 
MAPREDUCE project. Should I create an issue there and change the status of 
this one?

> in case of zero map jobs map completion graph is broken
> ---
>
> Key: HADOOP-9403
> URL: https://issues.apache.org/jira/browse/HADOOP-9403
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Abhishek Gayakwad
>Priority: Minor
> Attachments: map-completion-graph-broken.jpg
>
>
> In the case of zero-map jobs (a normal case for Hive MR jobs), the map 
> completion graph is broken on jobDetails.jsp. 
> This doesn't happen for reduce because there is a check: the reduce 
> completion graph is shown only if job.getTasks(TaskType.REDUCE).length > 0.
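The reduce-side guard the reporter describes boils down to a task-count check; a minimal stand-in (not the actual jobdetails.jsp code, which works with JobInProgress and TaskType) would be:

```java
public class CompletionGraphGuard {
    // Render a completion graph only when the job actually has tasks of
    // that type; a zero-map job would otherwise produce a broken graph.
    static boolean shouldRenderGraph(int taskCount) {
        return taskCount > 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldRenderGraph(0));  // zero map tasks: skip
        System.out.println(shouldRenderGraph(12)); // normal job: render
    }
}
```

Applying the same guard to the map graph would avoid the reported breakage.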



[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602244#comment-13602244
 ] 

Hudson commented on HADOOP-9397:


Integrated in Hadoop-Hdfs-trunk #1344 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1344/])
HADOOP-9397. Incremental dist tar build fails. Contributed by Chris Nauroth 
(Revision 1456212)

 Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456212
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-dist/pom.xml




[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602271#comment-13602271
 ] 

Daryn Sharp commented on HADOOP-9299:
-

bq. In UGI, the skip check should be kept, else it will break things for things 
using hadoop-auth which don't use hadoop config files in their classpath.

I think in those cases the defaults will kick in, which is the behavior we'd 
want?  Or is there a quirk that will cause problems?

bq. Unless I'm missing something, we are using the Kerberos principal short 
name when interacting with an unsecure cluster, that seems wrong, no?

Please elaborate?  Overall, I don't believe this patch causes any fundamental 
change in behavior, other than allowing insecure clients/servers to perform 
simple reduction of a principal in the absence of explicit rules.  A few 
observations:
* Within the UGI, the short name appears to be computed in the ctor for use as 
a key in the group name cache.  That's typically not required on the 
client side.  Usage external to the UGI should experience no change in 
behavior.  In either case, I don't think new issues have been introduced?
* An IPC connection will (already) pass the full principal in the connection 
context.  It's up to the insecure server to reduce the principal to a short 
name.  I haven't tested, but without this patch, I think an insecure server 
will choke on the kerberos principal unless configured with rules.
* The token operations already inconsistently pass a short name or a principal 
for the various fields.  That's wrong, and it should always be the full 
principal, but it's a separate issue to fix.

Is there somewhere that I've made something "worse"?  Or are you referring to 
pre-existing issues that should be addressed on another jira?
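For readers following along, the "simple reduction of a principal in the absence of explicit rules" discussed above can be sketched with plain string handling (illustrative only; Hadoop's real logic lives in KerberosName and the hadoop.security.auth_to_local rules):

```java
public class ShortNameSketch {
    // Default-style reduction: strip the realm, then keep only the first
    // component, so "yarn/localhost@LOCALREALM" becomes "yarn".
    static String shortName(String principal) {
        int at = principal.indexOf('@');
        String noRealm = at >= 0 ? principal.substring(0, at) : principal;
        int slash = noRealm.indexOf('/');
        return slash >= 0 ? noRealm.substring(0, slash) : noRealm;
    }

    public static void main(String[] args) {
        System.out.println(shortName("yarn/localhost@LOCALREALM")); // yarn
        System.out.println(shortName("hdfs@EXAMPLE.COM"));          // hdfs
    }
}
```

Without any rule configured, KerberosName instead throws NoMatchingRule, which is the failure shown in the issue description below.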

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localh

[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602289#comment-13602289
 ] 

Hudson commented on HADOOP-9397:


Integrated in Hadoop-Mapreduce-trunk #1372 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1372/])
HADOOP-9397. Incremental dist tar build fails. Contributed by Chris Nauroth 
(Revision 1456212)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456212
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-dist/pom.xml




[jira] [Commented] (HADOOP-9326) maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-common: There a test failures.

2013-03-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602302#comment-13602302
 ] 

Suresh Srinivas commented on HADOOP-9326:
-

[~aymenjla] http://hadoop.apache.org/mailing_lists.html#Developers is the 
appropriate mailing list.


> maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-common: 
> There a test failures.
> -
>
> Key: HADOOP-9326
> URL: https://issues.apache.org/jira/browse/HADOOP-9326
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, test
> Environment: For information, I took Hadoop from Git and I run it on 
> Mac OS 
>Reporter: JLASSI Aymen
>   Original Estimate: 500h
>  Remaining Estimate: 500h
>
> I'd like to compile Hadoop from source code. When I launch the test step, I 
> get the failures described below; when I skip the test step and go straight 
> to the package step, I hit the same problem, with the same description:
> Results :
> Failed tests:   testFailFullyDelete(org.apache.hadoop.fs.TestFileUtil): The 
> directory xSubDir *should* not have been deleted. expected: but 
> was:
>   testFailFullyDeleteContents(org.apache.hadoop.fs.TestFileUtil): The 
> directory xSubDir *should* not have been deleted. expected: but 
> was:
>   
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem):
>  Should throw IOException
>   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
> build/test/temp/RELATIVE1 in 
> build/test/temp/RELATIVE0/block4197707426846287299.tmp - FAILED!
>   
> testROBufferDirAndRWBufferDir[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
> Checking for build/test/temp/RELATIVE2 in 
> build/test/temp/RELATIVE1/block138767728739012230.tmp - FAILED!
>   testRWBufferDirBecomesRO[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
> Checking for build/test/temp/RELATIVE3 in 
> build/test/temp/RELATIVE4/block4888615109050601773.tmp - FAILED!
>   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
>  in 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block4663369813226761504.tmp
>  - FAILED!
>   
> testROBufferDirAndRWBufferDir[1](org.apache.hadoop.fs.TestLocalDirAllocator): 
> Checking for 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
>  in 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block2846944239985650460.tmp
>  - FAILED!
>   testRWBufferDirBecomesRO[1](org.apache.hadoop.fs.TestLocalDirAllocator): 
> Checking for 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3
>  in 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block4367331619344952181.tmp
>  - FAILED!
>   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
> file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
>  in 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block5687619346377173125.tmp
>  - FAILED!
>   
> testROBufferDirAndRWBufferDir[2](org.apache.hadoop.fs.TestLocalDirAllocator): 
> Checking for 
> file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
>  in 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block2235209534902942511.tmp
>  - FAILED!
>   testRWBufferDirBecomesRO[2](org.apache.hadoop.fs.TestLocalDirAllocator): 
> Checking for 
> file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3
>  in 
> /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block6994640486900109274.tmp
>  - FAILED!
>   testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)
>   
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem):
>  Should throw IOException
>   testCount(org.apache.hadoop.metrics2.util.TestSampleQuantiles): 
> expected:<50[.00 %ile +/- 5.00%: 1337(..)
>   testCheckDir_notDir_local(org.apache.hadoop.util.

[jira] [Commented] (HADOOP-9387) TestDFVariations fails on Windows after the merge

2013-03-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602308#comment-13602308
 ] 

Chris Nauroth commented on HADOOP-9387:
---

+1 for the current patch.

Sorry, Ivan.  I saw removal of the now-unused {{OSType}} and assumed that your 
patch had also removed the AIX check.  Since that part of the change was 
already addressed in HADOOP-9337, there is no concern.




[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602358#comment-13602358
 ] 

Daryn Sharp commented on HADOOP-9299:
-

Oh, I see the problem regarding skipping rules.  Hadoop-auth is explicitly 
setting the rules itself.  I'm surprised I missed that since I scrutinized the 
code after seeing that seemingly odd condition.  The interactions seem a bit 
fragile, but I'll put it back in. 

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.
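For context, Hadoop's short-name resolution applies auth_to_local-style rules to the Kerberos principal, and the DEFAULT rule only matches principals in the configured default realm; so yarn/localhost@LOCALREALM finds no matching rule unless the default realm in /etc/krb5.conf agrees. A much simplified sketch of that behavior (an illustration, not the actual KerberosName code):

```java
public class ShortNameSketch {
  // Simplified DEFAULT-rule behavior: a principal name@REALM (or
  // service/host@REALM) maps to its first component only when REALM
  // equals the configured default realm; otherwise no rule applies.
  static String getShortName(String principal, String defaultRealm) {
    int at = principal.lastIndexOf('@');
    String realm = at < 0 ? "" : principal.substring(at + 1);
    String name = at < 0 ? principal : principal.substring(0, at);
    if (!realm.equals(defaultRealm)) {
      throw new IllegalArgumentException("No rules applied to " + principal);
    }
    int slash = name.indexOf('/');
    return slash < 0 ? name : name.substring(0, slash);
  }

  public static void main(String[] args) {
    // Once /etc/krb5.conf makes LOCALHOST the default realm, resolution
    // succeeds -- which matches why the hack above made the issue go away.
    System.out.println(getShortName("yarn/localhost@LOCALHOST", "LOCALHOST")); // prints "yarn"
  }
}
```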




[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-03-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602388#comment-13602388
 ] 

Steve Loughran commented on HADOOP-9258:


I thought I'd run all the tests against S3 & S3n, but trying today on S3 I see 
the same stack trace. Cancelling the patch until I know more.

> Add stricter tests to FileSystemContractTestBase
> 
>
> Key: HADOOP-9258
> URL: https://issues.apache.org/jira/browse/HADOOP-9258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 1.1.1, 2.0.3-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9528-2.patch, HADOOP-9528-3.patch, 
> HADOOP-9528-4.patch, HADOOP-9528-5.patch, HADOOP-9528-6.patch, 
> HADOOP-9528-7.patch, HADOOP-9528.patch
>
>
> The File System Contract contains implicit assumptions that aren't checked in 
> the contract test base. Add more tests to define the contract's assumptions 
> more rigorously for those filesystems that are tested by this (not Local, BTW)
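One example of the kind of implicit contract assumption such stricter tests can pin down: creating a directory at a path where a file already exists must fail. Sketched here against the local filesystem via java.io.File for self-containment; the real tests exercise org.apache.hadoop.fs.FileSystem instead:

```java
import java.io.File;
import java.io.IOException;

public class ContractAssumptionSketch {
  // A filesystem-contract style check: mkdir over an existing file must
  // not succeed. The contract test would assert this holds for every
  // FileSystem implementation under test.
  static boolean mkdirOverFileFails() throws IOException {
    File f = File.createTempFile("contract", ".tmp");
    f.deleteOnExit();
    boolean madeDir = f.mkdir();   // attempt to create a dir at the file's path
    return !madeDir;               // the contract expects this to fail
  }

  public static void main(String[] args) throws IOException {
    System.out.println(mkdirOverFileFails());  // prints "true"
  }
}
```

Blobstores like S3/S3n are exactly where such assumptions tend to break, which is consistent with the S3 stack traces mentioned above.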



[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-03-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Status: Open  (was: Patch Available)

> Add stricter tests to FileSystemContractTestBase
> 
>
> Key: HADOOP-9258
> URL: https://issues.apache.org/jira/browse/HADOOP-9258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.3-alpha, 1.1.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9528-2.patch, HADOOP-9528-3.patch, 
> HADOOP-9528-4.patch, HADOOP-9528-5.patch, HADOOP-9528-6.patch, 
> HADOOP-9528-7.patch, HADOOP-9528.patch
>
>
> The File System Contract contains implicit assumptions that aren't checked in 
> the contract test base. Add more tests to define the contract's assumptions 
> more rigorously for those filesystems that are tested by this (not Local, BTW)



[jira] [Commented] (HADOOP-9387) TestDFVariations fails on Windows after the merge

2013-03-14 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602405#comment-13602405
 ] 

Ivan Mitic commented on HADOOP-9387:


No worries, thanks for the review!

> TestDFVariations fails on Windows after the merge
> -
>
> Key: HADOOP-9387
> URL: https://issues.apache.org/jira/browse/HADOOP-9387
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-9387.trunk.2.patch, HADOOP-9387.trunk.patch
>
>
> Test fails with the following errors:
> {code}
> Running org.apache.hadoop.fs.TestDFVariations
> Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.186 sec <<< 
> FAILURE!
> testOSParsing(org.apache.hadoop.fs.TestDFVariations)  Time elapsed: 109 sec  
> <<< ERROR!
> java.io.IOException: Fewer lines of output than expected
> at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
> at org.apache.hadoop.fs.DF.getMount(DF.java:150)
> at 
> org.apache.hadoop.fs.TestDFVariations.testOSParsing(TestDFVariations.java:59)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> testGetMountCurrentDirectory(org.apache.hadoop.fs.TestDFVariations)  Time 
> elapsed: 1 sec  <<< ERROR!
> java.io.IOException: Fewer lines of output than expected
> at org.apache.hadoop.fs.DF.parseOutput(DF.java:203)
> at org.apache.hadoop.fs.DF.getMount(DF.java:150)
> at 
> org.apache.hadoop.fs.TestDFVariations.testGetMountCurrentDirectory(TestDFVariations.java:139)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> {code}



[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-03-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Status: Patch Available  (was: Open)

Resubmitting - I think it is S3-from-the-UK that isn't working today, as other 
tools of mine are failing, and this test did work yesterday.

> Add stricter tests to FileSystemContractTestBase
> 
>
> Key: HADOOP-9258
> URL: https://issues.apache.org/jira/browse/HADOOP-9258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.3-alpha, 1.1.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9528-2.patch, HADOOP-9528-3.patch, 
> HADOOP-9528-4.patch, HADOOP-9528-5.patch, HADOOP-9528-6.patch, 
> HADOOP-9528-7.patch, HADOOP-9528.patch
>
>
> The File System Contract contains implicit assumptions that aren't checked in 
> the contract test base. Add more tests to define the contract's assumptions 
> more rigorously for those filesystems that are tested by this (not Local, BTW)



[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-14 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602438#comment-13602438
 ] 

Alejandro Abdelnur commented on HADOOP-9299:


On the rules setting, you got it.

On using the kerberos principal short name instead of the uid for a non-secure 
cluster, you are right that you are not changing behavior or making it worse; 
that is the current behavior. Still, it seems wrong to me, as it would mean 
that a client could end up as a different user on a non-secure cluster 
depending on whether it is kinit-ed. We should not fix this as part of this 
ticket, but I think we should fix it.



> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.



[jira] [Updated] (HADOOP-9400) Investigate emulating sticky bit directory permissions on Windows

2013-03-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9400:
--

Priority: Minor  (was: Major)

> Investigate emulating sticky bit directory permissions on Windows
> -
>
> Key: HADOOP-9400
> URL: https://issues.apache.org/jira/browse/HADOOP-9400
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
> Environment: Windows
>Reporter: Arpit Agarwal
>Priority: Minor
>  Labels: windows
> Fix For: 3.0.0
>
>
> It should be possible to emulate sticky bit permissions on Windows.
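The POSIX sticky-bit rule being emulated: in a directory with the sticky bit set, an entry may be deleted or renamed only by the entry's owner, the directory's owner, or a privileged user. A minimal sketch of that check (hypothetical names; a statement of the semantics, not a proposed Windows implementation):

```java
public class StickyBitSketch {
  // POSIX sticky-bit delete check for an entry inside a directory whose
  // permission bits include the sticky bit (octal 01000).
  static boolean canDelete(String caller, String entryOwner,
                           String dirOwner, boolean dirSticky) {
    if (!dirSticky) {
      return true;  // without the sticky bit, write access on the dir suffices
    }
    return caller.equals(entryOwner)
        || caller.equals(dirOwner)
        || caller.equals("root");  // privileged user
  }

  public static void main(String[] args) {
    System.out.println(canDelete("alice", "bob", "carol", true));  // prints "false"
    System.out.println(canDelete("bob", "bob", "carol", true));    // prints "true"
  }
}
```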



[jira] [Updated] (HADOOP-9400) Investigate emulating sticky bit directory permissions on Windows

2013-03-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9400:
--

Issue Type: Improvement  (was: Bug)

> Investigate emulating sticky bit directory permissions on Windows
> -
>
> Key: HADOOP-9400
> URL: https://issues.apache.org/jira/browse/HADOOP-9400
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
> Environment: Windows
>Reporter: Arpit Agarwal
>  Labels: windows
> Fix For: 3.0.0
>
>
> It should be possible to emulate sticky bit permissions on Windows.



[jira] [Moved] (HADOOP-9405) TestGridmixSummary#testExecutionSummarizer is broken

2013-03-14 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers moved HDFS-4599 to HADOOP-9405:
--

  Component/s: (was: tools)
   tools
   test
Affects Version/s: (was: 3.0.0)
   3.0.0
  Key: HADOOP-9405  (was: HDFS-4599)
  Project: Hadoop Common  (was: Hadoop HDFS)

> TestGridmixSummary#testExecutionSummarizer is broken
> 
>
> Key: HADOOP-9405
> URL: https://issues.apache.org/jira/browse/HADOOP-9405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test, tools
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-4599-1.patch
>
>
> HADOOP-9252 changed how human readable numbers are printed, and required 
> updating a number of test cases. This one was missed because the Jenkins 
> precommit job apparently isn't running the tests in hadoop-tools.
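The change in question concerned how human-readable quantities are rendered, so the test's expected strings had to be updated to match. The general shape of such formatting can be sketched as follows (an illustration only, not the actual Hadoop StringUtils code):

```java
import java.util.Locale;

public class HumanReadableSketch {
  private static final String[] UNITS = {"", " K", " M", " G", " T"};

  // Scale a count down by powers of 1024 and print one decimal place,
  // e.g. 1536 -> "1.5 K". Small values are printed unscaled.
  static String humanReadable(long n) {
    double v = n;
    int unit = 0;
    while (v >= 1024 && unit < UNITS.length - 1) {
      v /= 1024;
      unit++;
    }
    return unit == 0 ? Long.toString(n)
                     : String.format(Locale.ROOT, "%.1f%s", v, UNITS[unit]);
  }

  public static void main(String[] args) {
    System.out.println(humanReadable(1536));  // prints "1.5 K"
  }
}
```

Any change to the unit suffixes or rounding in such a formatter silently invalidates test fixtures that hard-code the output, which is how this test broke.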



[jira] [Updated] (HADOOP-9405) TestGridmixSummary#testExecutionSummarizer is broken

2013-03-14 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9405:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk.

Thanks a lot for the contribution, Andrew.

> TestGridmixSummary#testExecutionSummarizer is broken
> 
>
> Key: HADOOP-9405
> URL: https://issues.apache.org/jira/browse/HADOOP-9405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test, tools
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-4599-1.patch
>
>
> HADOOP-9252 changed how human readable numbers are printed, and required 
> updating a number of test cases. This one was missed because the Jenkins 
> precommit job apparently isn't running the tests in hadoop-tools.



[jira] [Commented] (HADOOP-9405) TestGridmixSummary#testExecutionSummarizer is broken

2013-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602670#comment-13602670
 ] 

Hadoop QA commented on HADOOP-9405:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573562/hdfs-4599-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 11 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-gridmix.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2327//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2327//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-gridmix.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2327//console

This message is automatically generated.

> TestGridmixSummary#testExecutionSummarizer is broken
> 
>
> Key: HADOOP-9405
> URL: https://issues.apache.org/jira/browse/HADOOP-9405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test, tools
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-4599-1.patch
>
>
> HADOOP-9252 changed how human readable numbers are printed, and required 
> updating a number of test cases. This one was missed because the Jenkins 
> precommit job apparently isn't running the tests in hadoop-tools.



[jira] [Commented] (HADOOP-9405) TestGridmixSummary#testExecutionSummarizer is broken

2013-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602671#comment-13602671
 ] 

Hudson commented on HADOOP-9405:


Integrated in Hadoop-trunk-Commit #3474 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3474/])
HADOOP-9405. TestGridmixSummary#testExecutionSummarizer is broken. 
Contributed by Andrew Wang. (Revision 1456639)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456639
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestGridmixSummary.java


> TestGridmixSummary#testExecutionSummarizer is broken
> 
>
> Key: HADOOP-9405
> URL: https://issues.apache.org/jira/browse/HADOOP-9405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test, tools
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-4599-1.patch
>
>
> HADOOP-9252 changed how human readable numbers are printed, and required 
> updating a number of test cases. This one was missed because the Jenkins 
> precommit job apparently isn't running the tests in hadoop-tools.



[jira] [Created] (HADOOP-9406) hadoop-client leaks dependency on JDK tools jar

2013-03-14 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-9406:
--

 Summary: hadoop-client leaks dependency on JDK tools jar
 Key: HADOOP-9406
 URL: https://issues.apache.org/jira/browse/HADOOP-9406
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-alpha


hadoop-client leaks out the JDK tools jar as a dependency.

The JDK tools jar is defined as a system dependency for 
hadoop-annotations/jdiff/javadoc purposes; without it, javadoc generation fails.

The problem is that, the way it is defined now, this dependency ends up 
leaking to hadoop-client, and downstream projects that depend on hadoop-client 
may end up including/bundling the JDK tools JAR.
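For readers unfamiliar with the pattern, a system-scope tools.jar dependency is typically declared along these lines (a sketch only; the groupId/version shown are conventional placeholders, not the exact hadoop-project entry):

```xml
<dependency>
  <groupId>jdk.tools</groupId>
  <artifactId>jdk.tools</artifactId>
  <version>1.6</version>
  <scope>system</scope>
  <systemPath>${java.home}/../lib/tools.jar</systemPath>
  <!-- Keeping such a dependency out of modules that feed hadoop-client
       (or excluding it there) prevents it from surfacing downstream. -->
</dependency>
```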



[jira] [Updated] (HADOOP-9406) hadoop-client leaks dependency on JDK tools jar

2013-03-14 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9406:
---

Attachment: HADOOP-9406.patch

Verified the tools dependency leak is gone from hadoop-client.

Created full binary and source builds and site documentation successfully.

> hadoop-client leaks dependency on JDK tools jar
> ---
>
> Key: HADOOP-9406
> URL: https://issues.apache.org/jira/browse/HADOOP-9406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.4-alpha
>
> Attachments: HADOOP-9406.patch
>
>
> hadoop-client leaks out JDK tools jar as dependency. 
> JDK tools jar is defined as a system dependency for 
> hadoop-annotation/jdiff/javadocs purposes, if not done javadoc generation 
> fails.
> The problem is that in the way it is defined now, this dependency ends up 
> leaking to hadoop-client and downstream projects that depend on hadoop-client 
> may end up including/bundling JDK tools JAR.



[jira] [Updated] (HADOOP-9406) hadoop-client leaks dependency on JDK tools jar

2013-03-14 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9406:
---

Status: Patch Available  (was: Open)

> hadoop-client leaks dependency on JDK tools jar
> ---
>
> Key: HADOOP-9406
> URL: https://issues.apache.org/jira/browse/HADOOP-9406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.4-alpha
>
> Attachments: HADOOP-9406.patch
>
>
> hadoop-client leaks out JDK tools jar as dependency. 
> JDK tools jar is defined as a system dependency for 
> hadoop-annotation/jdiff/javadocs purposes, if not done javadoc generation 
> fails.
> The problem is that in the way it is defined now, this dependency ends up 
> leaking to hadoop-client and downstream projects that depend on hadoop-client 
> may end up including/bundling JDK tools JAR.



[jira] [Commented] (HADOOP-9380) Add totalLength to rpc response

2013-03-14 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602708#comment-13602708
 ] 

Luke Lu commented on HADOOP-9380:
-

Thanks for the patch, Sanjay! A few issues for the v2 patch:

# _All_ responses (not just regular rpc responses) need to have a total length 
prefix (symmetric to the server request handling), i.e. we also need it for 
bad-version, serialization, sasl, etc. responses. Otherwise, we still can't 
have non-blocking clients.
# We should use exceptions instead of asserts for total length checking on the 
client side, as a mismatch can happen due to a server fault, network 
disconnect, etc.
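The framing being asked for can be sketched as: every response is written as a 4-byte total length followed by the payload, and the client raises an exception (not an assert) when the framed bytes cannot be read in full. A simplified, self-contained illustration, not the actual Hadoop IPC code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FramedResponseSketch {
  // Server side: prefix every response payload with its total length so a
  // non-blocking client always knows how many bytes to expect.
  static byte[] frame(byte[] payload) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(buf);
    out.writeInt(payload.length);
    out.write(payload);
    return buf.toByteArray();
  }

  // Client side: read the declared length, then exactly that many bytes.
  // readFully throws EOFException on truncation -- an exception rather
  // than an assert, since short reads can come from server faults or
  // network disconnects.
  static byte[] unframe(byte[] wire) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
    int total = in.readInt();
    byte[] payload = new byte[total];
    in.readFully(payload);
    return payload;
  }

  public static void main(String[] args) throws IOException {
    byte[] wire = frame("ok".getBytes("UTF-8"));
    System.out.println(new String(unframe(wire), "UTF-8"));  // prints "ok"
  }
}
```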

> Add totalLength to rpc response
> ---
>
> Key: HADOOP-9380
> URL: https://issues.apache.org/jira/browse/HADOOP-9380
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
> Attachments: HADOOP-9380-2.patch, HADOOP-9380.patch
>
>




[jira] [Updated] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-14 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9299:


Attachment: HADOOP-9299.patch

Restored prior behavior of rule setting, and added a bunch of tests for it.

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch, HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.
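The {{NoMatchingRule}} failure above comes from {{KerberosName.getShortName}} finding no auth_to_local rule for a principal outside the default realm. A minimal, self-contained sketch of that behavior (an illustrative simplification, not Hadoop's actual {{KerberosName}} code; the class name and the single DEFAULT-style rule are assumptions):

```java
// Sketch of DEFAULT-rule short-name resolution: a principal maps to its
// first component only when its realm matches the configured default realm;
// otherwise no rule applies and resolution fails, as in the stack trace above.
public class KerberosShortName {
    static String getShortName(String principal, String defaultRealm) {
        int at = principal.indexOf('@');
        String realm = at < 0 ? "" : principal.substring(at + 1);
        String base = at < 0 ? principal : principal.substring(0, at);
        if (!realm.equals(defaultRealm)) {
            // Mirrors KerberosName$NoMatchingRule for out-of-realm principals.
            throw new IllegalArgumentException("No rules applied to " + principal);
        }
        return base.split("/")[0]; // "yarn/localhost" -> "yarn"
    }

    public static void main(String[] args) {
        System.out.println(getShortName("yarn/localhost@LOCALHOST", "LOCALHOST")); // yarn
        try {
            getShortName("yarn/localhost@LOCALREALM", "LOCALHOST");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // No rules applied to ...
        }
    }
}
```

This is consistent with the workaround described: hacking /etc/krb5.conf changes which realm is treated as the default, so the principal suddenly resolves, even though Kerberos auth was never configured for Hadoop.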

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9406) hadoop-client leaks dependency on JDK tools jar

2013-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602739#comment-13602739
 ] 

Hadoop QA commented on HADOOP-9406:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573762/HADOOP-9406.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-client hadoop-common-project/hadoop-annotations.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2328//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2328//console

This message is automatically generated.

> hadoop-client leaks dependency on JDK tools jar
> ---
>
> Key: HADOOP-9406
> URL: https://issues.apache.org/jira/browse/HADOOP-9406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.4-alpha
>
> Attachments: HADOOP-9406.patch
>
>
> hadoop-client leaks out JDK tools jar as dependency. 
> JDK tools jar is defined as a system dependency for 
> hadoop-annotation/jdiff/javadocs purposes, if not done javadoc generation 
> fails.
> The problem is that in the way it is defined now, this dependency ends up 
> leaking to hadoop-client and downstream projects that depend on hadoop-client 
> may end up including/bundling JDK tools JAR.
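Pending the fix, a downstream project can shield itself with an explicit exclusion. This is a hedged sketch of the common workaround, not the HADOOP-9406 patch itself (Hadoop declares the tools jar under the {{jdk.tools:jdk.tools}} coordinates):

```xml
<!-- Downstream workaround sketch: exclude the leaked JDK tools jar
     when depending on hadoop-client. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.0.3-alpha</version>
  <exclusions>
    <exclusion>
      <groupId>jdk.tools</groupId>
      <artifactId>jdk.tools</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```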



[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602738#comment-13602738
 ] 

Daryn Sharp commented on HADOOP-9299:
-

I can see the kinit and non-secure issue going either way.  I'd actually lean 
towards treating the TGT as the source of truth if one is present.  Having the 
user appear as the TGT user on a secure cluster, but as the unix user on an 
insecure cluster, may be a bit jarring.  Or if security is enabled and then 
disabled on a cluster, the user may be surprised that the TGT user is no longer 
their identity.

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch, HADOOP-9299.patch
>
>



[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-14 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602778#comment-13602778
 ] 

Alejandro Abdelnur commented on HADOOP-9299:


Daryn, the patch looks good, but I still don't see the need for the 
skipRulesSetting to overrideNameRules logic change. Is that necessary? I don't 
see how the new || condition would exercise a different selection than the 
previous logic.

On the kerberos principal short name and uid, I don't know what to make of it; 
I don't think the current behavior is entirely correct, but I see your point.

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch, HADOOP-9299.patch
>
>



[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602792#comment-13602792
 ] 

Hadoop QA commented on HADOOP-9299:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573765/HADOOP-9299.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2329//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2329//console

This message is automatically generated.

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch, HADOOP-9299.patch
>
>

[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602803#comment-13602803
 ] 

Daryn Sharp commented on HADOOP-9299:
-

The boolean & corresponding logic change aren't strictly necessary.  The 
dispersed logic for setting rules was a bit puzzling when I started on the 
jira.  I thought it would be cleaner and easier to understand if all the rule 
logic were encapsulated within one method.

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch, HADOOP-9299.patch
>
>



[jira] [Commented] (HADOOP-9406) hadoop-client leaks dependency on JDK tools jar

2013-03-14 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602824#comment-13602824
 ] 

Aaron T. Myers commented on HADOOP-9406:


I'm no Maven expert, but in the abstract this change makes sense to me.

+1

> hadoop-client leaks dependency on JDK tools jar
> ---
>
> Key: HADOOP-9406
> URL: https://issues.apache.org/jira/browse/HADOOP-9406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.4-alpha
>
> Attachments: HADOOP-9406.patch
>
>



[jira] [Updated] (HADOOP-8989) hadoop dfs -find feature

2013-03-14 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-8989:
---

Attachment: HADOOP-8989.patch

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
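If implemented, usage analogous to POSIX find might look like the following (the syntax is entirely hypothetical; only the option names are taken from the list above):

```text
# hypothetical: files under /user/alice modified in the last day
hadoop fs -find /user/alice -type f -mtime -1

# hypothetical: world-writable directories, null-terminated for xargs -0
hadoop fs -find / -type d -perm 777 -print0 | xargs -0 ...
```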



[jira] [Moved] (HADOOP-9407) commons-daemon 1.0.3 dependency has bad group id causing build issues

2013-03-14 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas moved HDFS-4497 to HADOOP-9407:
---

  Component/s: (was: build)
   build
Affects Version/s: (was: 2.0.3-alpha)
   2.0.3-alpha
  Key: HADOOP-9407  (was: HDFS-4497)
  Project: Hadoop Common  (was: Hadoop HDFS)

> commons-daemon 1.0.3 dependency has bad group id causing build issues
> -
>
> Key: HADOOP-9407
> URL: https://issues.apache.org/jira/browse/HADOOP-9407
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HDFS-4497.patch
>
>
> The commons-daemon dependency of the hadoop-hdfs module has been at version 
> 1.0.3 for a while. However, 1.0.3 has a pretty well-known groupId error in 
> its pom ("org.apache.commons" as opposed to "commons-daemon"). This problem 
> has since been corrected on commons-daemon starting 1.0.4.
> This causes build problems for many who depend on hadoop-hdfs directly and 
> indirectly, however. Maven can skip over this metadata inconsistency. But 
> other less forgiving build systems such as ivy and gradle have much harder 
> time working around this problem. For example, in gradle, pretty much the 
> only obvious way to work around this is to override this dependency version.
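The Gradle override the description alludes to can be sketched as follows (a hedged illustration; the forced version just needs to be one with a corrected pom, 1.0.4 or later):

```groovy
// Workaround sketch: force a commons-daemon version whose pom declares the
// correct "commons-daemon" groupId, overriding hadoop-hdfs's transitive 1.0.3.
configurations.all {
    resolutionStrategy {
        force 'commons-daemon:commons-daemon:1.0.4'
    }
}
```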



[jira] [Updated] (HADOOP-9407) commons-daemon 1.0.3 dependency has bad group id causing build issues

2013-03-14 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9407:


   Resolution: Fixed
Fix Version/s: 2.0.5-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

[~sjlee0] Added you as a contributor and assigned the jira to you. I do not 
think you will be able to commit the patch though.

I committed the patch to branch-2 and trunk. Thank you Sangjin.


> commons-daemon 1.0.3 dependency has bad group id causing build issues
> -
>
> Key: HADOOP-9407
> URL: https://issues.apache.org/jira/browse/HADOOP-9407
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.0.5-beta
>
> Attachments: HDFS-4497.patch
>
>



[jira] [Commented] (HADOOP-9407) commons-daemon 1.0.3 dependency has bad group id causing build issues

2013-03-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602850#comment-13602850
 ] 

Suresh Srinivas commented on HADOOP-9407:
-

Also thanks to Lohit for the review and testing.

> commons-daemon 1.0.3 dependency has bad group id causing build issues
> -
>
> Key: HADOOP-9407
> URL: https://issues.apache.org/jira/browse/HADOOP-9407
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.0.5-beta
>
> Attachments: HDFS-4497.patch
>
>



[jira] [Commented] (HADOOP-9407) commons-daemon 1.0.3 dependency has bad group id causing build issues

2013-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602857#comment-13602857
 ] 

Hudson commented on HADOOP-9407:


Integrated in Hadoop-trunk-Commit #3475 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3475/])
HADOOP-9407. commons-daemon 1.0.3 dependency has bad group id causing build 
issues. Contributed by Sangjin Lee. (Revision 1456704)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456704
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


> commons-daemon 1.0.3 dependency has bad group id causing build issues
> -
>
> Key: HADOOP-9407
> URL: https://issues.apache.org/jira/browse/HADOOP-9407
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.0.5-beta
>
> Attachments: HDFS-4497.patch
>
>



[jira] [Updated] (HADOOP-9371) Define Semantics of FileSystem and FileContext more rigorously

2013-03-14 Thread Mike Liddell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Liddell updated HADOOP-9371:
-

Attachment: HADOOP-9361.2.patch

Added HADOOP-9361.2.patch with minor edits.   
 - additional assumptions
 - changed detail for fs.delete("/")

This patch was created via svn diff and is not a delta over the original patch.

Please let me know if the patch format is incorrect.

> Define Semantics of FileSystem and FileContext more rigorously
> --
>
> Key: HADOOP-9371
> URL: https://issues.apache.org/jira/browse/HADOOP-9371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 1.2.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361.2.patch, HADOOP-9361.patch, 
> HadoopFilesystemContract.pdf
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The semantics of {{FileSystem}} and {{FileContext}} are not completely 
> defined in terms of 
> # core expectations of a filesystem
> # consistency requirements.
> # concurrency requirements.
> # minimum scale limits
> Furthermore, methods are not defined strictly enough in terms of their 
> outcomes and failure modes.
> The requirements and method semantics should be defined more strictly.



[jira] [Commented] (HADOOP-9407) commons-daemon 1.0.3 dependency has bad group id causing build issues

2013-03-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602875#comment-13602875
 ] 

Sangjin Lee commented on HADOOP-9407:
-

[~sureshms] Thanks!

> commons-daemon 1.0.3 dependency has bad group id causing build issues
> -
>
> Key: HADOOP-9407
> URL: https://issues.apache.org/jira/browse/HADOOP-9407
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.0.5-beta
>
> Attachments: HDFS-4497.patch
>
>
> The commons-daemon dependency of the hadoop-hdfs module has been at version 
> 1.0.3 for a while. However, 1.0.3 has a pretty well-known groupId error in 
> its pom ("org.apache.commons" as opposed to "commons-daemon"). This problem 
> has since been corrected in commons-daemon starting with 1.0.4.
> This causes build problems for many who depend on hadoop-hdfs directly and 
> indirectly, however. Maven can skip over this metadata inconsistency. But 
> other less forgiving build systems such as ivy and gradle have a much harder 
> time working around this problem. For example, in gradle, pretty much the 
> only obvious way to work around this is to override this dependency version.



[jira] [Updated] (HADOOP-9301) hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie & HttpFS

2013-03-14 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9301:
---

Status: Patch Available  (was: Open)

I've done some limited testing in Oozie (due to other issues popping up), and 
the new hadoop-client seems to be right.

> hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie & 
> HttpFS
> 
>
> Key: HADOOP-9301
> URL: https://issues.apache.org/jira/browse/HADOOP-9301
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.0.4-alpha
>
> Attachments: HADOOP-9301.patch
>
>
> Here's how to reproduce:
> {noformat}
> $ cd hadoop-client
> $ mvn dependency:tree | egrep 'jsp|jetty'
> [INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26.cloudera.2:compile
> [INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26.cloudera.2:compile
> [INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:compile
> {noformat}
> This has a potential for completely screwing up clients like Oozie, etc – 
> hence a blocker.
> It seems that while common excludes those JARs, they are sneaking in via 
> hdfs, we need to exclude them too.
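The exclusion approach the description points at ("we need to exclude them too") might look like the pom fragment below. The artifact coordinates are taken from the dependency:tree output above; the exact shape of the committed patch may differ:

```xml
<!-- Hedged sketch: exclude the servlet/jsp/jetty artifacts that
     hadoop-hdfs would otherwise pull into hadoop-client consumers. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>jetty</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>jetty-util</artifactId>
    </exclusion>
    <exclusion>
      <groupId>javax.servlet.jsp</groupId>
      <artifactId>jsp-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```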



[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2013-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602935#comment-13602935
 ] 

Hadoop QA commented on HADOOP-8989:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12573783/HADOOP-8989.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 42 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2330//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2330//console

This message is automatically generated.

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
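The "lsr then filter" workaround users script today can be sketched like this. The sample listing lines are invented for illustration; the field layout follows the ls-style output of {{hadoop dfs -lsr}}:

```shell
# Hypothetical sample of `hadoop dfs -lsr` output; paths are invented.
lsr_output='drwxr-xr-x   - hdfs hdfs          0 2013-03-14 10:00 /user/demo
-rw-r--r--   3 hdfs hdfs       1024 2013-03-14 10:01 /user/demo/part-00000'

# Emulate a find-style `-type f`: keep rows whose mode string starts with
# '-' (plain files) and print the last field (the path).
printf '%s\n' "$lsr_output" | awk '$1 ~ /^-/ {print $NF}'
```

The selected paths could then be piped to {{xargs hadoop dfs -rm}}; an in-NameNode find would avoid shipping the entire listing to the client in the first place.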



[jira] [Commented] (HADOOP-9406) hadoop-client leaks dependency on JDK tools jar

2013-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602941#comment-13602941
 ] 

Hudson commented on HADOOP-9406:


Integrated in Hadoop-trunk-Commit #3476 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3476/])
HADOOP-9406. hadoop-client leaks dependency on JDK tools jar. (tucu) 
(Revision 1456729)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456729
Files : 
* /hadoop/common/trunk/hadoop-client/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-annotations/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


> hadoop-client leaks dependency on JDK tools jar
> ---
>
> Key: HADOOP-9406
> URL: https://issues.apache.org/jira/browse/HADOOP-9406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.4-alpha
>
> Attachments: HADOOP-9406.patch
>
>
> hadoop-client leaks out JDK tools jar as dependency. 
> JDK tools jar is defined as a system dependency for 
> hadoop-annotation/jdiff/javadocs purposes; without it, javadoc generation 
> fails.
> The problem is that in the way it is defined now, this dependency ends up 
> leaking to hadoop-client and downstream projects that depend on hadoop-client 
> may end up including/bundling JDK tools JAR.
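One common way to keep a system-scoped dependency from leaking to downstream consumers is to mark it optional (or confine it to a build-time profile). The fragment below is a sketch of that pattern; the coordinates and systemPath are illustrative, not necessarily what the committed patch does:

```xml
<!-- Hypothetical pom fragment: system scope plus <optional> keeps
     tools.jar available for javadoc/jdiff builds without propagating
     it transitively to projects depending on hadoop-client. -->
<dependency>
  <groupId>jdk.tools</groupId>
  <artifactId>jdk.tools</artifactId>
  <version>1.6</version>
  <scope>system</scope>
  <systemPath>${java.home}/../lib/tools.jar</systemPath>
  <optional>true</optional>
</dependency>
```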



[jira] [Updated] (HADOOP-9406) hadoop-client leaks dependency on JDK tools jar

2013-03-14 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9406:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> hadoop-client leaks dependency on JDK tools jar
> ---
>
> Key: HADOOP-9406
> URL: https://issues.apache.org/jira/browse/HADOOP-9406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.4-alpha
>
> Attachments: HADOOP-9406.patch
>
>
> hadoop-client leaks out JDK tools jar as dependency. 
> JDK tools jar is defined as a system dependency for 
> hadoop-annotation/jdiff/javadocs purposes; without it, javadoc generation 
> fails.
> The problem is that in the way it is defined now, this dependency ends up 
> leaking to hadoop-client and downstream projects that depend on hadoop-client 
> may end up including/bundling JDK tools JAR.



[jira] [Commented] (HADOOP-9301) hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie & HttpFS

2013-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13602971#comment-13602971
 ] 

Hadoop QA commented on HADOOP-9301:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12571486/HADOOP-9301.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-client hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2331//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2331//console

This message is automatically generated.

> hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie & 
> HttpFS
> 
>
> Key: HADOOP-9301
> URL: https://issues.apache.org/jira/browse/HADOOP-9301
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.0.4-alpha
>
> Attachments: HADOOP-9301.patch
>
>
> Here's how to reproduce:
> {noformat}
> $ cd hadoop-client
> $ mvn dependency:tree | egrep 'jsp|jetty'
> [INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26.cloudera.2:compile
> [INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26.cloudera.2:compile
> [INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:compile
> {noformat}
> This has a potential for completely screwing up clients like Oozie, etc – 
> hence a blocker.
> It seems that while common excludes those JARs, they are sneaking in via 
> hdfs, we need to exclude them too.
