[jira] [Commented] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741927#comment-13741927
 ] 

Hadoop QA commented on HADOOP-9880:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598343/HADOOP-9880.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2992//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2992//console

This message is automatically generated.

> SASL changes from HADOOP-9421 breaks Secure HA NN 
> --
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9880.patch
>
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741896#comment-13741896
 ] 

Binglin Chang commented on HADOOP-9877:
---

Thanks for the review and advice. Here is the new patch with tests; changes:
1. Add some comments to explain the change.
2. Change getFileStatus to getFileLinkStatus.
3. Add unquotePathComponent to fix the test failure in org.apache.hadoop.fs.TestPath.
4. Add a test in HDFS to check that .snapshot can be correctly globbed.

@Colin
I tried .reserved but it does not work currently, because:
getFileStatus("/.reserved") fails
getFileStatus("/.reserved/.inodes") fails
getFileStatus("/.reserved/.inodes/[id]") succeeds
This behavior is not consistent with /.snapshot; if you agree, I can file a bug 
to fix this.
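A toy model of the behavior being fixed (hypothetical classes, not the HDFS implementation): .snapshot resolves via an existence check even though listStatus never returns it, so a globber handling a literal path component must probe existence directly:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Toy filesystem where "/foo/.snapshot" exists but is hidden from
// directory listings, mirroring the HDFS behavior described above.
class ToyFs {
    List<String> listStatus(String dir) {
        // .snapshot is deliberately absent from the listing.
        return dir.equals("/foo") ? Arrays.asList("a", "b")
                                  : Collections.<String>emptyList();
    }

    boolean exists(String path) {
        // Stand-in for getFileStatus(): resolves hidden entries too.
        return path.equals("/foo") || path.equals("/foo/a")
            || path.equals("/foo/b") || path.equals("/foo/.snapshot");
    }

    // For a literal (non-wildcard) glob component, check existence
    // explicitly instead of trusting listStatus results.
    boolean resolveLiteral(String parent, String component) {
        return exists(parent + "/" + component);
    }
}
```

With this shape, a glob over "/foo/.snapshot" succeeds even though listStatus("/foo") never mentions the entry.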


 


> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case that the .snapshot dir does not show up in listStatus yet exists, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741887#comment-13741887
 ] 

Alejandro Abdelnur commented on HADOOP-9868:


[~kihwal], the scenario I'm thinking of does not require a compromised DNS. It 
is the following:

The real service listening at host1:1 with principal foo/host1 crashes.
A fake service starts listening at host1:1 with principal bar/host1, and it 
advertises that its principal is bar/host1.

Granted, this requires a bar/host1 keytab.

I have not looked at the patch, so I don't know what safeguards it has. Can you 
confirm the behavior in the following 2 scenarios?

1. Does the client accept an arbitrary principal without a service hostname, 
i.e. the server advertising 'bar' as the principal with no hostname?

I think the client should not accept this alternate.

2. Does the client accept an alternate advertised with a different shortname 
than the one used originally by the client? Using the example above: the 
original server principal submitted by the client is foo/host1, and the 
advertised server principal is bar/host1.

I think we should reject that scenario, as it would cover the case where the 
keytabs for foo/* principals are not compromised. So an alternate of 
foo/host1a would be ok, but bar/host1 would not.
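The two rules proposed above could be sketched like this (a hypothetical check, not the actual patch):

```java
// Toy validation of an advertised server principal against the one the
// client originally requested. Rule 1: reject a principal with no host
// part. Rule 2: reject a principal whose shortname differs from the
// requested one.
class PrincipalCheck {
    static boolean accept(String requested, String advertised) {
        String[] req = requested.split("/");
        String[] adv = advertised.split("/");
        if (adv.length < 2 || adv[1].isEmpty()) {
            return false;             // rule 1: must carry a service hostname
        }
        return req[0].equals(adv[0]); // rule 2: shortnames must match
    }
}
```

Under these rules, foo/host1a is an acceptable alternate for foo/host1, while bar/host1 and a bare bar are both rejected.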


> Server must not advertise kerberos realm
> 
>
> Key: HADOOP-9868
> URL: https://issues.apache.org/jira/browse/HADOOP-9868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9868.patch
>
>
> HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
> the kerberos service principal realm.  SASL clients and servers do not 
> support specifying a realm, so it must be removed from the advertisement.



[jira] [Updated] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9877:
--

Attachment: HADOOP-9877.v2.patch

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case that the .snapshot dir does not show up in listStatus yet exists, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Commented] (HADOOP-9745) TestZKFailoverController test fails

2013-08-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741874#comment-13741874
 ] 

Jean-Baptiste Onofré commented on HADOOP-9745:
--

Hi, yes, when I tested (on trunk), the tests passed. I am going to test again. 
If it fails again, it means that the bug is back or that it is a random 
(flaky) failure.

Let me take a new look at it. I will get back to you soon.

> TestZKFailoverController test fails
> ---
>
> Key: HADOOP-9745
> URL: https://issues.apache.org/jira/browse/HADOOP-9745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - 
> testGracefulFailoverFailBecomingActive(org.apache.hadoop.ha.TestZKFailoverController):
>  Did not fail to graceful failover when target failed to become active!
>   - 
> testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController):
>  expected:<1> but was:<0>
>   - 
> testGracefulFailoverFailBecomingStandbyAndFailFence(org.apache.hadoop.ha.TestZKFailoverController):
>  Failover should have failed when old node wont fence



[jira] [Commented] (HADOOP-9745) TestZKFailoverController test fails

2013-08-15 Thread saravana kumar periyasamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741869#comment-13741869
 ] 

saravana kumar periyasamy commented on HADOOP-9745:
---

[~j...@nanthrax.net] - it seems this issue is not resolved. How did you resolve 
it? Could you help us?

[~elizabetht] - FYI

> TestZKFailoverController test fails
> ---
>
> Key: HADOOP-9745
> URL: https://issues.apache.org/jira/browse/HADOOP-9745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - 
> testGracefulFailoverFailBecomingActive(org.apache.hadoop.ha.TestZKFailoverController):
>  Did not fail to graceful failover when target failed to become active!
>   - 
> testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController):
>  expected:<1> but was:<0>
>   - 
> testGracefulFailoverFailBecomingStandbyAndFailFence(org.apache.hadoop.ha.TestZKFailoverController):
>  Failover should have failed when old node wont fence



[jira] [Updated] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-15 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9880:


Status: Patch Available  (was: Open)

> SASL changes from HADOOP-9421 breaks Secure HA NN 
> --
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9880.patch
>
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.



[jira] [Updated] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-15 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9880:


Attachment: HADOOP-9880.patch

This is a slightly more appealing hack than HDFS-3083.

I've moved the call to the NN-specific {{checkAvailableForRead}} from the RPC 
layer into the NN's secret manager so it's only called when token auth is being 
performed.

However, the current method signatures only allow {{InvalidToken}} to be 
thrown.  So rather than change a bunch of signatures that may impact other 
projects, I've tunneled the {{StandbyException}} in the cause of an 
{{InvalidToken}}.  The RPC server will unwrap the nested exception.
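The tunneling described above can be sketched roughly as follows (class names are simplified stand-ins, not the actual Hadoop classes):

```java
// Sketch of tunneling a checked exception inside InvalidToken so that
// existing method signatures need not change; the RPC layer unwraps it.
// These are simplified stand-ins, not the real Hadoop classes.
class StandbyException extends Exception {
    StandbyException(String msg) { super(msg); }
}

class InvalidToken extends Exception {
    InvalidToken(String msg) { super(msg); }
}

class ToySecretManager {
    boolean standby = true;

    // The signature only permits InvalidToken, so the HA-state failure is
    // carried as its cause rather than thrown directly.
    byte[] retrievePassword() throws InvalidToken {
        if (standby) {
            InvalidToken wrapper = new InvalidToken("StandbyException");
            wrapper.initCause(
                new StandbyException("Operation category READ is not supported"));
            throw wrapper;
        }
        return new byte[0];
    }
}

class ToyRpcServer {
    // The server unwraps the nested exception before reporting it, so a
    // client can still distinguish a standby rejection from a bad token.
    static Throwable unwrap(InvalidToken e) {
        Throwable cause = e.getCause();
        return (cause instanceof StandbyException) ? cause : e;
    }
}
```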

> SASL changes from HADOOP-9421 breaks Secure HA NN 
> --
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9880.patch
>
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.



[jira] [Assigned] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-15 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp reassigned HADOOP-9880:
---

Assignee: Daryn Sharp

> SASL changes from HADOOP-9421 breaks Secure HA NN 
> --
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Blocker
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.



[jira] [Commented] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741781#comment-13741781
 ] 

Sandy Ryza commented on HADOOP-9879:


+1

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-15 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9880:
-

Summary: SASL changes from HADOOP-9421 breaks Secure HA NN   (was: RPC 
Server should not unconditionally create SaslServer with Token auth.)

> SASL changes from HADOOP-9421 breaks Secure HA NN 
> --
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Priority: Blocker
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.



[jira] [Commented] (HADOOP-9880) RPC Server should not unconditionally create SaslServer with Token auth.

2013-08-15 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741771#comment-13741771
 ] 

Sanjay Radia commented on HADOOP-9880:
--

We saw exactly the same error during a test this morning.
The two JIRAs that caused this problem are the recent HADOOP-9421 and the 
earlier HDFS-3083.

HADOOP-9421 improved the SASL protocol.
ZKFC uses Kerberos, but the server side initiates the token-based challenge 
just in case the client wants a token. As part of doing that, the server calls 
secretManager.checkAvailableForRead(), which fails because the NN is in 
standby.

It is really bizarre that there is a check for the server's state (active or 
standby) as part of SASL. This was introduced in HDFS-3083 to deal with a 
failover bug. In HDFS-3083, Aaron noted that he did not like the solution: 
"I'm not in love with this solution, as it leaks abstractions all over the 
place." The abstraction layer violation finally caught up with us.

It turns out that even prior to Daryn's HADOOP-9421, a similar problem could 
have occurred if the ZKFC had used Kerberos for the first connection and 
tokens for any subsequent connections.

An immediate fix is required for what HADOOP-9421 broke, but I believe we also 
need to fix the fix that HDFS-3083 introduced: the abstraction layer 
violations need to be cleaned up.
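The distinction between the broken and desired behaviors can be illustrated with a toy negotiation (hypothetical names; the real code paths are in the SASL server and the NN secret manager):

```java
// Toy contrast of the two behaviors discussed above: checking HA state
// eagerly while merely advertising TOKEN auth (which breaks the standby),
// versus deferring the check until the client actually selects TOKEN.
enum Auth { KERBEROS, TOKEN }

class ToyNameNode {
    boolean standby = true;

    void checkAvailableForRead() {
        if (standby) {
            throw new IllegalStateException(
                "Operation category READ is not supported");
        }
    }

    // Broken: the check runs unconditionally during negotiation, so a
    // Kerberos-only client (like ZKFC) still trips it on a standby NN.
    void negotiateEager() {
        checkAvailableForRead();
    }

    // Fixed: the check runs only if the client actually picks TOKEN.
    void negotiateLazy(Auth chosen) {
        if (chosen == Auth.TOKEN) {
            checkAvailableForRead();
        }
    }
}
```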

> RPC Server should not unconditionally create SaslServer with Token auth.
> 
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Priority: Blocker
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.



[jira] [Commented] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741749#comment-13741749
 ] 

Hadoop QA commented on HADOOP-9879:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598306/hadoop-9879-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2991//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2991//console

This message is automatically generated.

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated HADOOP-9879:
---

Status: Patch Available  (was: Open)

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated HADOOP-9879:
---

Status: Open  (was: Patch Available)

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Commented] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741682#comment-13741682
 ] 

Hudson commented on HADOOP-9865:


SUCCESS: Integrated in Hadoop-trunk-Commit #4276 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4276/])
HADOOP-9865.  FileContext#globStatus has a regression with respect to relative 
path.  (Contributed by Chuan Liu) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514531)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


> FileContext.globStatus() has a regression with respect to relative path
> ---
>
> Key: HADOOP-9865
> URL: https://issues.apache.org/jira/browse/HADOOP-9865
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Fix For: 2.3.0
>
> Attachments: HADOOP-9865-demo.patch, HADOOP-9865-trunk.2.patch, 
> HADOOP-9865-trunk.3.patch, HADOOP-9865-trunk.patch
>
>
> I discovered the problem when running unit test TestMRJobClient on Windows. 
> The cause is indirect in this case. In the unit test, we try to launch a job 
> and list its status. The job failed, which caused the list command to get a 
> result of 0, which triggered the unit test assert. From the log and 
> debugging, the job failed because we failed to create the jar with classpath 
> (see code around {{FileUtil.createJarWithClassPath}}) in {{ContainerLaunch}}. 
> This is a Windows-specific step right now, so the test still passes on Linux. 
> This step failed because we passed a relative path to 
> {{FileContext.globStatus()}} in {{FileUtil.createJarWithClassPath}}. The 
> relevant log looks like the following.
> {noformat}
> 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
> launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
> container.
> org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
>   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
>   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
>   at 
> org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
>   at 
> org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> I think this is a regression from HADOOP-9817. I modified some code and the 
> unit test passed. (See the attached patch.) However, I think the impact is 
> larger. I will add some unit tests to verify the behavior, and work on a more 
> complete fix.



[jira] [Commented] (HADOOP-9866) convert hadoop-auth testcases requiring kerberos to use minikdc

2013-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741681#comment-13741681
 ] 

Alejandro Abdelnur commented on HADOOP-9866:


Feedback on the patch:

There are several spurious changes due to reformatting; please undo all of 
those.

All test methods you are modifying should have a timeout: @Test(timeout=...), 
where the value is in milliseconds. I guess setting them to 1 minute should be 
safe; otherwise set it to 10 times the time the test takes locally.

AuthenticatorTestCase: undo the import changes that use the class wildcard '*'.

KerberosTestUtils: the getKeytabFile() path should be under target now that we 
create the keytabs on the fly; doing new 
File(System.getProperty("build.directory", "target"), UUID.randomUUID()) and 
creating that dir would do (I believe MiniKDC uses similar logic).

KerberosTestUtils: we can get rid of all the system-property lookups for 
KEYTAB, REALM, CLIENT_PRINCIPAL, SERVER_PRINCIPAL, and KEYTAB_FILE and use 
hardcoded values, as we now do everything within the MiniKDC scope.

TestKerberosName: if we need to keep the krb5.conf file in test/resources for 
this test, you'll have to set/unset the system properties that configure the 
realm/krb5.conf.
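For the keytab location, a sketch of the suggested construction (assuming the build.directory property convention mentioned above; class and method names here are hypothetical):

```java
import java.io.File;
import java.util.UUID;

// Sketch of placing on-the-fly keytabs in a unique directory under the
// build output dir ("target" when build.directory is unset), as suggested.
class KeytabDirs {
    static File uniqueKeytabDir() {
        File dir = new File(System.getProperty("build.directory", "target"),
                            UUID.randomUUID().toString());
        dir.mkdirs();  // create it so tests can write keytabs immediately
        return dir;
    }
}
```

The random UUID keeps concurrent test runs from colliding on the same keytab path.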





> convert hadoop-auth testcases requiring kerberos to use minikdc
> ---
>
> Key: HADOOP-9866
> URL: https://issues.apache.org/jira/browse/HADOOP-9866
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Alejandro Abdelnur
>Assignee: Wei Yan
> Attachments: HADOOP-9866.patch
>
>




[jira] [Updated] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-15 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9865:
-

  Resolution: Fixed
   Fix Version/s: 2.3.0
Target Version/s: 2.3.0
  Status: Resolved  (was: Patch Available)

> FileContext.globStatus() has a regression with respect to relative path
> ---
>
> Key: HADOOP-9865
> URL: https://issues.apache.org/jira/browse/HADOOP-9865
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Fix For: 2.3.0
>
> Attachments: HADOOP-9865-demo.patch, HADOOP-9865-trunk.2.patch, 
> HADOOP-9865-trunk.3.patch, HADOOP-9865-trunk.patch
>
>
> I discovered the problem when running unit test TestMRJobClient on Windows. 
> The cause is indirect in this case. In the unit test, we try to launch a job 
> and list its status. The job failed, which caused the list command to get a 
> result of 0, which triggered the unit test assert. From the log and 
> debugging, the job failed because we failed to create the jar with classpath 
> (see code around {{FileUtil.createJarWithClassPath}}) in {{ContainerLaunch}}. 
> This is a Windows-specific step right now, so the test still passes on Linux. 
> This step failed because we passed a relative path to 
> {{FileContext.globStatus()}} in {{FileUtil.createJarWithClassPath}}. The 
> relevant log looks like the following.
> {noformat}
> 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
> launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
> container.
> org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
>   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
>   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
>   at 
> org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
>   at 
> org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> I think this is a regression from HADOOP-9817. I modified some code and the 
> unit test passed. (See the attached patch.) However, I think the impact is 
> broader. I will add some unit tests to verify the behavior and work on a more 
> complete fix.
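The failure mode above is general: glob resolution that insists on an absolute path will reject patterns handed to it relative to the caller's working directory. A minimal Python sketch of the qualify-before-globbing idea (the function name and behavior are illustrative, not Hadoop's actual API):

```python
from pathlib import Path

def glob_status(pattern: str, working_dir: Path) -> list[Path]:
    """Hypothetical stand-in for a glob-status call: qualify a relative
    pattern against a working directory before matching, instead of
    rejecting relative paths outright."""
    p = Path(pattern)
    if not p.is_absolute():
        # Qualify the relative pattern, analogous to what the fix does
        # for the path handed to FileContext.globStatus().
        p = working_dir / p
    # Match only in the immediate parent; enough for patterns like "*.jar".
    return sorted(p.parent.glob(p.name))
```

With this, a pattern like `*.jar` resolves against the supplied working directory and always yields absolute results.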

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741615#comment-13741615
 ] 

Colin Patrick McCabe commented on HADOOP-9865:
--

+1, thanks Chuan

> FileContext.globStatus() has a regression with respect to relative path
> ---
>
> Key: HADOOP-9865
> URL: https://issues.apache.org/jira/browse/HADOOP-9865
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Attachments: HADOOP-9865-demo.patch, HADOOP-9865-trunk.2.patch, 
> HADOOP-9865-trunk.3.patch, HADOOP-9865-trunk.patch
>
>
> I discovered the problem when running the unit test TestMRJobClient on Windows. 
> The cause is indirect in this case. In the unit test, we try to launch a job 
> and list its status. The job failed, which caused the list command to return a 
> result of 0 and triggered the unit test assertion. From the logs and debugging, 
> the job failed because we failed to create the jar with the classpath (see code 
> around {{FileUtil.createJarWithClassPath}}) in {{ContainerLaunch}}. This is a 
> Windows-specific step right now, so the test still passes on Linux. This step 
> failed because we passed a relative path to {{FileContext.globStatus()}} in 
> {{FileUtil.createJarWithClassPath}}. The relevant log looks like the 
> following.
> {noformat}
> 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
> launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
> container.
> org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
>   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
>   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
>   at 
> org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
>   at 
> org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> I think this is a regression from HADOOP-9817. I modified some code and the 
> unit test passed. (See the attached patch.) However, I think the impact is 
> broader. I will add some unit tests to verify the behavior and work on a more 
> complete fix.



[jira] [Updated] (HADOOP-9880) RPC Server should not unconditionally create SaslServer with Token auth.

2013-08-15 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-9880:
---

Description: buildSaslNegotiateResponse() will create a SaslRpcServer with 
TOKEN auth. When create() is called against it, 
secretManager.checkAvailableForRead() is called, which fails in HA standby. 
Thus HA standby nodes cannot be transitioned to active.  (was: 
buildSaslNegotiateResponse() will a SaslRpcServer to be created with TOKEN 
auth. When create() is called against it, secretManager.checkAvailableForRead() 
is called, which fails in HA standby. Thus HA standby nodes cannot be 
transitioned to active.)

> RPC Server should not unconditionally create SaslServer with Token auth.
> 
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Priority: Blocker
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.



[jira] [Created] (HADOOP-9880) RPC Server should not unconditionally create SaslServer with Token auth.

2013-08-15 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-9880:
--

 Summary: RPC Server should not unconditionally create SaslServer 
with Token auth.
 Key: HADOOP-9880
 URL: https://issues.apache.org/jira/browse/HADOOP-9880
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Kihwal Lee
Priority: Blocker


buildSaslNegotiateResponse() will cause a SaslRpcServer to be created with TOKEN 
auth. When create() is called against it, secretManager.checkAvailableForRead() 
is called, which fails in HA standby. Thus HA standby nodes cannot be 
transitioned to active.



[jira] [Commented] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741551#comment-13741551
 ] 

Sandy Ryza commented on HADOOP-9879:


+1 pending jenkins

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.
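Centralizing the dependency version in hadoop-project/pom.xml would look roughly like this Maven sketch (the version number is illustrative, not taken from the actual patch):

```xml
<!-- hadoop-project/pom.xml: declare the version exactly once -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.5</version> <!-- illustrative version -->
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- child pom (HDFS, YARN): omit <version>, inherit the managed one -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
</dependency>
```

Child modules then pick up version bumps automatically when the parent is updated.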



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9879:
-

Attachment: (was: hadoop-9879-1.patch)

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9879:
-

Attachment: hadoop-9879-1.patch

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9879:
-

Attachment: (was: hadoop-9879-1.patch)

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9879:
-

Attachment: hadoop-9879-1.patch

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9879:
-

Status: Patch Available  (was: Open)

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9879:
-

Status: Open  (was: Patch Available)

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9879:
-

Attachment: hadoop-9879-1.patch

New patch addresses the zk version info in all sub-projects.

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hadoop-9879-1.patch, hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Commented] (HADOOP-9789) Support server advertised kerberos principals

2013-08-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741528#comment-13741528
 ] 

Alejandro Abdelnur commented on HADOOP-9789:


Crossposting my comment from HADOOP-9868:

I'm a bit puzzled by HADOOP-9789. While I understand the reasoning for it, 
doesn't that weaken security? An impersonator can publish an alternate 
principal for which it has a keytab.



> Support server advertised kerberos principals
> -
>
> Key: HADOOP-9789
> URL: https://issues.apache.org/jira/browse/HADOOP-9789
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc, security
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9789.2.patch, HADOOP-9789.patch, 
> HADOOP-9789.patch, hadoop-ojoshi-datanode-HW10351.local.log, 
> hadoop-ojoshi-namenode-HW10351.local.log
>
>
> The RPC client currently constructs the kerberos principal based on the a 
> config value, usually with an _HOST substitution.  This means the service 
> principal must match the hostname the client is using to connect.  This 
> causes problems:
> * Prevents using HA with IP failover when the servers have distinct 
> principals from the failover hostname
> * Prevents clients from being able to access a service bound to multiple 
> interfaces.  Only the interface that matches the server's principal may be 
> used.
> The client should be able to use the SASL advertised principal (HADOOP-9698), 
> with appropriate safeguards, to acquire the correct service ticket.



[jira] [Updated] (HADOOP-9878) getting rid of all the 'bin/../' from all the paths

2013-08-15 Thread kaveh minooie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kaveh minooie updated HADOOP-9878:
--

Description: 
by simply replacing line 34 of libexec/hadoop-config.sh from:

{quote}
export HADOOP_PREFIX=`dirname "$this"`/..
{quote}

to 
{quote}
export HADOOP_PREFIX=$( cd "$config_bin/.."; pwd -P )
{quote}

we can eliminate all the annoying 'bin/../' from the library paths and make the 
output of commands like ps a lot more readable. Not to mention that the OS would 
do just a bit less work as well. I can post a patch for it if it is needed.


  was:
by simply replacing line 34 of libexec/hadoop-config.sh from:

export HADOOP_PREFIX=`dirname "$this"`/..

to 

export HADOOP_PREFIX=$( cd ${config_bin}/..; pwd -P )

we can eliminate all the annoying 'bin/../' from the library paths and make the 
output of commands like ps a lot more readable. not to mention that OS  would 
do just a bit less work as well. I can post a patch for it as well if it is 
needed



> getting rid of all the 'bin/../' from all the paths
> ---
>
> Key: HADOOP-9878
> URL: https://issues.apache.org/jira/browse/HADOOP-9878
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: kaveh minooie
>Priority: Trivial
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> by simply replacing line 34 of libexec/hadoop-config.sh from:
> {quote}
> export HADOOP_PREFIX=`dirname "$this"`/..
> {quote}
> to 
> {quote}
> export HADOOP_PREFIX=$( cd "$config_bin/.."; pwd -P )
> {quote}
> we can eliminate all the annoying 'bin/../' from the library paths and make 
> the output of commands like ps a lot more readable. Not to mention that the 
> OS would do just a bit less work as well. I can post a patch for it if it is 
> needed.



[jira] [Moved] (HADOOP-9879) Move the version info of zookeeper test dependency to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla moved HDFS-5082 to HADOOP-9879:


  Component/s: (was: build)
   build
 Target Version/s: 2.1.1-beta  (was: 2.1.1-beta)
Affects Version/s: (was: 2.1.0-beta)
   2.1.0-beta
   Issue Type: Improvement  (was: Bug)
  Key: HADOOP-9879  (was: HDFS-5082)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Move the version info of zookeeper test dependency to hadoop-project/pom
> 
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Updated] (HADOOP-9879) Move the version info of zookeeper dependencies to hadoop-project/pom

2013-08-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9879:
-

Summary: Move the version info of zookeeper dependencies to 
hadoop-project/pom  (was: Move the version info of zookeeper test dependency to 
hadoop-project/pom)

> Move the version info of zookeeper dependencies to hadoop-project/pom
> -
>
> Key: HADOOP-9879
> URL: https://issues.apache.org/jira/browse/HADOOP-9879
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Minor
> Attachments: hdfs-5082-1.patch
>
>
> As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
> the version information in hadoop-project/pom.xml.



[jira] [Created] (HADOOP-9878) getting rid of all the 'bin/../' from all the paths

2013-08-15 Thread kaveh minooie (JIRA)
kaveh minooie created HADOOP-9878:
-

 Summary: getting rid of all the 'bin/../' from all the paths
 Key: HADOOP-9878
 URL: https://issues.apache.org/jira/browse/HADOOP-9878
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: kaveh minooie
Priority: Trivial


by simply replacing line 34 of libexec/hadoop-config.sh from:

export HADOOP_PREFIX=`dirname "$this"`/..

to 

export HADOOP_PREFIX=$( cd ${config_bin}/..; pwd -P )

we can eliminate all the annoying 'bin/../' from the library paths and make the 
output of commands like ps a lot more readable. Not to mention that the OS would 
do just a bit less work as well. I can post a patch for it if it is needed.
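The difference between the two expansions can be checked in a few lines of shell (directory names are illustrative; the real script derives $config_bin from its own location):

```shell
#!/bin/sh
set -e
# Simulate a layout like $PREFIX/libexec/hadoop-config.sh
prefix=$(mktemp -d)
mkdir -p "$prefix/libexec"
config_bin="$prefix/libexec"
this="$config_bin/hadoop-config.sh"

# Old form: leaves a literal 'libexec/..' component in the path.
old=$(dirname "$this")/..
# New form: normalizes to the physical path, with no '..' component.
new=$(cd "$config_bin/.."; pwd -P)

echo "old: $old"
echo "new: $new"
```

`pwd -P` also resolves symlinks, so the exported prefix is a canonical physical path.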




[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741497#comment-13741497
 ] 

Colin Patrick McCabe commented on HADOOP-9877:
--

Thanks for finding this.

{code}
+  } else {
+FileStatus s = getFileStatus(new Path(candidate.getPath(), 
component));
+if (s != null) {
+  newCandidates.add(s);
{code}

This is incorrect.  If you try to list a symlink this way, it will list the 
target file instead.
You need to build up the path the same way the other code path does, except 
using {{getFileLinkStatus}} (not {{getFileStatus}}).

I agree with Andrew's suggestion about comments and Suresh's suggestion about 
tests.  Rather than creating snapshots in the unit test, you could list things 
in the /.reserved directory, since that always exists (and will not be returned 
by listStatus).
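The distinction Colin points at, statting the link itself versus following it to the target, mirrors os.lstat vs os.stat in POSIX terms. A small Python sketch of why the wrong call misreports symlinks (the helper name is illustrative, not Hadoop's API):

```python
import os
import tempfile

def file_link_status(path: str) -> bool:
    """Analog of getFileLinkStatus: inspect the final path component
    itself, without following it if it is a symlink."""
    return os.path.islink(path)  # uses lstat under the hood

d = tempfile.mkdtemp()
target = os.path.join(d, "target.txt")
link = os.path.join(d, "link.txt")
open(target, "w").close()
os.symlink(target, link)

# stat() follows the link and reports the target's inode;
# lstat() reports the link's own inode.
assert os.stat(link).st_ino == os.stat(target).st_ino
assert os.lstat(link).st_ino != os.lstat(target).st_ino
```

Listing via the follow-the-link call would therefore report the target's attributes for the link entry, which is the bug described above.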

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case that the .snapshot dir does not show up in listStatus yet exists, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741456#comment-13741456
 ] 

Suresh Srinivas commented on HADOOP-9877:
-

Can you please add a unit test to ensure that if someone breaks this 
functionality in the future, it is caught by a unit test failure?

bq. normally, patches are generated via a command like "git diff --no-prefix", 
I had to apply your patch with -p1 rather than -p0.
This is no longer the norm. Now that Jenkins handles both variants, a lot of 
people no longer bother to generate diffs with --no-prefix.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case that the .snapshot dir does not show up in listStatus yet exists, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741452#comment-13741452
 ] 

Andrew Wang commented on HADOOP-9877:
-

Hi Binglin,

Patch looks good, nice catch! Two requests:

- Can you add a code comment explaining why we need this additional check? For 
example, "Some special filesystem directories (e.g. HDFS snapshot directories) 
are not returned by listStatus, but do exist if checked explicitly via 
getFileStatus."
- Need to fix Jenkins issues. It's fine to include a "list snapshots via shell" 
test in HDFS, especially because shell+snapshots seems to be a weak spot in our 
current unit tests.

+1 once these are addressed.

p.s. normally, patches are generated via a command like "git diff --no-prefix", 
I had to apply your patch with -p1 rather than -p0.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case that the .snapshot dir does not show up in listStatus yet exists, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Updated] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-15 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-9868:
---

   Resolution: Fixed
Fix Version/s: 2.1.1-beta
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

The patch has been committed to trunk, branch-2 and branch-2.1-beta.

> Server must not advertise kerberos realm
> 
>
> Key: HADOOP-9868
> URL: https://issues.apache.org/jira/browse/HADOOP-9868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9868.patch
>
>
> HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
> the kerberos service principal realm.  SASL clients and servers do not 
> support specifying a realm, so it must be removed from the advertisement.



[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741389#comment-13741389
 ] 

Hudson commented on HADOOP-9868:


SUCCESS: Integrated in Hadoop-trunk-Commit #4272 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4272/])
HADOOP-9868. Server must not advertise kerberos realm. Contributed by Daryn 
Sharp. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514448)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java


> Server must not advertise kerberos realm
> 
>
> Key: HADOOP-9868
> URL: https://issues.apache.org/jira/browse/HADOOP-9868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9868.patch
>
>
> HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
> the kerberos service principal realm.  SASL clients and servers do not 
> support specifying a realm, so it must be removed from the advertisement.



[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741384#comment-13741384
 ] 

Hadoop QA commented on HADOOP-9877:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598267/HADOOP-9877.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.TestPath

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2990//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2990//console

This message is automatically generated.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case where the .snapshot dir exists yet does not show up in listStatus, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741376#comment-13741376
 ] 

Kihwal Lee commented on HADOOP-9868:


+1 I've verified the patch in a secure cluster.

> Server must not advertise kerberos realm
> 
>
> Key: HADOOP-9868
> URL: https://issues.apache.org/jira/browse/HADOOP-9868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9868.patch
>
>
> HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
> the kerberos service principal realm.  SASL clients and servers do not 
> support specifying a realm, so it must be removed from the advertisement.



[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741375#comment-13741375
 ] 

Kihwal Lee commented on HADOOP-9868:


bq. Daryn Sharp, I'm a bit puzzled by HADOOP-9789. While I understand the 
reasoning for it, doesn't it weaken security? An impersonator can publish an 
alternate principal for which it has a keytab.

Please note that server-advertised principals are not honored by default.

For the scenario you mention to happen, the client would need to connect to 
the fake service, which means DNS or the server is compromised, or there is a 
man-in-the-middle. In that case an attacker can pretend to be a service 
regardless of HADOOP-9789. As for client-side exploits: if the client side is 
compromised, a fake server address and a wide-open SPN pattern could be placed 
in the config to trick the client. But if the system is compromised to that 
level, the client can be tricked in many other ways anyway.



> Server must not advertise kerberos realm
> 
>
> Key: HADOOP-9868
> URL: https://issues.apache.org/jira/browse/HADOOP-9868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9868.patch
>
>
> HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
> the kerberos service principal realm.  SASL clients and servers do not 
> support specifying a realm, so it must be removed from the advertisement.



[jira] [Resolved] (HADOOP-7985) maven build should be super fast when there are no changes

2013-08-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7985.
--

Resolution: Won't Fix

> maven build should be super fast when there are no changes
> --
>
> Key: HADOOP-7985
> URL: https://issues.apache.org/jira/browse/HADOOP-7985
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: build, maven
> Attachments: HADOOP-7985.patch
>
>
> I use this command "mvn -Pdist -P-cbuild -Dmaven.javadoc.skip -DskipTests 
> install" to build. Without ANY changes in code, running this command takes 
> 1:32. It seems to me this is too long. Investigate if this time can be 
> reduced drastically.



[jira] [Updated] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9877:
--

Attachment: HADOOP-9877.v1.patch

Initial patch without a test; I think the test should be in HDFS because only 
the HDFS filesystem has a .snapshot dir.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case where the .snapshot dir exists yet does not show up in listStatus, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Updated] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9877:
--

 Target Version/s: 2.1.0-beta
Affects Version/s: 2.1.0-beta
   Status: Patch Available  (was: Open)

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case where the .snapshot dir exists yet does not show up in listStatus, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Created] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-15 Thread Binglin Chang (JIRA)
Binglin Chang created HADOOP-9877:
-

 Summary: hadoop fsshell can not ls .snapshot dir after HADOOP-9817
 Key: HADOOP-9877
 URL: https://issues.apache.org/jira/browse/HADOOP-9877
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang


{code}
decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
ls: `/foo/.snapshot': No such file or directory
{code}

HADOOP-9817 refactored some globStatus code but forgot to handle the special 
case where the .snapshot dir exists yet does not show up in listStatus, so we 
need to explicitly check path existence using getFileStatus rather than 
depending on listStatus results.
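A minimal shell sketch of the reported situation (plain dotfiles and a temp 
directory standing in for HDFS and its .snapshot dir; all paths here are 
invented for the demo): a listing-based check misses an entry the lister 
hides, while an explicit existence check, the getFileStatus analogue, still 
finds it.

```shell
# Sketch only: 'ls' hiding dotfiles plays the role of listStatus hiding
# .snapshot; 'test -d' plays the role of an explicit getFileStatus check.
dir=$(mktemp -d)
mkdir "$dir/.snapshot"            # exists, but a plain listing will not show it

listed=$(ls "$dir")               # empty: the listing hides the dot entry
if [ -d "$dir/.snapshot" ]; then  # direct existence check still succeeds
  found=yes
else
  found=no
fi
rm -rf "$dir"
```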




[jira] [Commented] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Keegan Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741249#comment-13741249
 ] 

Keegan Witt commented on HADOOP-9382:
-

By "in the patch", I meant "that I fixed in the previous patch".  This latest 
patch should be good to go.

> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.2.patch, 
> HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.



[jira] [Commented] (HADOOP-9774) RawLocalFileSystem.listStatus() return absolute paths when input path is relative on Windows

2013-08-15 Thread shanyu zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741212#comment-13741212
 ] 

shanyu zhao commented on HADOOP-9774:
-

Sure. I have actually run all YARN unit tests on patch v4; I can run another 
pass on v5.

> RawLocalFileSystem.listStatus() return absolute paths when input path is 
> relative on Windows
> 
>
> Key: HADOOP-9774
> URL: https://issues.apache.org/jira/browse/HADOOP-9774
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-9774-2.patch, HADOOP-9774-3.patch, 
> HADOOP-9774-4.patch, HADOOP-9774-5.patch, HADOOP-9774.patch
>
>
> On Windows, when using RawLocalFileSystem.listStatus() to enumerate a 
> relative path (without a drive spec), e.g., "file:///mydata", the resulting 
> paths become absolute paths, e.g., ["file://E:/mydata/t1.txt", 
> "file://E:/mydata/t2.txt"...].
> Note that if we use it to enumerate an absolute path, e.g., 
> "file://E:/mydata", then we get the same results as above.
> This breaks some Hive unit tests, which use the local file system to simulate 
> HDFS when testing and therefore strip the drive spec. After listStatus() the 
> path is changed to an absolute path, and Hive fails to find the path in its 
> map-reduce job.
> You'll see the following exception:
> [junit] java.io.IOException: cannot find dir = 
> pfile:/E:/GitHub/hive-monarch/build/ql/test/data/warehouse/src/kv1.txt in 
> pathToPartitionInfo: 
> [pfile:/GitHub/hive-monarch/build/ql/test/data/warehouse/src]
> [junit]   at 
> org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:298)
> This problem was introduced by HADOOP-8962.
> Prior to the fix for HADOOP-8962 (merged in 0.23.5), the resulting paths were 
> relative if the parent path was relative, e.g., 
> ["file:///mydata/t1.txt", "file:///mydata/t2.txt"...]
> This behavior change is a side effect of the fix in HADOOP-8962, not an 
> intended change. The resulting behavior, even though legitimate from a 
> functional point of view, breaks consistency from the caller's point of view. 
> When the caller uses a relative path (without a drive spec) to do 
> listStatus(), the resulting paths should be relative. Therefore, I think this 
> should be fixed.
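A small shell analogue of the consistency the reporter expects (the temp 
directory and file names below are invented for the demo): enumerating with a 
relative query should yield relative results, which is the pre-HADOOP-8962 
behavior being asked for.

```shell
# Sketch only: 'find' with a relative starting point plays the role of
# listStatus() on a relative path; its results stay relative to the query.
cd "$(mktemp -d)"
mkdir -p mydata
touch mydata/t1.txt mydata/t2.txt

results=$(find mydata -type f | sort)   # relative query -> relative results
```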



[jira] [Commented] (HADOOP-9417) Support for symlink resolution in LocalFileSystem / RawLocalFileSystem

2013-08-15 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741180#comment-13741180
 ] 

Arpit Agarwal commented on HADOOP-9417:
---

Hi Andrew, thanks for the offer! We worked around the conflicts by leaving out 
the FileSystem changes. It makes sense to skip 2.1 if the support is incomplete.

> Support for symlink resolution in LocalFileSystem / RawLocalFileSystem
> --
>
> Key: HADOOP-9417
> URL: https://issues.apache.org/jira/browse/HADOOP-9417
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.3.0
>
> Attachments: hadoop-9417-1.patch, hadoop-9417-2.patch, 
> hadoop-9417-3.patch, hadoop-9417-4.patch, hadoop-9417-5.patch, 
> hadoop-9417-6.patch
>
>
> Add symlink resolution support to LocalFileSystem/RawLocalFileSystem as well 
> as tests.



[jira] [Commented] (HADOOP-9860) Remove class HackedKeytab and HackedKeytabEncoder from hadoop-minikdc once jira DIRSERVER-1882 solved

2013-08-15 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1374#comment-1374
 ] 

Wei Yan commented on HADOOP-9860:
-

Thanks, [~elecharny]. Waiting for the release.

> Remove class HackedKeytab and HackedKeytabEncoder from hadoop-minikdc once 
> jira DIRSERVER-1882 solved
> -
>
> Key: HADOOP-9860
> URL: https://issues.apache.org/jira/browse/HADOOP-9860
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>
> Remove class {{HackedKeytab}} and {{HackedKeytabEncoder}} from hadoop-minikdc 
> (HADOOP-9848) once JIRA DIRSERVER-1882 is solved.
> Also update the apacheds version in the pom.xml.



[jira] [Created] (HADOOP-9876) Take advantage of protobuf 2.5.0 new features for increased performance

2013-08-15 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-9876:
--

 Summary: Take advantage of protobuf 2.5.0 new features for 
increased performance
 Key: HADOOP-9876
 URL: https://issues.apache.org/jira/browse/HADOOP-9876
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Alejandro Abdelnur


From the protobuf 2.5.0 release notes:

  * Comments in proto files are now collected and put into generated code as
comments for corresponding classes and data members.
  * Added Parser to parse directly into messages without a Builder. For
example,
  Foo foo = Foo.PARSER.ParseFrom(input);
Using Parser is ~25% faster than using Builder to parse messages.
  * Added getters/setters to access the underlying ByteString of a string field
directly.




[jira] [Commented] (HADOOP-9873) hadoop-env.sh got called multiple times

2013-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741083#comment-13741083
 ] 

Hadoop QA commented on HADOOP-9873:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598216/HADOOP-9873.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2989//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2989//console

This message is automatically generated.

> hadoop-env.sh got called multiple times
> ---
>
> Key: HADOOP-9873
> URL: https://issues.apache.org/jira/browse/HADOOP-9873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-9873.patch
>
>
> As shown below, hadoop-env.sh gets called multiple times when running 
> something like 'hadoop-daemon.sh start namenode'.
> {noformat}
> [drankye@zkdev ~]$ cd $HADOOP_PREFIX
> [drankye@zkdev hadoop-3.0.0-SNAPSHOT]$ grep -r hadoop-env *
> libexec/hadoop-config.sh:if [ -e "${HADOOP_PREFIX}/conf/hadoop-env.sh" ]; then
> libexec/hadoop-config.sh:if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> libexec/hadoop-config.sh:  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> sbin/hadoop-daemon.sh:if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> sbin/hadoop-daemon.sh:  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> {noformat}
> Considering the following lines in hadoop-env.sh
> {code}
> # Command specific options appended to HADOOP_OPTS when specified
> export 
> HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS}
>  -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} 
> $HADOOP_NAMENODE_OPTS"
> {code}
> It may end up with a redundant result like the one below when called multiple 
> times.
> {noformat}
> HADOOP_NAMENODE_OPTS='-Dhadoop.security.logger=INFO,RFAS 
> -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS 
> -Dhdfs.audit.logger=INFO,NullAppender '
> {noformat}
> It's not a big issue for now, but it would be better to avoid this, since it 
> can make the final Java command line very lengthy and hard to read.
> A possible fix would be to add a flag variable like HADOOP_ENV_INITED to 
> hadoop-env.sh and check it at the beginning; if the flag evaluates to true, 
> return immediately.
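The proposed guard could look roughly like this (a sketch under the issue's own 
suggestion; HADOOP_ENV_INITED is the flag name proposed in the issue, not an 
existing variable, and the function wrapper is only for the demo):

```shell
# Sketch: guard so that re-sourcing hadoop-env.sh is a no-op and append-style
# options such as HADOOP_NAMENODE_OPTS are not duplicated.
unset HADOOP_ENV_INITED HADOOP_NAMENODE_OPTS   # clean slate for the demo

hadoop_env() {
  if [ "${HADOOP_ENV_INITED:-}" = "true" ]; then
    return 0                                   # already initialized; skip
  fi
  HADOOP_ENV_INITED=true
  # append-style option that would otherwise grow on every source:
  HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS ${HADOOP_NAMENODE_OPTS:-}"
}

hadoop_env
hadoop_env   # second call is a no-op thanks to the guard
```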



[jira] [Commented] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741072#comment-13741072
 ] 

Hadoop QA commented on HADOOP-9382:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598214/HADOOP-9382.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2988//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2988//console

This message is automatically generated.

> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.2.patch, 
> HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.



[jira] [Updated] (HADOOP-9873) hadoop-env.sh got called multiple times

2013-08-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9873:
--

Status: Patch Available  (was: Open)

The simple patch was tested manually and it works fine.

> hadoop-env.sh got called multiple times
> ---
>
> Key: HADOOP-9873
> URL: https://issues.apache.org/jira/browse/HADOOP-9873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-9873.patch
>
>
> As shown below, hadoop-env.sh gets called multiple times when running 
> something like 'hadoop-daemon.sh start namenode'.
> {noformat}
> [drankye@zkdev ~]$ cd $HADOOP_PREFIX
> [drankye@zkdev hadoop-3.0.0-SNAPSHOT]$ grep -r hadoop-env *
> libexec/hadoop-config.sh:if [ -e "${HADOOP_PREFIX}/conf/hadoop-env.sh" ]; then
> libexec/hadoop-config.sh:if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> libexec/hadoop-config.sh:  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> sbin/hadoop-daemon.sh:if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> sbin/hadoop-daemon.sh:  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> {noformat}
> Considering the following lines in hadoop-env.sh
> {code}
> # Command specific options appended to HADOOP_OPTS when specified
> export 
> HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS}
>  -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} 
> $HADOOP_NAMENODE_OPTS"
> {code}
> It may end up with a redundant result like the one below when called multiple 
> times.
> {noformat}
> HADOOP_NAMENODE_OPTS='-Dhadoop.security.logger=INFO,RFAS 
> -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS 
> -Dhdfs.audit.logger=INFO,NullAppender '
> {noformat}
> It's not a big issue for now, but it would be better to avoid this, since it 
> can make the final Java command line very lengthy and hard to read.
> A possible fix would be to add a flag variable like HADOOP_ENV_INITED to 
> hadoop-env.sh and check it at the beginning; if the flag evaluates to true, 
> return immediately.



[jira] [Updated] (HADOOP-9873) hadoop-env.sh got called multiple times

2013-08-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9873:
--

Attachment: HADOOP-9873.patch

Attached a simple patch to avoid multiple calls.

> hadoop-env.sh got called multiple times
> ---
>
> Key: HADOOP-9873
> URL: https://issues.apache.org/jira/browse/HADOOP-9873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-9873.patch
>
>
> As shown below, hadoop-env.sh gets called multiple times when running 
> something like 'hadoop-daemon.sh start namenode'.
> {noformat}
> [drankye@zkdev ~]$ cd $HADOOP_PREFIX
> [drankye@zkdev hadoop-3.0.0-SNAPSHOT]$ grep -r hadoop-env *
> libexec/hadoop-config.sh:if [ -e "${HADOOP_PREFIX}/conf/hadoop-env.sh" ]; then
> libexec/hadoop-config.sh:if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> libexec/hadoop-config.sh:  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> sbin/hadoop-daemon.sh:if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> sbin/hadoop-daemon.sh:  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> {noformat}
> Considering the following lines in hadoop-env.sh
> {code}
> # Command specific options appended to HADOOP_OPTS when specified
> export 
> HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS}
>  -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} 
> $HADOOP_NAMENODE_OPTS"
> {code}
> It may end up with a redundant result like the one below when called multiple 
> times.
> {noformat}
> HADOOP_NAMENODE_OPTS='-Dhadoop.security.logger=INFO,RFAS 
> -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS 
> -Dhdfs.audit.logger=INFO,NullAppender '
> {noformat}
> It's not a big issue for now, but it would be better to avoid this, since it 
> can make the final Java command line very lengthy and hard to read.
> A possible fix would be to add a flag variable like HADOOP_ENV_INITED to 
> hadoop-env.sh and check it at the beginning; if the flag evaluates to true, 
> return immediately.



[jira] [Commented] (HADOOP-9381) Document dfs cp -f option

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741016#comment-13741016
 ] 

Hudson commented on HADOOP-9381:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1519 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1519/])
HADOOP-9381. Document dfs cp -f option. Contributed by Keegan Witt and Suresh 
Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514089)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml


> Document dfs cp -f option
> -
>
> Key: HADOOP-9381
> URL: https://issues.apache.org/jira/browse/HADOOP-9381
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Trivial
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9381.1.patch, HADOOP-9381.2.patch, 
> HADOOP-9381.patch, HADOOP-9381.patch
>
>
> dfs cp should document -f (overwrite) option in the page displayed by -help. 
> Additionally, the HTML documentation page should also document this option 
> and all the options should all be formatted the same.



[jira] [Commented] (HADOOP-9875) TestDoAsEffectiveUser can fail on JDK 7

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741018#comment-13741018
 ] 

Hudson commented on HADOOP-9875:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1519 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1519/])
HADOOP-9875.  TestDoAsEffectiveUser can fail on JDK 7.  (Aaron T. Myers via 
Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514147)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java


> TestDoAsEffectiveUser can fail on JDK 7
> ---
>
> Key: HADOOP-9875
> URL: https://issues.apache.org/jira/browse/HADOOP-9875
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.1.0-beta
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HADOOP-9875.patch
>
>
> Another issue with the test method execution order changing between JDK 6 and 
> 7.



[jira] [Commented] (HADOOP-9872) Improve protoc version handling and detection

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741015#comment-13741015
 ] 

Hudson commented on HADOOP-9872:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1519 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1519/])
HADOOP-9872. Improve protoc version handling and detection. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514068)
* /hadoop/common/trunk/BUILDING.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CLIMiniCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/README
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


> Improve protoc version handling and detection
> -
>
> Key: HADOOP-9872
> URL: https://issues.apache.org/jira/browse/HADOOP-9872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9872.patch
>
>
> HADOOP-9845 bumped protoc from 2.4.1 to 2.5.0, but we ran into a few 
> quirks:
> * 'protoc --version' in 2.4.1 exits with 1
> * 'protoc --version' in 2.5.0 exits with 0
> * if you have multiple protoc binaries in your environment, you have to put 
> the one you want to use in the PATH before building Hadoop
> * the build documentation and protoc requirements are outdated
> This patch:
> * handles the protoc version correctly, independently of the exit code
> * if the HADOOP_PROTOC_PATH env var is defined, uses it as the protoc 
> executable
> * if HADOOP_PROTOC_PATH is not defined, picks protoc from the PATH
> * updates the documentation to reflect that 2.5.0 is required
> * enforces that the versions of protoc and the protobuf JAR are the same
> * adds the protoc version used to VersionInfo (sooner or later this will be 
> useful in a troubleshooting situation)
> [~vicaya] suggested making the version check for protoc lax (i.e. 2.5.*). 
> I thought about that while working on the patch, but it would introduce a 
> potential mismatch between protoc and the protobuf JAR.
> Still, if you want to use a different version of protoc/protobuf from the one 
> defined in the POM, you can use -Dprotobuf.version= to specify your 
> alternate version. But I would recommend against this: if you publish the 
> artifacts to a Maven repo, the fact that you used -Dprotobuf.version= is 
> lost and the version defined in the POM properties is used (IMO Maven 
> should use the effective POM on deploy, but it doesn't).
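The exit-code quirk above means the build cannot trust the return status of 'protoc --version'; it has to parse the printed version string regardless of how the process exits. A minimal Java sketch of that approach (hypothetical class and method names; the real logic lives in ProtocMojo/Exec and may differ):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ProtocVersionCheck {
    /** Extract "2.5.0" from a line like "libprotoc 2.5.0"; null if unparsable. */
    static String parseVersion(String versionLine) {
        if (versionLine == null) {
            return null;
        }
        String[] parts = versionLine.trim().split("\\s+");
        return parts.length == 2 ? parts[1] : null;
    }

    /** Run "protoc --version", deliberately ignoring the exit code
     *  (2.4.1 exits with 1, 2.5.0 exits with 0). */
    static String detectVersion(String protocExec)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(protocExec, "--version")
                .redirectErrorStream(true)
                .start();
        String line;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            line = r.readLine();
        }
        p.waitFor();  // exit status is intentionally not checked
        return parseVersion(line);
    }
}
```

The same parsed value can then be compared against the protobuf JAR version to enforce the match the patch describes.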



[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741008#comment-13741008
 ] 

Hudson commented on HADOOP-9652:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1519 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1519/])
HADOOP-9652.  RawLocalFs#getFileLinkStatus does not fill in the link owner and 
mode.  (Andrew Wang via Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514088)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HardLink.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Stat.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestStat.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestSymlinkLocalFS.java


> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
> Fix For: 2.3.0
>
> Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
> hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
> hadoop-9652-6.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink; instead it uses the owner and mode of the symlink target. If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for them.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> On some operating systems, symlinks can have a permission other than 0777. 
> We ought to expose this in RawLocalFileSystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.
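The bug described above is the classic stat-vs-lstat distinction: attributes of the link's target versus attributes of the link itself. The changed-file list suggests the actual fix shells out via a new Stat helper; as an illustration only, the semantics can be sketched with java.nio (hypothetical helper names):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

public class SymlinkAttrs {
    /** Attributes of the link itself (lstat semantics) -- what
     *  getFileLinkStatus ought to return. */
    static BasicFileAttributes linkOwnAttrs(Path link) throws IOException {
        return Files.readAttributes(link, BasicFileAttributes.class,
                                    LinkOption.NOFOLLOW_LINKS);
    }

    /** Attributes of whatever the link points at (stat semantics) -- what
     *  the buggy implementation effectively returned. */
    static BasicFileAttributes targetAttrs(Path link) throws IOException {
        return Files.readAttributes(link, BasicFileAttributes.class);
    }
}
```

Calling linkOwnAttrs on a symlink reports isSymbolicLink() as true and carries the link's own metadata, while targetAttrs silently substitutes the target's.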



[jira] [Updated] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Keegan Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keegan Witt updated HADOOP-9382:


Attachment: HADOOP-9382.2.patch

> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.2.patch, 
> HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.
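The requested -f semantics parallel java.nio's move with REPLACE_EXISTING: refuse to clobber an existing destination unless the flag is given. A minimal sketch of the intended behavior (illustrative only, not Hadoop's FsShell code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveWithForce {
    /** Move src to dst; overwrite an existing dst only when force is set,
     *  mirroring the proposed "dfs -mv -f" semantics. */
    static void move(Path src, Path dst, boolean force) throws IOException {
        if (force) {
            Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
        } else {
            // Throws FileAlreadyExistsException if dst already exists.
            Files.move(src, dst);
        }
    }
}
```

Without the flag the move fails loudly on an existing destination; with it, the destination is replaced atomically where the filesystem allows.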



[jira] [Commented] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Keegan Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741004#comment-13741004
 ] 

Keegan Witt commented on HADOOP-9382:
-

OK, I took those out. There are also a couple of Javadoc errors that were in 
the patch; someone should look into those at some point (I took them out too).

> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.2.patch, 
> HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.



[jira] [Commented] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740986#comment-13740986
 ] 

Hadoop QA commented on HADOOP-9382:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598195/HADOOP-9382.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2987//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2987//console

This message is automatically generated.

> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.



[jira] [Updated] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Keegan Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keegan Witt updated HADOOP-9382:


Attachment: (was: HADOOP-9382.2.patch)

> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.



[jira] [Commented] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740960#comment-13740960
 ] 

Suresh Srinivas commented on HADOOP-9382:
-

Comments:
# Please undo unnecessary empty line changes, such as the following:
{code}
   }
-  
+
   protected PathData getTargetPath(PathData src) throws IOException {
{code}

The TestCLI failure is related to this change and needs to be fixed.

You can run a specific test by:
* {{cd hadoop-common-project/hadoop-common}}
* {{mvn -Dtest=TestCLI test}}


> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.2.patch, 
> HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.



[jira] [Updated] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Keegan Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keegan Witt updated HADOOP-9382:


Attachment: HADOOP-9382.2.patch

> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.2.patch, 
> HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.



[jira] [Commented] (HADOOP-9875) TestDoAsEffectiveUser can fail on JDK 7

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740947#comment-13740947
 ] 

Hudson commented on HADOOP-9875:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1492 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1492/])
HADOOP-9875.  TestDoAsEffectiveUser can fail on JDK 7.  (Aaron T. Myers via 
Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514147)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java


> TestDoAsEffectiveUser can fail on JDK 7
> ---
>
> Key: HADOOP-9875
> URL: https://issues.apache.org/jira/browse/HADOOP-9875
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.1.0-beta
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HADOOP-9875.patch
>
>
> Another issue with the test method execution order changing between JDK 6 and 
> 7.



[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740937#comment-13740937
 ] 

Hudson commented on HADOOP-9652:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1492 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1492/])
HADOOP-9652.  RawLocalFs#getFileLinkStatus does not fill in the link owner and 
mode.  (Andrew Wang via Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514088)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HardLink.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Stat.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestStat.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestSymlinkLocalFS.java


> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
> Fix For: 2.3.0
>
> Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
> hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
> hadoop-9652-6.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink; instead it uses the owner and mode of the symlink target. If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for them.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> On some operating systems, symlinks can have a permission other than 0777. 
> We ought to expose this in RawLocalFileSystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.



[jira] [Commented] (HADOOP-9872) Improve protoc version handling and detection

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740944#comment-13740944
 ] 

Hudson commented on HADOOP-9872:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1492 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1492/])
HADOOP-9872. Improve protoc version handling and detection. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514068)
* /hadoop/common/trunk/BUILDING.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CLIMiniCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/README
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


> Improve protoc version handling and detection
> -
>
> Key: HADOOP-9872
> URL: https://issues.apache.org/jira/browse/HADOOP-9872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9872.patch
>
>
> HADOOP-9845 bumped protoc from 2.4.1 to 2.5.0, but we ran into a few 
> quirks:
> * 'protoc --version' in 2.4.1 exits with 1
> * 'protoc --version' in 2.5.0 exits with 0
> * if you have multiple protoc binaries in your environment, you have to put 
> the one you want to use in the PATH before building Hadoop
> * the build documentation and protoc requirements are outdated
> This patch:
> * handles the protoc version correctly, independently of the exit code
> * if the HADOOP_PROTOC_PATH env var is defined, uses it as the protoc 
> executable
> * if HADOOP_PROTOC_PATH is not defined, picks protoc from the PATH
> * updates the documentation to reflect that 2.5.0 is required
> * enforces that the versions of protoc and the protobuf JAR are the same
> * adds the protoc version used to VersionInfo (sooner or later this will be 
> useful in a troubleshooting situation)
> [~vicaya] suggested making the version check for protoc lax (i.e. 2.5.*). 
> I thought about that while working on the patch, but it would introduce a 
> potential mismatch between protoc and the protobuf JAR.
> Still, if you want to use a different version of protoc/protobuf from the one 
> defined in the POM, you can use -Dprotobuf.version= to specify your 
> alternate version. But I would recommend against this: if you publish the 
> artifacts to a Maven repo, the fact that you used -Dprotobuf.version= is 
> lost and the version defined in the POM properties is used (IMO Maven 
> should use the effective POM on deploy, but it doesn't).



[jira] [Commented] (HADOOP-9381) Document dfs cp -f option

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740945#comment-13740945
 ] 

Hudson commented on HADOOP-9381:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1492 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1492/])
HADOOP-9381. Document dfs cp -f option. Contributed by Keegan Witt and Suresh 
Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514089)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml


> Document dfs cp -f option
> -
>
> Key: HADOOP-9381
> URL: https://issues.apache.org/jira/browse/HADOOP-9381
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Trivial
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9381.1.patch, HADOOP-9381.2.patch, 
> HADOOP-9381.patch, HADOOP-9381.patch
>
>
> dfs cp should document the -f (overwrite) option in the page displayed by -help. 
> Additionally, the HTML documentation page should also document this option, 
> and all the options should be formatted the same.



[jira] [Commented] (HADOOP-9875) TestDoAsEffectiveUser can fail on JDK 7

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740875#comment-13740875
 ] 

Hudson commented on HADOOP-9875:


SUCCESS: Integrated in Hadoop-Yarn-trunk #302 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/302/])
HADOOP-9875.  TestDoAsEffectiveUser can fail on JDK 7.  (Aaron T. Myers via 
Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514147)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java


> TestDoAsEffectiveUser can fail on JDK 7
> ---
>
> Key: HADOOP-9875
> URL: https://issues.apache.org/jira/browse/HADOOP-9875
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.1.0-beta
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HADOOP-9875.patch
>
>
> Another issue with the test method execution order changing between JDK 6 and 
> 7.



[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740865#comment-13740865
 ] 

Hudson commented on HADOOP-9652:


SUCCESS: Integrated in Hadoop-Yarn-trunk #302 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/302/])
HADOOP-9652.  RawLocalFs#getFileLinkStatus does not fill in the link owner and 
mode.  (Andrew Wang via Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514088)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HardLink.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Stat.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestStat.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestSymlinkLocalFS.java


> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
> Fix For: 2.3.0
>
> Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
> hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
> hadoop-9652-6.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink; instead it uses the owner and mode of the symlink target. If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for them.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> On some operating systems, symlinks can have a permission other than 0777. 
> We ought to expose this in RawLocalFileSystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.



[jira] [Commented] (HADOOP-9872) Improve protoc version handling and detection

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740872#comment-13740872
 ] 

Hudson commented on HADOOP-9872:


SUCCESS: Integrated in Hadoop-Yarn-trunk #302 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/302/])
HADOOP-9872. Improve protoc version handling and detection. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514068)
* /hadoop/common/trunk/BUILDING.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CLIMiniCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/README
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


> Improve protoc version handling and detection
> -
>
> Key: HADOOP-9872
> URL: https://issues.apache.org/jira/browse/HADOOP-9872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.1.0-beta
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9872.patch
>
>
> HADOOP-9845 bumped protoc from 2.4.1 to 2.5.0, but we ran into a few quirks:
> * 'protoc --version' in 2.4.1 exits with 1
> * 'protoc --version' in 2.5.0 exits with 0
> * if you have multiple protoc binaries in your environment, you have to put 
> the one you want to use in the PATH before building Hadoop
> * the build documentation and protoc requirements are outdated
> This patch:
> * handles the protoc version correctly, independently of the exit code
> * uses the HADOOP_PROTOC_PATH env var, if defined, as the protoc executable
> * picks protoc from the PATH if HADOOP_PROTOC_PATH is not defined
> * updates the documentation to reflect that 2.5.0 is required
> * enforces that the versions of protoc and the protobuf JAR are the same
> * adds the protoc version used to VersionInfo (sooner or later this will be 
> useful in a troubleshooting situation)
> [~vicaya] suggested making the version check for protoc lax (i.e. 2.5.*). 
> While working on the patch I thought about that, but it would introduce a 
> potential mismatch between protoc and the protobuf JAR.
> Still, if you want to use a different version of protoc/protobuf from the 
> one defined in the POM, you can use -Dprotobuf.version= to specify your 
> alternate version. I would recommend against this, because if you publish 
> the artifacts to a Maven repo, the fact that you used -Dprotobuf.version= 
> will be lost and the version defined in the POM properties will be used 
> (IMO Maven should use the effective POM on deploy, but it doesn't).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9381) Document dfs cp -f option

2013-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740873#comment-13740873
 ] 

Hudson commented on HADOOP-9381:


SUCCESS: Integrated in Hadoop-Yarn-trunk #302 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/302/])
HADOOP-9381. Document dfs cp -f option. Contributed by Keegan Witt and Suresh 
Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514089)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml


> Document dfs cp -f option
> -------------------------
>
> Key: HADOOP-9381
> URL: https://issues.apache.org/jira/browse/HADOOP-9381
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Trivial
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9381.1.patch, HADOOP-9381.2.patch, 
> HADOOP-9381.patch, HADOOP-9381.patch
>
>
> dfs cp should document the -f (overwrite) option in the page displayed by 
> -help. Additionally, the HTML documentation page should also document this 
> option, and all the options should be formatted the same.
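For reference, a hedged usage sketch of the option being documented. The paths are invented and a running HDFS is assumed; the snippet falls back to printing the intended form when the hadoop CLI is not on the PATH.

```shell
# Without -f, cp fails if the destination already exists; -f overwrites it.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -cp -f /user/alice/data.txt /user/alice/backup/data.txt
else
  # hadoop CLI not available here; show the intended invocation instead
  echo 'hadoop fs -cp -f <src> <dst>'
fi
```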



[jira] [Commented] (HADOOP-9729) The example code of org.apache.hadoop.util.Tool is incorrect

2013-08-15 Thread hellojinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740850#comment-13740850
 ] 

hellojinjie commented on HADOOP-9729:
-

I just modified the comment, so there is no need to add or modify tests.

> The example code of org.apache.hadoop.util.Tool is incorrect
> ------------------------------------------------------------
>
> Key: HADOOP-9729
> URL: https://issues.apache.org/jira/browse/HADOOP-9729
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 1.1.2
>Reporter: hellojinjie
> Attachments: HADOOP-9729.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> see http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/util/Tool.html
> function  public int run(String[] args) has no return value in the example 
> code
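As the report notes, the run method in the Javadoc example falls off the end without returning a value. A minimal corrected shape might look like the sketch below; the Tool interface here is a simplified local stand-in, not the real org.apache.hadoop.util.Tool, and MyApp is an invented name used only to show the fix.

```java
// Simplified stand-in for org.apache.hadoop.util.Tool (same run() signature).
interface Tool {
    int run(String[] args) throws Exception;
}

// Hypothetical implementation illustrating the missing return statement.
class MyApp implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // ... configuration and job submission would go here in the real example ...
        boolean succeeded = true; // placeholder for e.g. job.waitForCompletion(true)
        return succeeded ? 0 : 1; // this return is what the original example omitted
    }
}

public class ToolReturnSketch {
    public static void main(String[] args) throws Exception {
        System.out.println("exit code: " + new MyApp().run(args));
    }
}
```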



[jira] [Commented] (HADOOP-9729) The example code of org.apache.hadoop.util.Tool is incorrect

2013-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740841#comment-13740841
 ] 

Hadoop QA commented on HADOOP-9729:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598170/HADOOP-9729.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2986//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2986//console

This message is automatically generated.

> The example code of org.apache.hadoop.util.Tool is incorrect
> ------------------------------------------------------------
>
> Key: HADOOP-9729
> URL: https://issues.apache.org/jira/browse/HADOOP-9729
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 1.1.2
>Reporter: hellojinjie
> Attachments: HADOOP-9729.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> see http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/util/Tool.html
> function  public int run(String[] args) has no return value in the example 
> code



[jira] [Updated] (HADOOP-9729) The example code of org.apache.hadoop.util.Tool is incorrect

2013-08-15 Thread hellojinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hellojinjie updated HADOOP-9729:


Status: Patch Available  (was: Open)

> The example code of org.apache.hadoop.util.Tool is incorrect
> ------------------------------------------------------------
>
> Key: HADOOP-9729
> URL: https://issues.apache.org/jira/browse/HADOOP-9729
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 1.1.2
>Reporter: hellojinjie
> Attachments: HADOOP-9729.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> see http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/util/Tool.html
> function  public int run(String[] args) has no return value in the example 
> code



[jira] [Updated] (HADOOP-9729) The example code of org.apache.hadoop.util.Tool is incorrect

2013-08-15 Thread hellojinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hellojinjie updated HADOOP-9729:


Attachment: HADOOP-9729.patch

> The example code of org.apache.hadoop.util.Tool is incorrect
> ------------------------------------------------------------
>
> Key: HADOOP-9729
> URL: https://issues.apache.org/jira/browse/HADOOP-9729
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 1.1.2
>Reporter: hellojinjie
> Attachments: HADOOP-9729.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> see http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/util/Tool.html
> function  public int run(String[] args) has no return value in the example 
> code

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9382) Add dfs mv overwrite option

2013-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740773#comment-13740773
 ] 

Hadoop QA commented on HADOOP-9382:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598152/HADOOP-9382.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.cli.TestCLI

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2985//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2985//console

This message is automatically generated.

> Add dfs mv overwrite option
> ---------------------------
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Keegan Witt
>Assignee: Keegan Witt
>Priority: Minor
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in the dfs mv 
> command.
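A hedged sketch of the proposed usage. Note the -f flag for `fs -mv` is exactly what this issue proposes, so it may not exist in the release you are running; the paths are invented, and the snippet falls back to printing the intended form when the hadoop CLI is absent.

```shell
# Proposed behavior: -f would overwrite an existing destination instead of failing.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -mv -f /user/alice/staging/part-0 /user/alice/final/part-0
else
  # hadoop CLI not available here; show the proposed invocation instead
  echo 'hadoop fs -mv -f <src> <dst>'
fi
```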
