[jira] [Commented] (HADOOP-13800) Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh

2016-11-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650414#comment-15650414
 ] 

Akira Ajisaka commented on HADOOP-13800:


LGTM, +1.

> Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh
> 
>
> Key: HADOOP-13800
> URL: https://issues.apache.org/jira/browse/HADOOP-13800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-13800.001.patch
>
>
> The following example is misleading. The environment variable has no effect on 
> the HDFS audit logger.
> {code:title=hadoop-env.sh}
> # Default log level for file system audit messages.
> # Generally, this is specifically set in the namenode-specific
> # options line.
> # Java property: hdfs.audit.logger
> # export HADOOP_AUDIT_LOGGER=INFO,NullAppender
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650658#comment-15650658
 ] 

Steve Loughran commented on HADOOP-13590:
-

LGTM

+1

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch, HADOOP-13590.branch-2.01.patch
>
>
> The UGI has a background thread to renew the TGT. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014].
> If something goes wrong temporarily and results in an IOE, no further renewal is 
> attempted even after the problem clears, and the client will eventually fail to 
> authenticate. We should retry on a best-effort basis until the TGT expires, in 
> the hope that the error recovers before then.
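
A minimal, illustrative sketch of the retry idea described above; the class, the interface, and the fixed sleep interval are assumptions made for illustration, not the committed patch:

{code}
import java.io.IOException;

class RenewUntilTgtExpires {
  /** One renewal attempt; hypothetical stand-in for the UGI renewal logic. */
  interface Renewal {
    void renew() throws IOException;
  }

  /** Keep retrying a failed renewal until the TGT end time instead of exiting
      on the first IOException. */
  static void renewBestEffort(Renewal renewal, long tgtEndTimeMillis,
      long retryIntervalMillis) throws InterruptedException {
    while (System.currentTimeMillis() < tgtEndTimeMillis) {
      try {
        renewal.renew();
        return;                          // success: stop retrying
      } catch (IOException ioe) {
        // The error may be transient; wait and retry until the ticket itself expires.
        Thread.sleep(retryIntervalMillis);
      }
    }
    // TGT expired without a successful renewal; only now does the thread give up.
  }
}
{code}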



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13800) Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh

2016-11-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13800:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

> Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh
> 
>
> Key: HADOOP-13800
> URL: https://issues.apache.org/jira/browse/HADOOP-13800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13800.001.patch
>
>
> The following example is misleading. The environment variable has no effect on 
> the HDFS audit logger.
> {code:title=hadoop-env.sh}
> # Default log level for file system audit messages.
> # Generally, this is specifically set in the namenode-specific
> # options line.
> # Java property: hdfs.audit.logger
> # export HADOOP_AUDIT_LOGGER=INFO,NullAppender
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13800) Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh

2016-11-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650870#comment-15650870
 ] 

Akira Ajisaka commented on HADOOP-13800:


Committed this to trunk. Thanks [~linyiqun] for the contribution.

> Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh
> 
>
> Key: HADOOP-13800
> URL: https://issues.apache.org/jira/browse/HADOOP-13800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13800.001.patch
>
>
> The following example is misleading. The environment variable has no effect on 
> the HDFS audit logger.
> {code:title=hadoop-env.sh}
> # Default log level for file system audit messages.
> # Generally, this is specifically set in the namenode-specific
> # options line.
> # Java property: hdfs.audit.logger
> # export HADOOP_AUDIT_LOGGER=INFO,NullAppender
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13800) Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650910#comment-15650910
 ] 

Hudson commented on HADOOP-13800:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10799 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10799/])
HADOOP-13800. Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh. (aajisaka: 
rev c074880096bd41470a3358f6002f30b57a725375)
* (edit) hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh


> Remove unused HADOOP_AUDIT_LOGGER from hadoop-env.sh
> 
>
> Key: HADOOP-13800
> URL: https://issues.apache.org/jira/browse/HADOOP-13800
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13800.001.patch
>
>
> The following example is misleading. The environment variable has no effect on 
> the HDFS audit logger.
> {code:title=hadoop-env.sh}
> # Default log level for file system audit messages.
> # Generally, this is specifically set in the namenode-specific
> # options line.
> # Java property: hdfs.audit.logger
> # export HADOOP_AUDIT_LOGGER=INFO,NullAppender
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-11-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651134#comment-15651134
 ] 

Kihwal Lee commented on HADOOP-13742:
-

Sorry for the delay. I will review it today.

> Expose "NumOpenConnectionsPerUser" as a metric
> --
>
> Key: HADOOP-13742
> URL: https://issues.apache.org/jira/browse/HADOOP-13742
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13742-002.patch, HADOOP-13742-003.patch, 
> HADOOP-13742-004.patch, HADOOP-13742.patch
>
>
> Track per-user connections (how many connections each user holds) in a busy 
> cluster where the server handles a large number of connections.
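
A minimal sketch of the kind of bookkeeping this implies; the class and method names are hypothetical and this is not the attached patch, just an illustration of a per-user counter that a metric could be built from:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class PerUserConnectionCounter {
  private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

  void connectionOpened(String user) {
    counts.computeIfAbsent(user, u -> new AtomicInteger()).incrementAndGet();
  }

  void connectionClosed(String user) {
    AtomicInteger c = counts.get(user);
    if (c != null) {
      c.decrementAndGet();
    }
  }

  /** Per-user snapshot from which a "NumOpenConnectionsPerUser" metric could be built. */
  Map<String, AtomicInteger> snapshot() {
    return counts;
  }
}
{code}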



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13755) Purge superfluous/obsolete S3A Tests

2016-11-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13755:

Parent Issue: HADOOP-11694  (was: HADOOP-13204)

> Purge superfluous/obsolete S3A Tests
> 
>
> Key: HADOOP-13755
> URL: https://issues.apache.org/jira/browse/HADOOP-13755
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13614-branch-2-008.patch
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes and then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.
>  
> Yetus-friendly rework of HADOOP-13614: *Purge some superfluous/obsolete S3 FS 
> tests that are slowing test runs down*
> Things got confused there about patches vs. PRs; closing that one and making 
> this a patch-only JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13735) ITestS3AFileContextStatistics.testStatistics() failing

2016-11-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13735:

Parent Issue: HADOOP-11694  (was: HADOOP-13204)

> ITestS3AFileContextStatistics.testStatistics() failing
> --
>
> Key: HADOOP-13735
> URL: https://issues.apache.org/jira/browse/HADOOP-13735
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Pieter Reuse
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13735-branch-2-001.patch
>
>
> The test {{ITestS3AFileContextStatistics.testStatistics()}} seems to fail 
> pretty reliably these days...I'd assumed it was some race condition, but 
> maybe not. 
> Fixing this will probably entail adding more diagnostics to the base test 
> case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13590:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2 and branch-2.8.

Thanks a lot for the reviews [~ste...@apache.org], [~drankye] and 
[~andrew.wang]!

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch, HADOOP-13590.branch-2.01.patch
>
>
> The UGI has a background thread to renew the TGT. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014].
> If something goes wrong temporarily and results in an IOE, no further renewal is 
> attempted even after the problem clears, and the client will eventually fail to 
> authenticate. We should retry on a best-effort basis until the TGT expires, in 
> the hope that the error recovers before then.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-09 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13720:
---
Attachment: HADOOP-13720.007.patch

> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch, HADOOP-13720.007.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>       throws InvalidToken {
>     assert Thread.holdsLock(this);
>     DelegationTokenInformation info = getTokenInfo(identifier);
>     if (info == null) {
>       throw new InvalidToken("token (" + identifier.toString()
>           + ") can't be found in cache");
>     }
>     if (info.getRenewDate() < Time.now()) {
>       throw new InvalidToken("token (" + identifier.toString() + ") is expired");
>     }
>     return info;
>   }
> {code}
> When a token is expired, we throw the above exception without printing 
> {{info.getRenewDate()}} in the message. If we printed it, we could tell how long 
> the token has gone without being renewed, which would help us investigate 
> certain issues.
> This jira is a request to add that information to the message.
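
A small sketch of the requested change; the exact wording of the message is an assumption here, not necessarily what the attached patches do:

{code}
    // Include the renew date so the message shows how long ago the token
    // should have been renewed.
    if (info.getRenewDate() < Time.now()) {
      throw new InvalidToken("token (" + identifier.toString()
          + ") is expired, current time: " + Time.now()
          + " expected renewal time: " + info.getRenewDate());
    }
{code}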



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651490#comment-15651490
 ] 

Yongjun Zhang commented on HADOOP-13720:


Thanks [~xiaochen] for catching the one I missed! Uploaded a new rev.



> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13720.001.patch, HADOOP-13720.002.patch, 
> HADOOP-13720.003.patch, HADOOP-13720.004.patch, HADOOP-13720.005.patch, 
> HADOOP-13720.006.patch, HADOOP-13720.007.patch
>
>
> Currently AbstractDelegationTokenSecretManager$checkToken does
> {code}
>   protected DelegationTokenInformation checkToken(TokenIdent identifier)
>       throws InvalidToken {
>     assert Thread.holdsLock(this);
>     DelegationTokenInformation info = getTokenInfo(identifier);
>     if (info == null) {
>       throw new InvalidToken("token (" + identifier.toString()
>           + ") can't be found in cache");
>     }
>     if (info.getRenewDate() < Time.now()) {
>       throw new InvalidToken("token (" + identifier.toString() + ") is expired");
>     }
>     return info;
>   }
> {code}
> When a token is expired, we throw the above exception without printing 
> {{info.getRenewDate()}} in the message. If we printed it, we could tell how long 
> the token has gone without being renewed, which would help us investigate 
> certain issues.
> This jira is a request to add that information to the message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651500#comment-15651500
 ] 

Hudson commented on HADOOP-13590:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10801 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10801/])
HADOOP-13590. Retry until TGT expires even if the UGI renewal thread (xiao: rev 
367c3d41217728c2e61252c5a5235e5bc1f9822f)
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithMiniKdc.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch, HADOOP-13590.branch-2.01.patch
>
>
> The UGI has a background thread to renew the TGT. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014].
> If something goes wrong temporarily and results in an IOE, no further renewal is 
> attempted even after the problem clears, and the client will eventually fail to 
> authenticate. We should retry on a best-effort basis until the TGT expires, in 
> the hope that the error recovers before then.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13346) DelegationTokenAuthenticationHandler writes via closed writer

2016-11-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13346:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~gchanan] and [~hgadre] for the contribution!

> DelegationTokenAuthenticationHandler writes via closed writer
> -
>
> Key: HADOOP-13346
> URL: https://issues.apache.org/jira/browse/HADOOP-13346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Gregory Chanan
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13346-001.patch, HADOOP-13346.patch
>
>
> By default, jackson's ObjectMapper closes the writer after writing, so in the 
> following code
> {code}
> ObjectMapper jsonMapper = new ObjectMapper();
> jsonMapper.writeValue(writer, map);
> writer.write(ENTER);
> {code}
> (https://github.com/apache/hadoop/blob/8a9d293dd60f6d51e1574e412d40746ba8175fe1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L280-L282)
> writer.write actually writes to a closed stream. This doesn't seem to cause 
> a problem with the version of Jetty that Hadoop uses (it simply ignores the 
> close), but it causes problems on later versions of Jetty -- I hit this on 
> Jetty 8 while implementing SOLR-9200.
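
One way to avoid the write-after-close, sketched below on the assumption that Jackson 2.x is in use; the committed patch may take a different approach:

{code}
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;

// Tell the ObjectMapper not to close the target writer when writeValue() finishes,
// so the subsequent writer.write(ENTER) goes to a stream that is still open.
ObjectMapper jsonMapper = new ObjectMapper();
jsonMapper.configure(JsonGenerator.Feature.AUTO_CLOSE_TARGET, false);
jsonMapper.writeValue(writer, map);
writer.write(ENTER);
{code}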



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-09 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651575#comment-15651575
 ] 

Sean Busbey commented on HADOOP-11804:
--

11035 is from another aborted build before the timeout change.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, 
> HADOOP-11804.6.patch, HADOOP-11804.7.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13346) DelegationTokenAuthenticationHandler writes via closed writer

2016-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651595#comment-15651595
 ] 

Hudson commented on HADOOP-13346:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10803 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10803/])
HADOOP-13346. DelegationTokenAuthenticationHandler writes via closed (xiao: rev 
822ae88f7da638e15a25747f6965caee8198aca6)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java


> DelegationTokenAuthenticationHandler writes via closed writer
> -
>
> Key: HADOOP-13346
> URL: https://issues.apache.org/jira/browse/HADOOP-13346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Gregory Chanan
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13346-001.patch, HADOOP-13346.patch
>
>
> By default, jackson's ObjectMapper closes the writer after writing, so in the 
> following code
> {code}
> ObjectMapper jsonMapper = new ObjectMapper();
> jsonMapper.writeValue(writer, map);
> writer.write(ENTER);
> {code}
> (https://github.com/apache/hadoop/blob/8a9d293dd60f6d51e1574e412d40746ba8175fe1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L280-L282)
> writer.write actually writes to a closed stream. This doesn't seem to cause 
> a problem with the version of Jetty that Hadoop uses (it simply ignores the 
> close), but it causes problems on later versions of Jetty -- I hit this on 
> Jetty 8 while implementing SOLR-9200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13720) Add more info to the msgs printed in AbstractDelegationTokenSecretManager for better supportability

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651629#comment-15651629
 ] 

Hadoop QA commented on HADOOP-13720:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 52 unchanged - 24 fixed = 52 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13720 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838199/HADOOP-13720.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 32f1520e245f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c619e9b |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11042/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11042/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add more info to the msgs printed in AbstractDelegationTokenSecretManager for 
> better supportability
> ---
>
> Key: HADOOP-13720
> URL: https://issues.apache.org/jira/browse/HADOOP-13720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang

[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651646#comment-15651646
 ] 

Yongjun Zhang commented on HADOOP-12718:


I saw quite a lot of places that refer to the string "Permission denied", which 
largely mean the same thing as in the case here. We can gradually replace those 
other references to "Permission denied" with the constant defined in this patch 
where appropriate.

About
{quote}
Please note test testPutSrcFileNoPerm should not use the constant
{quote}
I think we could actually use the constant in that test now, but I'm OK with 
deferring it until we do the replacement in the source code that produces the 
message.

Thanks.


> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, HADOOP-12718.008.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}
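
A minimal sketch of the kind of up-front check that would produce the clearer message; the class and method names are hypothetical, and this is not the attached patch:

{code}
import java.io.File;
import java.nio.file.AccessDeniedException;

class LocalSourcePermissionCheck {
  /** Fail with "Permission denied" instead of "No such file or directory"
      when the local source exists but cannot be read. */
  static void checkReadable(File src) throws AccessDeniedException {
    if (src.exists() && !src.canRead()) {
      throw new AccessDeniedException(src.getPath() + " (Permission denied)");
    }
  }
}
{code}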



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13794) JSON.org license is now CatX

2016-11-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651716#comment-15651716
 ] 

Steve Loughran commented on HADOOP-13794:
-

too bad the license didn't say "be fluffy"; easier to argue compliance

> JSON.org license is now CatX
> 
>
> Key: HADOOP-13794
> URL: https://issues.apache.org/jira/browse/HADOOP-13794
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>Reporter: Sean Busbey
>Priority: Blocker
>
> per [update resolved legal|http://www.apache.org/legal/resolved.html#json]:
> {quote}
> CAN APACHE PRODUCTS INCLUDE WORKS LICENSED UNDER THE JSON LICENSE?
> No. As of 2016-11-03 this has been moved to the 'Category X' license list. 
> Prior to this, use of the JSON Java library was allowed. See Debian's page 
> for a list of alternatives.
> {quote}
> We have a test-time transitive dependency on the {{org.json:json}} artifact 
> in trunk and branch-2. AFAICT, this test time dependency doesn't get exposed 
> to downstream at all (I checked assemblies and test-jar artifacts we publish 
> to maven), so it can be removed or kept at our leisure. Keeping it risks it 
> being promoted out of test scope by maven without us noticing. We might be 
> able to add an enforcer rule to check for this.
> We also distribute it in bundled form through our use of the AWS Java SDK 
> artifacts in trunk and branch-2. Looking at the github project, [their 
> dependency on JSON.org was removed in 
> 1.11|https://github.com/aws/aws-sdk-java/pull/417], so if we upgrade to 
> 1.11.0+ we should be good to go. (this might be hard in branch-2.6 and 
> branch-2.7 where we're on 1.7.4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13427) Eliminate needless uses of FileSystem#{exists(), isFile(), isDirectory()}

2016-11-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13427:
---
Attachment: HADOOP-13427.005.patch

old:
{code}
if (targetFS.exists(targetFinalPath) && targetFS.isFile(targetFinalPath)) {
  overWrite = true; // When target is an existing file, overwrite it.
{code}
new:
{code}
try {
  overWrite = targetFS.getFileStatus(targetFinalPath).isFile();
} catch (FileNotFoundException ignored) {
}
{code}
If {{overWrite}} was already true before this point, the new code would reset it, 
which is not expected. Preserve the earlier value instead:

{code}
try {
  overWrite = overWrite || targetFS.getFileStatus(targetFinalPath).isFile();
} catch (FileNotFoundException ignored) {
}
{code}

This should fix the failing unit test {{hadoop.tools.TestIntegration}}; see the v5 
patch.

> Eliminate needless uses of FileSystem#{exists(), isFile(), isDirectory()}
> -
>
> Key: HADOOP-13427
> URL: https://issues.apache.org/jira/browse/HADOOP-13427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13427-001.patch, HADOOP-13427-002.patch, 
> HADOOP-13427.003.patch, HADOOP-13427.004.patch, HADOOP-13427.005.patch
>
>
> We're cleaning up Hive and Spark's use of FileSystem.exists, because we often 
> see exists+open or exists+delete sequences where the exists probe is needless. 
> Against object stores, such needless probes are expensive.
> Hadoop can set an example here by stripping them out. It will also show where 
> there are opportunities to optimise things better and/or improve reporting.
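
A small sketch of the pattern being removed versus the preferred form; the helper is hypothetical, but it shows why the exists probe costs an extra round trip against an object store:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class OpenWithoutExistsProbe {
  /** exists()+open() issues two requests; opening directly and catching
      FileNotFoundException does the same job with one. */
  static FSDataInputStream openIfPresent(FileSystem fs, Path path) throws IOException {
    try {
      return fs.open(path);
    } catch (FileNotFoundException e) {
      return null;   // caller handles the absent file
    }
  }
}
{code}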



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13802) Make generic options help more consistent, and aligned

2016-11-09 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651744#comment-15651744
 ] 

Grant Sohn commented on HADOOP-13802:
-

[~liuml07], thanks for your help getting this in.

> Make generic options help more consistent, and aligned
> --
>
> Key: HADOOP-13802
> URL: https://issues.apache.org/jira/browse/HADOOP-13802
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13802.1.patch, HADOOP-13802.2.patch, 
> HADOOP-13802.3.patch, HADOOP-13802.4.patch
>
>
> The generic options have always been this:
> {noformat}
> Generic options supported are
> -conf  specify an application configuration file
> -D use value for given property
> -fs   specify a namenode
> -jt specify a ResourceManager
> -files specify comma separated files to be 
> copied to the map reduce cluster
> -libjars specify comma separated jar files 
> to include in the classpath.
> -archives specify comma separated 
> archives to be unarchived on the compute machines.
> The general command line syntax is
> bin/hadoop command [genericOptions] [commandOptions]
> {noformat}
> However, almost all of the hadoop command help follows an unwritten style 
> similar to man page descriptions.
> This is the proposed improvement.
> {noformat}
> Generic options supported are:
> -conf  specify an application configuration file
> -D define a value for given property
> -fs   specify a namenode
> -jt specify a job tracker
> -files  specify a comma-separated list of files to be 
> copied to the map reduce cluster
> -libjars specify a comma-separated list of jar files to 
> be included in the classpath
> -archivesspecify a comma-separated list of archives to 
> be unarchived on the compute machines
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13050) Upgrade to AWS SDK 10.11+

2016-11-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13050:
---
Assignee: Steve Loughran

> Upgrade to AWS SDK 10.11+
> -
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-branch-2-003.patch, 
> HADOOP-13050-branch-2.002.patch, HADOOP-13050-branch-2.003.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6, shipping in Hadoop 2.7+, doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda Time that the AWS SDK uses.
> Fix: update the SDK. That, though, implies updating httpcomponents: 
> HADOOP-12767.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false

2016-11-09 Thread Andres Perez (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651837#comment-15651837
 ] 

Andres Perez commented on HADOOP-13781:
---

[~templedf] [~arpitagarwal] can you please review the patch and provide any 
possible feedback?

> ZKFailoverController#initZK should use the ActiveStanbyElector constructor 
> with failFast as false
> -
>
> Key: HADOOP-13781
> URL: https://issues.apache.org/jira/browse/HADOOP-13781
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Andres Perez
>Priority: Minor
> Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, 
> HADOOP-13781.4.patch, HADOOP-13781.5.patch, HADOOP-13781.6.patch, 
> HADOOP-13781.patch
>
>
> YARN-4243 introduced logic that retries establishing the connection when 
> initializing the {{ActiveStandbyElector}}, adding the {{failFast}} parameter.
> {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} 
> set to false, so that the ZKFC waits longer for the ZooKeeper server to be in a 
> ready state when first initializing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-09 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651853#comment-15651853
 ] 

John Zhuge commented on HADOOP-12718:
-

In testPutSrcFileNoPerm, JDK FileInputStream#open raises an exception containing 
the "Permission denied" message. That's why I didn't use the constant.

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, HADOOP-12718.008.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-09 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651853#comment-15651853
 ] 

John Zhuge edited comment on HADOOP-12718 at 11/9/16 7:42 PM:
--

In testPutSrcFileNoPerm, JDK FileInputStream#open raises an exception containing 
the "Permission denied" message. That's why I didn't use the constant.


was (Author: jzhuge):
In testPutSrcFileNoPerm, JDK FileInputStream#open raises the exception 
containing "Permission denied" message . That's why I didn't use the constant.

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, HADOOP-12718.008.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-09 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649513#comment-15649513
 ] 

John Zhuge edited comment on HADOOP-12718 at 11/9/16 7:43 PM:
--

Patch 008:
* Use {{AccessDeniedException}} instead of {{AccessControlException}}
* Add constant {{FSExceptionMessages.PERMISSION_DENIED}}
* Please note {{testPutSrcFileNoPerm}} should not use the constant because JDK 
FileInputStream#open raises an FNFE containing the "Permission denied" message.


was (Author: jzhuge):
Patch 008:
* Use {{AccessDeniedException}} instead of {{AccessControlException}}
* Add constant {{FSExceptionMessages.PERMISSION_DENIED}}
* Please note test {{testPutSrcFileNoPerm}} should not use the constant

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, HADOOP-12718.008.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-11-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651978#comment-15651978
 ] 

Yongjun Zhang commented on HADOOP-12718:


Thanks [~jzhuge]. I thought the exception came from one of the places in the 
Hadoop code base that hard-code the string "Permission denied" (I saw quite a 
lot); if it comes from the JDK, we have no control over it. So I'm +1 on the 
current patch.

Since we now introduce the constant, it would be nice to use it across the Hadoop 
code base by replacing the hard-coded strings with references to the constant, 
but I don't think we need to do that in this jira.

Any further comments, [~ste...@apache.org]? If not, I will commit by tomorrow. 
Thanks.





> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, HADOOP-12718.007.patch, HADOOP-12718.008.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13794) JSON.org license is now CatX

2016-11-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651985#comment-15651985
 ] 

Steve Loughran commented on HADOOP-13794:
-

Feel like assigning the homework of dealing with it to [~tdunning] though: "you 
broke it, you fix it".

> JSON.org license is now CatX
> 
>
> Key: HADOOP-13794
> URL: https://issues.apache.org/jira/browse/HADOOP-13794
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>Reporter: Sean Busbey
>Priority: Blocker
>
> per [update resolved legal|http://www.apache.org/legal/resolved.html#json]:
> {quote}
> CAN APACHE PRODUCTS INCLUDE WORKS LICENSED UNDER THE JSON LICENSE?
> No. As of 2016-11-03 this has been moved to the 'Category X' license list. 
> Prior to this, use of the JSON Java library was allowed. See Debian's page 
> for a list of alternatives.
> {quote}
> We have a test-time transitive dependency on the {{org.json:json}} artifact 
> in trunk and branch-2. AFAICT, this test time dependency doesn't get exposed 
> to downstream at all (I checked assemblies and test-jar artifacts we publish 
> to maven), so it can be removed or kept at our leisure. Keeping it risks it 
> being promoted out of test scope by maven without us noticing. We might be 
> able to add an enforcer rule to check for this.
> We also distribute it in bundled form through our use of the AWS Java SDK 
> artifacts in trunk and branch-2. Looking at the github project, [their 
> dependency on JSON.org was removed in 
> 1.11|https://github.com/aws/aws-sdk-java/pull/417], so if we upgrade to 
> 1.11.0+ we should be good to go. (this might be hard in branch-2.6 and 
> branch-2.7 where we're on 1.7.4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13427) Eliminate needless uses of FileSystem#{exists(), isFile(), isDirectory()}

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652259#comment-15652259
 ] 

Hadoop QA commented on HADOOP-13427:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  7m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 54s{color} | {color:orange} root: The patch generated 8 new + 870 unchanged 
- 10 fixed = 878 total (was 880) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  8m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
41s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
59s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-server-sharedcachemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
23s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-mapreduce-examples in t

[jira] [Updated] (HADOOP-13427) Eliminate needless uses of FileSystem#{exists(), isFile(), isDirectory()}

2016-11-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13427:
---
Attachment: HADOOP-13427.006.patch

The v6 patch fixes the checkstyle and findbugs warnings.

> Eliminate needless uses of FileSystem#{exists(), isFile(), isDirectory()}
> -
>
> Key: HADOOP-13427
> URL: https://issues.apache.org/jira/browse/HADOOP-13427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13427-001.patch, HADOOP-13427-002.patch, 
> HADOOP-13427.003.patch, HADOOP-13427.004.patch, HADOOP-13427.005.patch, 
> HADOOP-13427.006.patch
>
>
> We're cleaning up Hive and Spark's use of FileSystem.exists, because we 
> often see exists+open and exists+delete sequences where the exists probe is 
> needless. Against object stores, that needless probe is also expensive.
> Hadoop can set an example here by stripping these out, as sketched below. It 
> will also show where there are opportunities to optimise things better 
> and/or improve reporting.
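
A minimal sketch of the cleanup being described, assuming a generic 
{{FileSystem}} instance and a {{Path}}; the helper class and method names are 
illustrative, only the FileSystem API calls ({{exists()}}, {{open()}}, 
{{delete()}}) are the real ones:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExistsCleanupSketch {

  // Needless pattern: exists() + open() issues two metadata probes.
  static FSDataInputStream openOld(FileSystem fs, Path path) throws IOException {
    if (fs.exists(path)) {
      return fs.open(path);
    }
    return null;
  }

  // Cleaner: open directly; a missing file surfaces as FileNotFoundException.
  static FSDataInputStream openNew(FileSystem fs, Path path) throws IOException {
    try {
      return fs.open(path);
    } catch (FileNotFoundException e) {
      return null;
    }
  }

  // Needless pattern: exists() + delete() also probes twice.
  static void deleteOld(FileSystem fs, Path path) throws IOException {
    if (fs.exists(path)) {
      fs.delete(path, true);
    }
  }

  // Cleaner: delete() already returns false when nothing was deleted.
  static boolean deleteNew(FileSystem fs, Path path) throws IOException {
    return fs.delete(path, true);
  }
}
{code}

Against an object store such as S3A, each {{exists()}} call is an extra 
metadata round trip, which is why the redundant probe is disproportionately 
costly there.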



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-11-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652339#comment-15652339
 ] 

Alejandro Abdelnur commented on HADOOP-13558:
-

[~zhz], I'm still running into issues with HDFS/KMS because of this. The issue 
is that {{UserGroupInformation.getCurrentUser()}} uses the old constructor of 
{{UserGroupInformation}}, which marks the UGI as if it had a keytab. And the 
{{KMSClientProvider}} uses the {{UserGroupInformation.getCurrentUser()}} method 
to get the UGI.

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> --
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13558.01.patch, HADOOP-13558.02.patch, 
> HADOOP-13558.branch-2.7.patch
>
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have 
> {{isKeytab}} set to TRUE.
> It sets up the UGI instance as if it had a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file; having {{isKeytab}} set to TRUE while 
> {{keytabFile}} is NULL is what triggers the {{IOException}} in the method 
> {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the Subject has a keytab it is up to the UGI to do the 
> relogin using the keytab. This is incorrect if the UGI was created using the 
> {{UGI.loginUserFromSubject(Subject)}} method. In that case, the owner of the 
> Subject is not the UGI but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.
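
A condensed, illustrative model of the interaction described above. Only the 
field and method names mirror the real {{UserGroupInformation}} ones; the 
class itself is a sketch, not the actual Hadoop source:

{code}
// Illustrative only: a condensed model of the logic described above,
// not the real org.apache.hadoop.security.UserGroupInformation code.
import java.io.IOException;

public class UgiKeytabSketch {
  private final boolean isKeytab;   // derived from the Subject's credentials
  private final String keytabFile;  // never set on the Subject-based path

  // Mirrors the Subject-based constructor: the Subject carries a keytab,
  // so isKeytab becomes true, but no keytab file is recorded.
  UgiKeytabSketch(boolean subjectHasKerberosKeyTab) {
    this.isKeytab = subjectHasKerberosKeyTab;
    this.keytabFile = null;
  }

  // Mirrors checkTGTAndReloginFromKeytab(): isKeytab == true routes the
  // call into the keytab relogin path.
  void checkTGTAndReloginFromKeytab() throws IOException {
    if (isKeytab) {
      reloginFromKeytab();
    }
  }

  // Mirrors reloginFromKeytab(): with keytabFile == null the precondition
  // fails, producing the reported IOException.
  void reloginFromKeytab() throws IOException {
    if (keytabFile == null) {
      throw new IOException("loginUserFromKeyTab must be done first");
    }
    // ... an actual relogin from the keytab would happen here ...
  }

  public static void main(String[] args) {
    try {
      // A UGI built from a Subject that carries a keytab hits the broken state.
      new UgiKeytabSketch(true).checkTGTAndReloginFromKeytab();
    } catch (IOException e) {
      System.out.println("Renewal fails with: " + e.getMessage());
    }
  }
}
{code}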



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-11-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652339#comment-15652339
 ] 

Alejandro Abdelnur edited comment on HADOOP-13558 at 11/9/16 11:15 PM:
---

[~zhz], I'm still running into issues with HDFS/KMS because of this. The issue 
seems to be that {{UserGroupInformation.getCurrentUser()}} uses the old 
constructor of {{UserGroupInformation}}, which marks the UGI as if it had a 
keytab. And the {{KMSClientProvider}} uses the 
{{UserGroupInformation.getCurrentUser()}} method to get the UGI.

I'll dig into the exact issue and open a follow-up JIRA.


was (Author: tucu00):
[~zhz], I'm still running into issues with HDFS/KMS because of this. The issue 
is that {{UserGroupInformation.getCurrentUser()}} uses the old constructor of 
{{UserGroupInformation}}, which marks the UGI as if it had a keytab. And the 
{{KMSClientProvider}} uses the {{UserGroupInformation.getCurrentUser()}} method 
to get the UGI.

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> --
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13558.01.patch, HADOOP-13558.02.patch, 
> HADOOP-13558.branch-2.7.patch
>
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have 
> {{isKeytab}} set to TRUE.
> It sets up the UGI instance as if it had a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file; having {{isKeytab}} set to TRUE while 
> {{keytabFile}} is NULL is what triggers the {{IOException}} in the method 
> {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the Subject has a keytab it is up to the UGI to do the 
> relogin using the keytab. This is incorrect if the UGI was created using the 
> {{UGI.loginUserFromSubject(Subject)}} method. In that case, the owner of the 
> Subject is not the UGI but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-11-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13687:
---
Attachment: HADOOP-13687-trunk.006.patch

> Provide a unified dependency artifact that transitively includes the cloud 
> storage modules shipped with Hadoop.
> ---
>
> Key: HADOOP-13687
> URL: https://issues.apache.org/jira/browse/HADOOP-13687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13687-branch-2.001.patch, 
> HADOOP-13687-branch-2.002.patch, HADOOP-13687-branch-2.003.patch, 
> HADOOP-13687-trunk.001.patch, HADOOP-13687-trunk.002.patch, 
> HADOOP-13687-trunk.003.patch, HADOOP-13687-trunk.004.patch, 
> HADOOP-13687-trunk.005.patch, HADOOP-13687-trunk.006.patch, 
> HADOOP-13687-trunk.006.patch
>
>
> Currently, downstream projects that want to integrate with different 
> Hadoop-compatible file systems like WASB and S3A need to list dependencies on 
> each one.  This creates an ongoing maintenance burden for those projects, 
> because they need to update their build whenever a new Hadoop-compatible file 
> system is introduced.  This issue proposes adding a new artifact that 
> transitively includes all Hadoop-compatible file systems.  Similar to 
> hadoop-client, this new artifact will consist of just a pom.xml listing the 
> individual dependencies.  Downstream users can depend on this artifact to 
> sweep in everything, and picking up a new file system in a future version 
> will be just a matter of updating the Hadoop dependency version.
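
As a rough illustration of the intent, assuming the aggregator is published as 
{{hadoop-cloud-storage}} (the module name in the QA run below), the usual 
connector artifact ids, and a {{hadoop.version}} property as a placeholder, a 
downstream pom could shrink from per-connector entries to a single dependency:

{code}
<!-- Before: each Hadoop-compatible file system listed individually. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>${hadoop.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-azure</artifactId>
  <version>${hadoop.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-openstack</artifactId>
  <version>${hadoop.version}</version>
</dependency>

<!-- After: one aggregator artifact that pulls the cloud connectors in
     transitively, so new connectors arrive with a Hadoop version bump. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-cloud-storage</artifactId>
  <version>${hadoop.version}</version>
</dependency>
{code}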



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-11-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652521#comment-15652521
 ] 

Xiao Chen commented on HADOOP-13558:


Thanks [~tucu00] for the heads up.

It may be unrelated, but KMSCP's UGI logic also had a recent fix in 
HADOOP-13749 - not sure if that will help here. It sounds like more than 
that... Will follow up in the new jira to figure it out.

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> --
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13558.01.patch, HADOOP-13558.02.patch, 
> HADOOP-13558.branch-2.7.patch
>
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have 
> {{isKeytab}} set to TRUE.
> It sets up the UGI instance as if it had a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file; having {{isKeytab}} set to TRUE while 
> {{keytabFile}} is NULL is what triggers the {{IOException}} in the method 
> {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the Subject has a keytab it is up to the UGI to do the 
> relogin using the keytab. This is incorrect if the UGI was created using the 
> {{UGI.loginUserFromSubject(Subject)}} method. In that case, the owner of the 
> Subject is not the UGI but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13687) Provide a unified dependency artifact that transitively includes the cloud storage modules shipped with Hadoop.

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652622#comment-15652622
 ] 

Hadoop QA commented on HADOOP-13687:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-tools_hadoop-openstack generated 22 new + 0 
unchanged - 0 fixed = 22 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-cloud-storage-project in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-cloud-storage in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | {color:red} The patch generated 9 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | HADOOP-13687 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838263/HADOOP-13687-trunk.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux cddf1384aa38 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / de3a5f8 |
| Default Java | 1.8.0_101 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11045/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-openstack.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11045/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11045/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-project hadoop-tools/hadoop-openstack 
hadoop-cloud-storage-project ha

[jira] [Commented] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-11-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652701#comment-15652701
 ] 

Brahma Reddy Battula commented on HADOOP-13742:
---

hmm..Nope.. thanks.

> Expose "NumOpenConnectionsPerUser" as a metric
> --
>
> Key: HADOOP-13742
> URL: https://issues.apache.org/jira/browse/HADOOP-13742
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13742-002.patch, HADOOP-13742-003.patch, 
> HADOOP-13742-004.patch, HADOOP-13742.patch
>
>
> To track user-level connections (how many connections each user holds) on a 
> busy cluster where there are many connections to the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13427) Eliminate needless uses of FileSystem#{exists(), isFile(), isDirectory()}

2016-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652820#comment-15652820
 ] 

Hadoop QA commented on HADOOP-13427:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-applicationhistoryservice in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-server-sharedcachemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-mapreduce-client-core in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-mapreduce-client-hs in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-mapreduce-examples in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-archives in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-archive-logs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-rumen in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-extras in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-openstack in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 1 new + 867 unchanged 
- 10 fixed = 868 total (was 877) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {