[jira] [Commented] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696593#comment-14696593
 ] 

Hudson commented on HADOOP-10786:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #8300 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8300/])
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. 
(vinayakumarb: rev 24a11e39960696d75e58df912ec6aa7283be194d)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Fix UGI#reloginFromKeytab on Java 8
> ---
>
> Key: HADOOP-10786
> URL: https://issues.apache.org/jira/browse/HADOOP-10786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Tobi Vollebregt
>Assignee: Stephen Chu
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-10786.2.patch, HADOOP-10786.3.patch, 
> HADOOP-10786.3.patch, HADOOP-10786.4.patch, HADOOP-10786.5.patch, 
> HADOOP-10786.patch
>
>
> Krb5LoginModule changed subtly in Java 8: in particular, if useKeyTab and 
> storeKey are specified, then only a KeyTab object is added to the Subject's 
> private credentials, whereas in Java <= 7 both a KeyTab and some number of 
> KerberosKey objects were added.
> The UGI constructor checks whether a keytab was used to log in by looking 
> for KerberosKey objects in the Subject's private credentials: if any are 
> present, isKeyTab is set to true; otherwise it is set to false.
> Thus, on Java 8 isKeyTab is always false under the current UGI 
> implementation, which makes UGI#reloginFromKeytab fail silently.
> The attached patch checks for a KeyTab object on the Subject instead of a 
> KerberosKey object. This fixes relogins from Kerberos keytabs on Oracle 
> Java 8, and works on Oracle Java 7 as well.
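The check described above can be sketched in a few lines (an illustrative sketch, not the committed patch; the class and method names are hypothetical, and an empty Subject stands in for a real Kerberos login):

```java
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosKey;
import javax.security.auth.kerberos.KeyTab;

public class KeytabCheck {
    // Pre-patch behavior: look for KerberosKey objects. A Java 8 keytab
    // login stores no KerberosKeys in the Subject, so this is always
    // false there.
    static boolean isFromKeytabOld(Subject subject) {
        return !subject.getPrivateCredentials(KerberosKey.class).isEmpty();
    }

    // Patched behavior: look for the KeyTab object, which both Java 7 and
    // Java 8 keytab logins place in the Subject's private credentials.
    static boolean isFromKeytabNew(Subject subject) {
        return !subject.getPrivateCredentials(KeyTab.class).isEmpty();
    }
}
```

On a Subject produced by a real keytab login, isFromKeytabNew would be true on both runtimes while isFromKeytabOld would be false on Java 8.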



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2015-08-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696592#comment-14696592
 ] 

Vinayakumar B commented on HADOOP-10786:


Cherry-picked to 2.6.1

> Fix UGI#reloginFromKeytab on Java 8
> ---
>
> Key: HADOOP-10786
> URL: https://issues.apache.org/jira/browse/HADOOP-10786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Tobi Vollebregt
>Assignee: Stephen Chu
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-10786.2.patch, HADOOP-10786.3.patch, 
> HADOOP-10786.3.patch, HADOOP-10786.4.patch, HADOOP-10786.5.patch, 
> HADOOP-10786.patch
>
>





[jira] [Updated] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2015-08-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-10786:
---
Issue Type: Bug  (was: Improvement)

> Fix UGI#reloginFromKeytab on Java 8
> ---
>
> Key: HADOOP-10786
> URL: https://issues.apache.org/jira/browse/HADOOP-10786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Tobi Vollebregt
>Assignee: Stephen Chu
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-10786.2.patch, HADOOP-10786.3.patch, 
> HADOOP-10786.3.patch, HADOOP-10786.4.patch, HADOOP-10786.5.patch, 
> HADOOP-10786.patch
>
>





[jira] [Updated] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2015-08-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-10786:
---
Labels:   (was: 2.6.1-candidate)

> Fix UGI#reloginFromKeytab on Java 8
> ---
>
> Key: HADOOP-10786
> URL: https://issues.apache.org/jira/browse/HADOOP-10786
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Tobi Vollebregt
>Assignee: Stephen Chu
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-10786.2.patch, HADOOP-10786.3.patch, 
> HADOOP-10786.3.patch, HADOOP-10786.4.patch, HADOOP-10786.5.patch, 
> HADOOP-10786.patch
>
>





[jira] [Updated] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2015-08-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-10786:
---
Fix Version/s: 2.6.1

> Fix UGI#reloginFromKeytab on Java 8
> ---
>
> Key: HADOOP-10786
> URL: https://issues.apache.org/jira/browse/HADOOP-10786
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Tobi Vollebregt
>Assignee: Stephen Chu
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-10786.2.patch, HADOOP-10786.3.patch, 
> HADOOP-10786.3.patch, HADOOP-10786.4.patch, HADOOP-10786.5.patch, 
> HADOOP-10786.patch
>
>





[jira] [Commented] (HADOOP-11901) BytesWritable supports only up to ~700MB (instead of 2G) due to integer overflow.

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696572#comment-14696572
 ] 

Hadoop QA commented on HADOOP-11901:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | patch |   0m  1s | The patch file was not named 
according to hadoop's naming conventions. Please see 
https://wiki.apache.org/hadoop/HowToContribute for instructions. |
| {color:blue}0{color} | pre-patch |  18m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 40s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 12s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 25s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m  0s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 59s | Tests failed in 
hadoop-common. |
| | |  66m  6s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.net.TestNetUtils |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12750455/HADOOP-11901%20%283%29.diff
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6b1cefc |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7467/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7467/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7467/console |


This message was automatically generated.

> BytesWritable supports only up to ~700MB (instead of 2G) due to integer 
> overflow.
> -
>
> Key: HADOOP-11901
> URL: https://issues.apache.org/jira/browse/HADOOP-11901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Reynold Xin
>Assignee: Reynold Xin
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11901 (3).diff, HADOOP-11901.diff
>
>
> BytesWritable.setSize increases the buffer size by a factor of 1.5 each 
> time ( * 3 / 2). This is unsafe, since it restricts the maximum size to 
> ~700MB: beyond that, 700MB * 3 > 2GB overflows a signed 32-bit integer.
> I didn't write a test case for this, because triggering it requires 
> allocating around 700MB, which is too expensive for a unit test. Note that 
> I didn't throw an exception in the integer-overflow case, as I didn't want 
> to change that behavior (callers might expect a 
> java.lang.NegativeArraySizeException).
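The overflow is easy to reproduce: the growth factor is applied in 32-bit arithmetic, so size * 3 wraps negative once size passes Integer.MAX_VALUE / 3 (roughly 715MB). Below is a hedged sketch of the bug and one possible overflow-safe variant (method names are hypothetical; the attached patch may differ):

```java
public class Growth {
    // Buggy growth: size * 3 / 2 in int arithmetic. Once size exceeds
    // Integer.MAX_VALUE / 3 (~715MB), size * 3 wraps around and the
    // result is negative.
    static int growBuggy(int size) {
        return size * 3 / 2;
    }

    // Overflow-safe variant: do the multiply in long arithmetic and
    // clamp the result to Integer.MAX_VALUE.
    static int growSafe(int size) {
        long grown = (long) size * 3 / 2;
        return (int) Math.min(grown, Integer.MAX_VALUE);
    }
}
```

For example, growBuggy(800_000_000) is negative, while growSafe(800_000_000) is 1_200_000_000.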





[jira] [Commented] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-08-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696514#comment-14696514
 ] 

Sunil G commented on HADOOP-12321:
--

Thank you [~rkanter]. I will make the changes in JvmPauseMonitor and will 
update the invocations in the various modules to follow the new service lifecycle.

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop 
> lifecycle, which has already proven brittle to ordering of operations and, 
> even after HADOOP-12313, is not thread safe (both start and stop are 
> potentially re-entrant).
> It also requires every class which supports the monitor to add another 
> field and perform the lifecycle operations within its own lifecycle, which, 
> for all YARN services, is the YARN app lifecycle (as implemented in Hadoop 
> common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into the {{serviceInit()}}, {{serviceStart()}} 
> & {{serviceStop()}} methods will fix the concurrency and state-model 
> issues. It will also make the monitor trivial to add as a child of any YARN 
> service that subclasses {{CompositeService}} (most of the NM and RM apps): 
> such a service can hook up the monitor simply by creating one in the 
> constructor and adding it as a child.
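The concurrency point can be illustrated with a self-contained sketch (plain Java, not Hadoop's actual {{AbstractService}}; the class and method names are hypothetical): guarding the start/stop transitions with an atomic flag makes re-entrant calls harmless no-ops.

```java
import java.util.concurrent.atomic.AtomicBoolean;

class PauseMonitorService {
    private final AtomicBoolean running = new AtomicBoolean(false);
    private Thread monitor;

    public void start() {
        // compareAndSet wins at most once, so a second (re-entrant)
        // start() returns without spawning another thread.
        if (!running.compareAndSet(false, true)) {
            return;
        }
        monitor = new Thread(() -> {
            while (running.get()) {
                try {
                    Thread.sleep(100); // a real monitor would measure pauses here
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }, "JvmPauseMonitor");
        monitor.setDaemon(true);
        monitor.start();
    }

    public void stop() {
        // Symmetric guard: a second stop(), or a stop() before start(),
        // is a no-op instead of a crash or a hang.
        if (!running.compareAndSet(true, false)) {
            return;
        }
        monitor.interrupt();
    }

    public boolean isRunning() {
        return running.get();
    }
}
```

In Hadoop itself the same guarantee comes from the service state machine that {{AbstractService}} already implements.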





[jira] [Commented] (HADOOP-11134) Change the default log level of bin/hadoop from INFO to WARN

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696511#comment-14696511
 ] 

Allen Wittenauer commented on HADOOP-11134:
---

bq. Thoughts?

Most of the stuff that we actually log at info is pretty useless on the client 
side, which is what this is effectively targeting.

> Change the default log level of bin/hadoop from INFO to WARN
> 
>
> Key: HADOOP-11134
> URL: https://issues.apache.org/jira/browse/HADOOP-11134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
>
> Split from HADOOP-7984. Currently the bin/hadoop script outputs INFO 
> messages. How about making it output only WARN and ERROR messages?





[jira] [Commented] (HADOOP-11901) BytesWritable supports only up to ~700MB (instead of 2G) due to integer overflow.

2015-08-13 Thread Reynold Xin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696502#comment-14696502
 ] 

Reynold Xin commented on HADOOP-11901:
--

OK, updated (I haven't compiled it, though, since I'm on a new computer)... It 
might be easier if a committer just patches it.



> BytesWritable supports only up to ~700MB (instead of 2G) due to integer 
> overflow.
> -
>
> Key: HADOOP-11901
> URL: https://issues.apache.org/jira/browse/HADOOP-11901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Reynold Xin
>Assignee: Reynold Xin
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11901 (3).diff, HADOOP-11901.diff
>
>





[jira] [Updated] (HADOOP-11901) BytesWritable supports only up to ~700MB (instead of 2G) due to integer overflow.

2015-08-13 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated HADOOP-11901:
-
Attachment: HADOOP-11901 (3).diff

> BytesWritable supports only up to ~700MB (instead of 2G) due to integer 
> overflow.
> -
>
> Key: HADOOP-11901
> URL: https://issues.apache.org/jira/browse/HADOOP-11901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Reynold Xin
>Assignee: Reynold Xin
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11901 (3).diff, HADOOP-11901.diff
>
>





[jira] [Updated] (HADOOP-11134) Change the default log level of bin/hadoop from INFO to WARN

2015-08-13 Thread Asif A Bashar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Asif A Bashar updated HADOOP-11134:
---
Assignee: (was: Asif A Bashar)

> Change the default log level of bin/hadoop from INFO to WARN
> 
>
> Key: HADOOP-11134
> URL: https://issues.apache.org/jira/browse/HADOOP-11134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
>





[jira] [Assigned] (HADOOP-11134) Change the default log level of bin/hadoop from INFO to WARN

2015-08-13 Thread Asif A Bashar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Asif A Bashar reassigned HADOOP-11134:
--

Assignee: Asif A Bashar

> Change the default log level of bin/hadoop from INFO to WARN
> 
>
> Key: HADOOP-11134
> URL: https://issues.apache.org/jira/browse/HADOOP-11134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Akira AJISAKA
>Assignee: Asif A Bashar
>Priority: Minor
>  Labels: newbie
>





[jira] [Updated] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2015-08-13 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-12082:
--
Attachment: hadoop-ldap.patch

[~lmccay] Please review the latest patch (the hadoop-ldap.patch file). I have 
completed the basic implementation of LDAP integration in this patch. I think 
we can improve this patch in a couple of aspects:

- We can fold the functionality of MultiSchemeAuthenticationHandler into 
AuthenticationFilter itself. With this change, AuthenticationFilter would allow 
users to configure multiple handlers (e.g. kerberos + ldap) or a single handler 
(e.g. kerberos) in a uniform way.
- Alternatively, we can allow AuthenticationHandler to define/implement 
multiple authentication modes (instead of a single mode).

Please note that this patch does not contain unit tests etc. I just want to 
ensure that I am designing this appropriately. Once we agree on the design, I 
will work on completeness. Also, you previously mentioned using the 
service-loader pattern. Could you please elaborate? 

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: hadoop-ldap.patch, multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via 
> Hadoop's AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But that approach selects the 
> authentication mechanism based on the User-Agent HTTP header, which does 
> not conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]

[jira] [Updated] (HADOOP-11932) MetricsSinkAdapter hangs when being stopped

2015-08-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11932:
---
Attachment: HADOOP-11932.branch-2.6.patch

Attaching a patch to backport to branch-2.6.

>  MetricsSinkAdapter hangs when being stopped
> 
>
> Key: HADOOP-11932
> URL: https://issues.apache.org/jira/browse/HADOOP-11932
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Brahma Reddy Battula
>Priority: Critical
>  Labels: 2.6.1-candidate
> Fix For: 2.7.2
>
> Attachments: HADOOP-11932-02.patch, HADOOP-11932-branch-2.patch, 
> HADOOP-11932.branch-2.6.patch, HADOOP-11932.patch, HADOOP-11932.patch
>
>
> We've seen a situation that one RM hangs on stopping the MetricsSinkAdapter
> {code}
> "main-EventThread" daemon prio=10 tid=0x7f9b24031000 nid=0x2d18 in 
> Object.wait() [0x7f9afe7eb000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc058dcf8> (a 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1)
> at java.lang.Thread.join(Thread.java:1281)
> - locked <0xc058dcf8> (a 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1)
> at java.lang.Thread.join(Thread.java:1355)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.stop(MetricsSinkAdapter.java:202)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSinks(MetricsSystemImpl.java:472)
> - locked <0xc04cc1a0> (a 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:213)
> - locked <0xc04cc1a0> (a 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.shutdown(MetricsSystemImpl.java:592)
> - locked <0xc04cc1a0> (a 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdownInstance(DefaultMetricsSystem.java:72)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdown(DefaultMetricsSystem.java:68)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:605)
> at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
> - locked <0xc0503568> (a java.lang.Object)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:1024)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1076)
> - locked <0xc03fe3b8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToStandby(AdminService.java:322)
> - locked <0xc0502b10> (a 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeStandby(EmbeddedElectorService.java:135)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.becomeStandby(ActiveStandbyElector.java:911)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:428)
> - locked <0xc0718940> (a 
> org.apache.hadoop.ha.ActiveStandbyElector)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:605)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {code}
> {code}
> "timeline" daemon prio=10 tid=0x7f9b34d55000 nid=0x1d93 runnable 
> [0x7f9b0cbbf000]
>java.lang.Thread.State: RUNNABLE
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:152)
> at java.net.SocketInputStream.read(SocketInputStream.java:122)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
> - locked <0xc0f522c8> (a java.io.BufferedInputStream)
> at 
> org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
> at 
> org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
> at 
> org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
> at 
> org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973)
> at 
> org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
> at 
> org.apache.commons.ht

[jira] [Commented] (HADOOP-12322) typos in rpcmetrics.java

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696451#comment-14696451
 ] 

Hudson commented on HADOOP-12322:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8297 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8297/])
HADOOP-12322. Typos in rpcmetrics.java. (Contributed by Anu Engineer) (arp: rev 
6b1cefc561bf407daf745606275c03b9cda5ef4d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java


> typos in rpcmetrics.java
> 
>
> Key: HADOOP-12322
> URL: https://issues.apache.org/jira/browse/HADOOP-12322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12322.001.patch
>
>
> Typos in RpcMetrics.java:
> Processsing --> Processing
> sucesses --> successes
> JobTrackerInstrumenation --> JobTrackerInstrumentation
> These are all in descriptions of metrics or in comments, so they should 
> have no impact on backward compatibility.





[jira] [Updated] (HADOOP-12322) typos in rpcmetrics.java

2015-08-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12322:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s:   (was: 2.8.0)
  Status: Resolved  (was: Patch Available)

Committed for 2.8.0. Thanks for the contribution [~anu].

> typos in rpcmetrics.java
> 
>
> Key: HADOOP-12322
> URL: https://issues.apache.org/jira/browse/HADOOP-12322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12322.001.patch
>
>





[jira] [Commented] (HADOOP-11683) Need a plugin API to translate long principal names to local OS user names arbitrarily

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696404#comment-14696404
 ] 

Hadoop QA commented on HADOOP-11683:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 21s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 53s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 16s | Tests failed in 
hadoop-common. |
| | |  62m  3s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.net.TestNetUtils |
|   | hadoop.net.TestClusterTopology |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749679/HADOOP-11683.001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0a03054 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7465/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7465/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7465/console |


This message was automatically generated.

> Need a plugin API to translate long principal names to local OS user names 
> arbitrarily
> --
>
> Key: HADOOP-11683
> URL: https://issues.apache.org/jira/browse/HADOOP-11683
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Sunny Cheung
>Assignee: roger mak
> Attachments: HADOOP-11683.001.patch
>
>
> We need a plugin API to translate long principal names (e.g. 
> john@example.com) to local OS user names (e.g. user123456) arbitrarily.
> For some organizations the name translation is straightforward (e.g. 
> john@example.com to john_doe), and the hadoop.security.auth_to_local 
> configurable mapping is sufficient to resolve this (see HADOOP-6526). 
> However, in some other cases the name translation is arbitrary and cannot be 
> generalized by a set of translation rules easily.





[jira] [Commented] (HADOOP-12244) recover broken rebase during precommit

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696379#comment-14696379
 ] 

Allen Wittenauer commented on HADOOP-12244:
---

I'm an idiot.

This code won't work on a currently broken box: the box has no way to get the 
new code, because the pull itself is broken.

Filed BUILDS-104 to get some manual assistance for the time being.  At least 
it should self-heal in the future?

> recover broken rebase during precommit
> --
>
> Key: HADOOP-12244
> URL: https://issues.apache.org/jira/browse/HADOOP-12244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: HADOOP-12111, 3.0.0
>
> Attachments: HADOOP-12244.00.patch, HADOOP-12244.HADOOP-12111.00.patch
>
>
> One of the Jenkins hosts is failing during the git rebase that happens in 
> bootstrap.  We should probably do something better than just fail.





[jira] [Updated] (HADOOP-11802) DomainSocketWatcher thread terminates sometimes after there is an I/O error during requestShortCircuitShm

2015-08-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11802:
---
Attachment: HADOOP-11802.branch-2.6.patch

Attaching a patch to backport this issue to branch-2.6.


> DomainSocketWatcher thread terminates sometimes after there is an I/O error 
> during requestShortCircuitShm
> -
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Colin Patrick McCabe
>  Labels: 2.6.1-candidate
> Fix For: 2.7.1
>
> Attachments: HADOOP-11802.001.patch, HADOOP-11802.002.patch, 
> HADOOP-11802.003.patch, HADOOP-11802.004.patch, HADOOP-11802.branch-2.6.patch
>
>
> In {{DataXceiver#requestShortCircuitShm}}, we attempt to recover from some 
> errors by closing the {{DomainSocket}}.  However, this violates the invariant 
> that the domain socket should never be closed when it is being managed by the 
> {{DomainSocketWatcher}}.  Instead, we should call {{shutdown}} on the 
> {{DomainSocket}}.  When this bug hits, it terminates the 
> {{DomainSocketWatcher}} thread.
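The shutdown-versus-close distinction can be illustrated with plain TCP sockets (Hadoop's {{DomainSocket}} wraps Unix domain sockets, so this is an analogy, not the actual code): shutting down the peer delivers EOF to a thread blocked in read()/poll(), so the watcher wakes up and can deregister and close the socket itself, preserving the invariant.

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ShutdownVsCloseDemo {

    // Returns how a blocked reader observes a peer shutdown. shutdown() delivers
    // EOF to a thread blocked in read() (analogous to the watcher's poll()),
    // letting the watcher clean up and close the socket itself, whereas closing
    // the socket out from under the watcher violates its ownership of the fd.
    static String watchAfterPeerShutdown() throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {

            final String[] outcome = new String[1];
            Thread watcher = new Thread(() -> {
                try {
                    InputStream in = accepted.getInputStream();
                    int b = in.read();                 // blocks, like poll()
                    outcome[0] = (b == -1) ? "clean EOF" : "data";
                } catch (Exception e) {
                    outcome[0] = "exception: " + e.getClass().getSimpleName();
                }
            });
            watcher.start();
            Thread.sleep(200);                         // let the watcher block in read()

            client.shutdownOutput();                   // sends FIN: reader sees EOF
            watcher.join(2000);
            return outcome[0];
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(watchAfterPeerShutdown()); // prints "clean EOF"
    }
}
```

The watcher unblocks with a clean EOF and can perform its own teardown, which is the behavior the patch relies on.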





[jira] [Commented] (HADOOP-11802) DomainSocketWatcher thread terminates sometimes after there is an I/O error during requestShortCircuitShm

2015-08-13 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696282#comment-14696282
 ] 

Akira AJISAKA commented on HADOOP-11802:


If we are going to backport this fix to branch-2.6, we need to backport 
HDFS-7915 first. If we backport these, we should backport HDFS-8070 as well, 
because HDFS-7915 breaks it.

> DomainSocketWatcher thread terminates sometimes after there is an I/O error 
> during requestShortCircuitShm
> -
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Colin Patrick McCabe
>  Labels: 2.6.1-candidate
> Fix For: 2.7.1
>
> Attachments: HADOOP-11802.001.patch, HADOOP-11802.002.patch, 
> HADOOP-11802.003.patch, HADOOP-11802.004.patch
>
>
> In {{DataXceiver#requestShortCircuitShm}}, we attempt to recover from some 
> errors by closing the {{DomainSocket}}.  However, this violates the invariant 
> that the domain socket should never be closed when it is being managed by the 
> {{DomainSocketWatcher}}.  Instead, we should call {{shutdown}} on the 
> {{DomainSocket}}.  When this bug hits, it terminates the 
> {{DomainSocketWatcher}} thread.





[jira] [Commented] (HADOOP-12313) Possible NPE in JvmPauseMonitor.stop()

2015-08-13 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696216#comment-14696216
 ] 

Robert Kanter commented on HADOOP-12313:


Let's fix the NPE in this JIRA, use HADOOP-12321 to make {{JvmPauseMonitor}} a 
Service, and use HADOOP-12320 to discuss making Services restartable in general.
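The stack trace below points at {{JvmPauseMonitor.stop}} dereferencing its monitor thread even when stop runs before the monitor was ever started. A minimal null-guarded sketch (field and method names are hypothetical illustrations, not the actual Hadoop patch):

```java
// Hypothetical, simplified sketch of a null-safe stop() for a pause-monitor-like
// class; the point is only the null guard in stop().
public class PauseMonitor {
    private Thread monitorThread;            // stays null until start() is called
    private volatile boolean shouldRun;

    public synchronized void start() {
        shouldRun = true;
        monitorThread = new Thread(() -> {
            while (shouldRun) {
                try { Thread.sleep(100); }   // stand-in for pause sampling
                catch (InterruptedException e) { return; }
            }
        });
        monitorThread.setDaemon(true);
        monitorThread.start();
    }

    public synchronized void stop() {
        shouldRun = false;
        if (monitorThread != null) {         // guard: stop() before start(), double stop()
            monitorThread.interrupt();
            monitorThread = null;
        }
    }

    public synchronized boolean isRunning() {
        return monitorThread != null;
    }
}
```

With the guard, calling stop() before start() (as the RM shutdown path does in the failing tests) is a harmless no-op.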

> Possible NPE in JvmPauseMonitor.stop()
> --
>
> Key: HADOOP-12313
> URL: https://issues.apache.org/jira/browse/HADOOP-12313
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Gabor Liptak
>Priority: Critical
> Attachments: HADOOP-12313.2.patch, HADOOP-12313.3.patch, 
> YARN-4035.1.patch
>
>
> It is observed that after YARN-4019 some tests are failing in 
> TestRMAdminService with null pointer exceptions in build [build failure 
> |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt]
> {noformat}
> Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
> Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
> testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
>   Time elapsed: 0.132 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824)
> testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
>   Time elapsed: 0.121 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867)
> {noformat}





[jira] [Assigned] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-08-13 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reassigned HADOOP-12321:
--

Assignee: Sunil G

Sure.  Go ahead [~sunilg].  Make sure to update the existing uses of the 
{{JvmPauseMonitor}}, including the NN, DN, RM, JHS, ATS, etc.  We'll probably 
need some HDFS, YARN, and MAPREDUCE sibling JIRAs for those.

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle with respect to ordering of operations and, 
> even after HADOOP-12313, is not thread safe (both start and stop are 
> potentially re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop Common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start and stop operations into the {{serviceInit()}}, {{serviceStart()}} 
> and {{serviceStop()}} methods will fix the concurrency and state-model 
> issues. It will also make the monitor trivial to add as a child to any YARN 
> service which subclasses {{CompositeService}} (most of the NM and RM apps): 
> they can hook up the monitor simply by creating one in the constructor and 
> adding it as a child.
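The proposed shape can be sketched with a stand-in for Hadoop's {{AbstractService}} (the real class in org.apache.hadoop.service also fires state listeners, records failure causes, etc.; {{MiniService}} and {{PauseMonitorService}} here are purely illustrative):

```java
// Simplified sketch: MiniService mimics only AbstractService's state checking,
// and PauseMonitorService is hypothetical, not the eventual patch.
public class ServiceLifecycleSketch {

    static abstract class MiniService {
        enum State { NOTINITED, INITED, STARTED, STOPPED }
        private State state = State.NOTINITED;

        // The state checks make start/stop idempotent and safe out of order.
        public synchronized void init() {
            if (state == State.NOTINITED) { serviceInit(); state = State.INITED; }
        }
        public synchronized void start() {
            if (state == State.INITED) { serviceStart(); state = State.STARTED; }
        }
        public synchronized void stop() {
            if (state == State.STARTED) { serviceStop(); state = State.STOPPED; }
        }
        protected void serviceInit()  {}
        protected void serviceStart() {}
        protected void serviceStop()  {}
        public synchronized State getState() { return state; }
    }

    static class PauseMonitorService extends MiniService {
        private Thread monitor;
        @Override protected void serviceStart() {
            monitor = new Thread(() -> { /* sample GC pauses here */ });
            monitor.setDaemon(true);
            monitor.start();
        }
        @Override protected void serviceStop() {
            if (monitor != null) monitor.interrupt();
        }
    }

    public static void main(String[] args) {
        PauseMonitorService s = new PauseMonitorService();
        s.stop();                              // out-of-order stop: no-op, no NPE
        s.init(); s.start();
        s.stop(); s.stop();                    // re-entrant stop is also safe
        System.out.println(s.getState());      // STOPPED
    }
}
```

A {{CompositeService}} parent would then only need to construct the monitor service and add it as a child; the parent's lifecycle drives the monitor's.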





[jira] [Updated] (HADOOP-11683) Need a plugin API to translate long principal names to local OS user names arbitrarily

2015-08-13 Thread roger mak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

roger mak updated HADOOP-11683:
---
 Release Note: 
The patch allows HadoopKerberosName to use a pluggable user-name-mapping API, 
configured via the parameter hadoop.security.user.name.mapping, instead of the 
regular expression specified in the parameter hadoop.security.auth_to_local.

If the user name is not found by the API, or hadoop.security.user.name.mapping 
is not set, it defaults back to hadoop.security.auth_to_local for compatibility.
Affects Version/s: 2.6.0
   Status: Patch Available  (was: Open)

Submitting the patch to start the review process and trigger the automated tests.
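The fallback chain described in the release note might look like the following sketch; {{UserNameMapper}}, {{resolve()}}, and the trivial realm-stripping rule are hypothetical illustrations, not the committed API:

```java
// Hypothetical sketch of a pluggable name-mapping with auth_to_local-style
// fallback. The actual patch wires this into HadoopKerberosName and Hadoop
// configuration; these names are illustrative only.
import java.util.Optional;

interface UserNameMapper {
    // Maps a long principal name to a local OS user, or empty if unknown.
    Optional<String> toLocalUser(String principal);
}

public class MappingDemo {
    // Try the plugin first; fall back to a trivial auth_to_local-style rule
    // (strip the realm) when the plugin has no answer or is not configured.
    static String resolve(UserNameMapper plugin, String principal) {
        Optional<String> mapped =
            (plugin == null) ? Optional.empty() : plugin.toLocalUser(principal);
        return mapped.orElseGet(() -> principal.substring(0, principal.indexOf('@')));
    }

    public static void main(String[] args) {
        // An arbitrary mapping, e.g. backed by an LDAP or database lookup.
        UserNameMapper plugin = p ->
            p.equals("john.doe@EXAMPLE.COM") ? Optional.of("user123456")
                                             : Optional.empty();

        System.out.println(resolve(plugin, "john.doe@EXAMPLE.COM")); // user123456
        System.out.println(resolve(plugin, "jane.roe@EXAMPLE.COM")); // jane.roe
        System.out.println(resolve(null,   "jane.roe@EXAMPLE.COM")); // jane.roe
    }
}
```

The point of the interface is that the first mapping is arbitrary (a lookup), while unknown principals still resolve through the existing rule-based path.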

> Need a plugin API to translate long principal names to local OS user names 
> arbitrarily
> --
>
> Key: HADOOP-11683
> URL: https://issues.apache.org/jira/browse/HADOOP-11683
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Sunny Cheung
>Assignee: roger mak
> Attachments: HADOOP-11683.001.patch
>
>
> We need a plugin API to translate long principal names (e.g. 
> john@example.com) to local OS user names (e.g. user123456) arbitrarily.
> For some organizations the name translation is straightforward (e.g. 
> john@example.com to john_doe), and the hadoop.security.auth_to_local 
> configurable mapping is sufficient to resolve this (see HADOOP-6526). 
> However, in some other cases the name translation is arbitrary and cannot be 
> generalized by a set of translation rules easily.





[jira] [Updated] (HADOOP-12314) check_unittests in test-patch.sh can return a wrong status

2015-08-13 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12314:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> check_unittests in test-patch.sh can return a wrong status
> --
>
> Key: HADOOP-12314
> URL: https://issues.apache.org/jira/browse/HADOOP-12314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12314.HADOOP-12111.00.patch
>
>
> Follow-up from HADOOP-12247. check_unittests returns the value of  
> $\{result}, but the status of *_process_tests is added to $\{results}.





[jira] [Updated] (HADOOP-12314) check_unittests in test-patch.sh can return a wrong status

2015-08-13 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12314:

Attachment: HADOOP-12314.HADOOP-12111.00.patch

-00:

* replace $results with $result in check_unittests
* declare $result locally in modules_workers
* unify $result and $retval (undeclared locally) into the former in 
precheck_mvninstall and check_mvninstall


> check_unittests in test-patch.sh can return a wrong status
> --
>
> Key: HADOOP-12314
> URL: https://issues.apache.org/jira/browse/HADOOP-12314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
> Attachments: HADOOP-12314.HADOOP-12111.00.patch
>
>
> Follow-up from HADOOP-12247. check_unittests returns the value of  
> $\{result}, but the status of *_process_tests is added to $\{results}.





[jira] [Commented] (HADOOP-12322) typos in rpcmetrics.java

2015-08-13 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696031#comment-14696031
 ] 

Anu Engineer commented on HADOOP-12322:
---

The test failures are not related to this patch.


> typos in rpcmetrics.java
> 
>
> Key: HADOOP-12322
> URL: https://issues.apache.org/jira/browse/HADOOP-12322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HADOOP-12322.001.patch
>
>
> typos in RpcMetrics.java
> Processsing --> Processing
> sucesses ->  successes
> JobTrackerInstrumenation -> JobTrackerInstrumentation
> these are all part of the description of the metric or in comments, so they 
> should have no impact on backward compatibility.





[jira] [Commented] (HADOOP-12322) typos in rpcmetrics.java

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696027#comment-14696027
 ] 

Hadoop QA commented on HADOOP-12322:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 14s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  7s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 59s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  23m 10s | Tests failed in 
hadoop-common. |
| | |  65m 46s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.net.TestNetUtils |
|   | hadoop.ha.TestZKFailoverController |
| Timed out tests | 
org.apache.hadoop.security.token.delegation.TestDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12750353/HADOOP-12322.001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / b73181f |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7464/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7464/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7464/console |



> typos in rpcmetrics.java
> 
>
> Key: HADOOP-12322
> URL: https://issues.apache.org/jira/browse/HADOOP-12322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HADOOP-12322.001.patch
>
>
> typos in RpcMetrics.java
> Processsing --> Processing
> sucesses ->  successes
> JobTrackerInstrumenation -> JobTrackerInstrumentation
> these are all part of the description of the metric or in comments, so they 
> should have no impact on backward compatibility.





[jira] [Commented] (HADOOP-12314) check_unittests in test-patch.sh can return a wrong status

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695996#comment-14695996
 ] 

Allen Wittenauer commented on HADOOP-12314:
---

Both of those are bugs.  I wonder if the modules_workers one may explain the 
weird behavior we sometimes see with findbugs.

> check_unittests in test-patch.sh can return a wrong status
> --
>
> Key: HADOOP-12314
> URL: https://issues.apache.org/jira/browse/HADOOP-12314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> Follow-up from HADOOP-12247. check_unittests returns the value of  
> $\{result}, but the status of *_process_tests is added to $\{results}.





[jira] [Commented] (HADOOP-12317) Applications fail on NM restart on some linux distro because NM container recovery declares AM container as LOST

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695930#comment-14695930
 ] 

Hadoop QA commented on HADOOP-12317:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 32s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  3s | The applied patch generated  2 
new checkstyle issues (total was 97, now 98). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 22s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 21s | Tests failed in 
hadoop-common. |
| | |  61m 20s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestNetUtils |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12750001/YARN-4046.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / b73181f |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7463/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7463/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7463/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7463/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7463/console |



> Applications fail on NM restart on some linux distro because NM container 
> recovery declares AM container as LOST
> 
>
> Key: HADOOP-12317
> URL: https://issues.apache.org/jira/browse/HADOOP-12317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Critical
> Attachments: YARN-4046.002.patch, YARN-4046.002.patch, 
> YARN-4096.001.patch
>
>
> On a Debian machine we have seen NodeManager recovery of containers fail 
> because the signal syntax for a process group may not work. We see errors 
> when checking whether a process is alive during container recovery, which 
> causes the container to be declared LOST (154) on a NodeManager restart.
> The application then fails with an error, and the attempts are not retried.
> {noformat}
> Application application_1439244348718_0001 failed 1 times due to Attempt 
> recovered after RM restartAM Container for 
> appattempt_1439244348718_0001_01 exited with exitCode: 154
> {noformat}





[jira] [Commented] (HADOOP-12316) Potential false-positive and false-negative in parsing TAP output

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695928#comment-14695928
 ] 

Allen Wittenauer commented on HADOOP-12316:
---

Oh, I think I remember now.  This was originally written with an xargs that 
caused all sorts of havoc.

> Potential false-positive and false-negative in parsing TAP output
> -
>
> Key: HADOOP-12316
> URL: https://issues.apache.org/jira/browse/HADOOP-12316
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12316.HADOOP-12111.00.patch
>
>
> In tap.sh, TAP results are parsed as follows:
> {code}
>   filenames=$(find "${TAP_LOG_DIR}" -type f -exec "${GREP}" -l -E "not ok " 
> {} \;)
> {code}
> But this regex seems to have the following problems:
> 1. According to [the TAP 
> specification|https://testanything.org/tap-specification.html], "ok" / "not 
> ok" is only required in the test line and others are optional. So each line 
> can be terminated with just "ok" or "not ok", without trailing spaces. In 
> that case, the regex "not ok " will miss test failures.
> 2. TAP output can contain descriptions and diagnostics. If they contain the 
> string "not ok ", a false alarm will be raised.
> These problems won't occur as long as we are using only bats, but to support 
> other test tools in the future, the regex should be replaced with "^not ok".
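Both failure modes are easy to demonstrate. The sketch below replays the two patterns in Java's regex engine (the real check uses grep -E, but the matching semantics are the same for these literals; {{TapRegexDemo}} is a hypothetical helper name, not part of test-patch.sh):

```java
// Illustration of the two failure modes of the "not ok " pattern used in tap.sh.
import java.util.regex.Pattern;

public class TapRegexDemo {
    // Current pattern: requires a trailing space after "not ok".
    static boolean looseFinds(String line) {
        return Pattern.compile("not ok ").matcher(line).find();
    }

    // Proposed pattern: anchored at the start of the line instead.
    static boolean anchoredFinds(String line) {
        return Pattern.compile("^not ok").matcher(line).find();
    }

    public static void main(String[] args) {
        // Problem 1: a bare "not ok" test line (legal TAP) is missed.
        System.out.println(looseFinds("not ok"));                         // false
        // Problem 2: a diagnostic mentioning "not ok " raises a false alarm.
        System.out.println(looseFinds("# the last run was not ok here")); // true
        // The anchored pattern handles both cases correctly.
        System.out.println(anchoredFinds("not ok"));                          // true
        System.out.println(anchoredFinds("# the last run was not ok here")); // false
    }
}
```

The anchor restricts matches to lines that begin a TAP test result, which is what the failure check actually wants.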





[jira] [Updated] (HADOOP-12316) Potential false-positive and false-negative in parsing TAP output

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12316:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

> Potential false-positive and false-negative in parsing TAP output
> -
>
> Key: HADOOP-12316
> URL: https://issues.apache.org/jira/browse/HADOOP-12316
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12316.HADOOP-12111.00.patch
>
>
> In tap.sh, TAP results are parsed as follows:
> {code}
>   filenames=$(find "${TAP_LOG_DIR}" -type f -exec "${GREP}" -l -E "not ok " 
> {} \;)
> {code}
> But this regex seems to have the following problems:
> 1. According to [the TAP 
> specification|https://testanything.org/tap-specification.html], "ok" / "not 
> ok" is only required in the test line and others are optional. So each line 
> can be terminated with just "ok" or "not ok", without trailing spaces. In 
> that case, the regex "not ok " will miss test failures.
> 2. TAP output can contain descriptions and diagnostics. If they contain the 
> string "not ok ", a false alarm will be raised.
> These problems won't occur as long as we are using only bats, but to support 
> other test tools in the future, the regex should be replaced with "^not ok".





[jira] [Commented] (HADOOP-12316) Potential false-positive and false-negative in parsing TAP output

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695894#comment-14695894
 ] 

Allen Wittenauer commented on HADOOP-12316:
---

OK, I can't reproduce it, so +1.  lol

Thanks. Will commit this in a bit.

> Potential false-positive and false-negative in parsing TAP output
> -
>
> Key: HADOOP-12316
> URL: https://issues.apache.org/jira/browse/HADOOP-12316
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12316.HADOOP-12111.00.patch
>
>
> In tap.sh, TAP results are parsed as follows:
> {code}
>   filenames=$(find "${TAP_LOG_DIR}" -type f -exec "${GREP}" -l -E "not ok " 
> {} \;)
> {code}
> But this regex seems to have the following problems:
> 1. According to [the TAP 
> specification|https://testanything.org/tap-specification.html], "ok" / "not 
> ok" is only required in the test line and others are optional. So each line 
> can be terminated with just "ok" or "not ok", without trailing spaces. In 
> that case, the regex "not ok " will miss test failures.
> 2. TAP output can contain descriptions and diagnostics. If they contain the 
> string "not ok ", a false alarm will be raised.
> These problems won't occur as long as we are using only bats, but to support 
> other test tools in the future, the regex should be replaced with "^not ok".





[jira] [Commented] (HADOOP-12244) recover broken rebase during precommit

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695867#comment-14695867
 ] 

Allen Wittenauer commented on HADOOP-12244:
---

Note: it's H3 that has the problem in the HADOOP build queue (YARN, etc. are 
fine).

> recover broken rebase during precommit
> --
>
> Key: HADOOP-12244
> URL: https://issues.apache.org/jira/browse/HADOOP-12244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: HADOOP-12111, 3.0.0
>
> Attachments: HADOOP-12244.00.patch, HADOOP-12244.HADOOP-12111.00.patch
>
>
> One of the Jenkins hosts is failing during the git rebase that happens in 
> bootstrap.  We should probably do something better than just fail.





[jira] [Commented] (HADOOP-12244) recover broken rebase during precommit

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695851#comment-14695851
 ] 

Hudson commented on HADOOP-12244:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8294 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8294/])
HADOOP-12244. recover broken rebase during precommit (aw) (aw: rev 
b73181f18702f9dc2dfc9d3cdb415b510261e74c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> recover broken rebase during precommit
> --
>
> Key: HADOOP-12244
> URL: https://issues.apache.org/jira/browse/HADOOP-12244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: HADOOP-12111, 3.0.0
>
> Attachments: HADOOP-12244.00.patch, HADOOP-12244.HADOOP-12111.00.patch
>
>
> One of the Jenkins hosts is failing during the git rebase that happens in 
> bootstrap.  We should probably do something better than just fail.





[jira] [Commented] (HADOOP-12129) rework test-patch bug system support

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695850#comment-14695850
 ] 

Hadoop QA commented on HADOOP-12129:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} HADOOP-12111 passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s 
{color} | {color:red} The applied patch generated 1 new shellcheck issues 
(total was 22, now 23). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 32s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12750358/HADOOP-12129.HADOOP-12111.04.patch
 |
| JIRA Issue | HADOOP-12129 |
| git revision | HADOOP-12111 / 13da896 |
| Optional Tests | asflicense site unit shellcheck |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7462/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7462/testReport/ |
| Max memory used | 49MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7462/console |


This message was automatically generated.



> rework test-patch bug system support
> 
>
> Key: HADOOP-12129
> URL: https://issues.apache.org/jira/browse/HADOOP-12129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12129.HADOOP-12111.00.patch, 
> HADOOP-12129.HADOOP-12111.01.patch, HADOOP-12129.HADOOP-12111.02.patch, 
> HADOOP-12129.HADOOP-12111.03.patch, HADOOP-12129.HADOOP-12111.04.patch
>
>
> WARNING: this is a fairly big project.
> See first comment for a brain dump on the issues.





[jira] [Updated] (HADOOP-12297) test-patch's basedir and patch-dir must be directories under the user's home in docker mode if using boot2docker

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12297:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committed

thanks!


> test-patch's basedir and patch-dir must be directories under the user's home 
> in docker mode if using boot2docker
> 
>
> Key: HADOOP-12297
> URL: https://issues.apache.org/jira/browse/HADOOP-12297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12297.HADOOP-12111.00.patch
>
>
> Docker mode without a patch-dir option or with an absolute path seems not to 
> work:
> {code}
> [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
> --basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker /tmp/test.patch
> (snip)
> Successfully built 37438de64e81
> JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home 
> does not exist. Dockermode: attempting to switch to another.
> /testptch/launch-test-patch.sh: line 42: cd: 
> /testptch/patchprocess/precommit/: No such file or directory
> /testptch/launch-test-patch.sh: line 45: 
> /testptch/patchprocess/precommit/test-patch.sh: No such file or directory
> {code}
> It succeeds if a relative directory is specified:
> {code}
> [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
> --basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker --patch-dir=foo 
> /tmp/test.patch
> (snip)
> Successfully built 6ea5001987a7
> JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home 
> does not exist. Dockermode: attempting to switch to another.
> 
> 
> Bootstrapping test harness
> 
> 
> (snip)
> +1 overall
> (snip)
> 
> 
>   Finished build.
> 
> 
> {code}
> If my setup or usage is wrong, please close this JIRA as invalid.





[jira] [Commented] (HADOOP-11880) aw jira sub-task testing, ignore

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695846#comment-14695846
 ] 

Hadoop QA commented on HADOOP-11880:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12750214/HDFS-EC-merge.trunk.01.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle shellcheck |
| git revision | trunk / b73181f |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7461/console |


This message was automatically generated.

> aw jira sub-task testing, ignore
> 
>
> Key: HADOOP-11880
> URL: https://issues.apache.org/jira/browse/HADOOP-11880
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
> Attachments: HDFS-EC-merge.trunk.01.patch
>
>






[jira] [Commented] (HADOOP-12129) rework test-patch bug system support

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695847#comment-14695847
 ] 

Hadoop QA commented on HADOOP-12129:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7462/console in case of 
problems.

> rework test-patch bug system support
> 
>
> Key: HADOOP-12129
> URL: https://issues.apache.org/jira/browse/HADOOP-12129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12129.HADOOP-12111.00.patch, 
> HADOOP-12129.HADOOP-12111.01.patch, HADOOP-12129.HADOOP-12111.02.patch, 
> HADOOP-12129.HADOOP-12111.03.patch, HADOOP-12129.HADOOP-12111.04.patch
>
>
> WARNING: this is a fairly big project.
> See first comment for a brain dump on the issues.





[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695845#comment-14695845
 ] 

Hadoop QA commented on HADOOP-11820:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} HADOOP-12111 passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
11s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 32s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12750216/HADOOP-12129.HADOOP-12111.03.patch
 |
| JIRA Issue | HADOOP-11820 |
| git revision | HADOOP-12111 / 13da896 |
| Optional Tests | asflicense site unit shellcheck |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7460/testReport/ |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7460/console |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12129.HADOOP-12111.03.patch
>
>






[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695844#comment-14695844
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7460/console in case of 
problems.

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12129.HADOOP-12111.03.patch
>
>






[jira] [Commented] (HADOOP-12316) Potential false-positive and false-negative in parsing TAP output

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695840#comment-14695840
 ] 

Allen Wittenauer commented on HADOOP-12316:
---

IIRC, there was a problem with using the circumflex when multiple tests failed. 
Let me see if I can reproduce it.

> Potential false-positive and false-negative in parsing TAP output
> -
>
> Key: HADOOP-12316
> URL: https://issues.apache.org/jira/browse/HADOOP-12316
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12316.HADOOP-12111.00.patch
>
>
> In tap.sh, TAP results are parsed as follows:
> {code}
>   filenames=$(find "${TAP_LOG_DIR}" -type f -exec "${GREP}" -l -E "not ok " 
> {} \;)
> {code}
> But this regex seems to have the following problems:
> 1. According to [the TAP 
> specification|https://testanything.org/tap-specification.html], only "ok" / 
> "not ok" is required in a test line; everything else is optional. So a line 
> can end with just "ok" or "not ok", without a trailing space. In that case, 
> the regex "not ok " will miss test failures.
> 2. TAP output can contain descriptions and diagnostics. If they contain the 
> string "not ok ", a false alarm will be raised.
> Neither problem occurs as long as we use only bats, but considering support 
> for other test tools in the future, the regex should be replaced with "^not ok".
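Both failure modes can be sketched with a hedged example (assumes GNU grep; the file names and contents are illustrative, not real Yetus logs):

```shell
# Two sample TAP logs demonstrating both problems with the
# pattern "not ok " (note the trailing space):
printf 'not ok\n' > /tmp/tap_fail.log                           # bare failure line
printf 'ok 1 - logged "not ok " earlier\n' > /tmp/tap_pass.log  # passing test with diagnostic text

grep -l -E "not ok " /tmp/tap_fail.log /tmp/tap_pass.log
# misses tap_fail.log (no trailing space) and flags tap_pass.log (false alarm)

grep -l -E "^not ok" /tmp/tap_fail.log /tmp/tap_pass.log
# flags only tap_fail.log
```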





[jira] [Resolved] (HADOOP-12315) hbaseprotoc_postapply in the test-patch hbase personality can return a wrong status

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12315.
---
Resolution: Fixed

> hbaseprotoc_postapply in the test-patch hbase personality can return a wrong 
> status
> ---
>
> Key: HADOOP-12315
> URL: https://issues.apache.org/jira/browse/HADOOP-12315
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12315.HADOOP-12111.00.patch
>
>
> Similar to HADOOP-12314. hbaseprotoc_postapply returns the value of 
> $\{results}, but if a module status is -1, $\{result} is incremented.
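The bug pattern described above can be sketched as follows (hedged: the function and variable names are illustrative, not the actual hbase personality code, whose fix may differ):

```shell
# Hypothetical minimal version of the post-apply check: count failing
# modules in ${result} and return that same variable, instead of a
# never-set ${results} (note the extra "s") as the bug report describes.
hbaseprotoc_postapply_sketch() {
  declare result=0
  declare status
  for status in "$@"; do          # each argument is one module's status
    if [ "${status}" = "-1" ]; then
      ((result=result+1))         # remember the failure
    fi
  done
  return "${result}"              # return the counter actually updated
}
```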





[jira] [Updated] (HADOOP-12315) hbaseprotoc_postapply in the test-patch hbase personality can return a wrong status

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12315:
--
Fix Version/s: HADOOP-12111
   Status: In Progress  (was: Patch Available)

great catch

+1 committed

thanks!



> hbaseprotoc_postapply in the test-patch hbase personality can return a wrong 
> status
> ---
>
> Key: HADOOP-12315
> URL: https://issues.apache.org/jira/browse/HADOOP-12315
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12315.HADOOP-12111.00.patch
>
>
> Similar to HADOOP-12314. hbaseprotoc_postapply returns the value of 
> $\{results}, but if a module status is -1, $\{result} is incremented.





[jira] [Commented] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-13 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695835#comment-14695835
 ] 

Aaron T. Myers commented on HADOOP-12318:
-

Hey Steve, thanks a lot for taking a look at this patch, and I think your point 
is a good one. Mind if we file a new JIRA to make this change? Seems like 
that'd make the history a bit cleaner than reverting this patch and applying a 
new one with the revision you're suggesting.

> Expose underlying LDAP exceptions in SaslPlainServer
> 
>
> Key: HADOOP-12318
> URL: https://issues.apache.org/jira/browse/HADOOP-12318
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12318.000.patch
>
>
> In the code of class 
> [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
>  the underlying exception is not included in the {{SaslException}}, which 
> leads to below error message in HiveServer2:
> {noformat}
> 2015-07-22 11:50:28,433 DEBUG 
> org.apache.thrift.transport.TSaslServerTransport: failed to open server 
> transport
> org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
> validating LDAP user
>   at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>   at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>   at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This makes it very hard for COEs to understand what the real error is.
> Can we change that line as:
> {code}
> } catch (Exception e) {
>   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
> }
> {code}





[jira] [Updated] (HADOOP-11880) aw jira sub-task testing, ignore

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11880:
--
Status: Patch Available  (was: Reopened)

> aw jira sub-task testing, ignore
> 
>
> Key: HADOOP-11880
> URL: https://issues.apache.org/jira/browse/HADOOP-11880
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
> Attachments: HDFS-EC-merge.trunk.01.patch
>
>






[jira] [Commented] (HADOOP-12244) recover broken rebase during precommit

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695811#comment-14695811
 ] 

Allen Wittenauer commented on HADOOP-12244:
---

I'll watch the builds to see if this makes the broken host go through now...

> recover broken rebase during precommit
> --
>
> Key: HADOOP-12244
> URL: https://issues.apache.org/jira/browse/HADOOP-12244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: HADOOP-12111, 3.0.0
>
> Attachments: HADOOP-12244.00.patch, HADOOP-12244.HADOOP-12111.00.patch
>
>
> One of the Jenkins hosts is failing during the git rebase that happens in 
> bootstrap.  We should probably do something better than just fail.





[jira] [Updated] (HADOOP-12244) recover broken rebase during precommit

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12244:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   HADOOP-12111
   Status: Resolved  (was: Patch Available)

Committed.

> recover broken rebase during precommit
> --
>
> Key: HADOOP-12244
> URL: https://issues.apache.org/jira/browse/HADOOP-12244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: HADOOP-12111, 3.0.0
>
> Attachments: HADOOP-12244.00.patch, HADOOP-12244.HADOOP-12111.00.patch
>
>
> One of the Jenkins hosts is failing during the git rebase that happens in 
> bootstrap.  We should probably do something better than just fail.





[jira] [Updated] (HADOOP-12244) recover broken rebase during precommit

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12244:
--
Summary: recover broken rebase during precommit  (was: recover broken 
rebase?)

> recover broken rebase during precommit
> --
>
> Key: HADOOP-12244
> URL: https://issues.apache.org/jira/browse/HADOOP-12244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HADOOP-12244.00.patch, HADOOP-12244.HADOOP-12111.00.patch
>
>
> One of the Jenkins hosts is failing during the git rebase that happens in 
> bootstrap.  We should probably do something better than just fail.





[jira] [Commented] (HADOOP-12244) recover broken rebase?

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695804#comment-14695804
 ] 

Allen Wittenauer commented on HADOOP-12244:
---

thanks! will commit to both HADOOP-12111 and trunk since this is pretty awful 
and is causing builds and precommit to break.



> recover broken rebase?
> --
>
> Key: HADOOP-12244
> URL: https://issues.apache.org/jira/browse/HADOOP-12244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HADOOP-12244.00.patch, HADOOP-12244.HADOOP-12111.00.patch
>
>
> One of the Jenkins hosts is failing during the git rebase that happens in 
> bootstrap.  We should probably do something better than just fail.





[jira] [Comment Edited] (HADOOP-12129) rework test-patch bug system support

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695792#comment-14695792
 ] 

Allen Wittenauer edited comment on HADOOP-12129 at 8/13/15 7:16 PM:


-04:
* some doc fixes/additions/cleanup
* fix some leaky variables
* point personalities to their github equivalents and set more reasonable 
defaults
* added some code comments
* fixed a minor bug with GH pull requests given on the command line
* force GH to use v3 API
* reformat the REST conversion bits to be a bit cleaner
* whitespace now writes line comments
* fix a bug where if the JIRA issue that was given as input was also a branch 
name, we don't switch to that branch
* if we can't write to a system because of creds, report it.
* guess_patch_file would throw errors if the input file didn't exist. the new 
locate_patch code can trigger this condition whereas before it didn't.
* bugsystem line comments now take a header so that plugins can report which 
plugin actually generated the message
* renamed bugsystem_output to bugsystem_finalreport to better reflect reality


was (Author: aw):
-04:
* some doc fixes/additions/cleanup
* fix some leaky variables
* point personalities to their github equivalents and set more reasonable 
defaults
* added some code comments
* fixed a minor but with GH pull requests
* force GH to use v3 API
* reformat the REST conversion bits to be a bit cleaner
* whitespace now writes line comments
* fix a bug where if the JIRA issue that was given as input was also a branch 
name, we don't switch to that branch
* if we can't write to a system because of creds, report it.
* guess_patch_file would throw errors if the input file didn't exist. the new 
locate_patch code can trigger this condition whereas before it didn't.
* bugysstem line comments now take a header so that plugins can report which 
plugin actually generated the message
* renamed bugsystem_output to bugsystem_finalreport to better reflect reality

> rework test-patch bug system support
> 
>
> Key: HADOOP-12129
> URL: https://issues.apache.org/jira/browse/HADOOP-12129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12129.HADOOP-12111.00.patch, 
> HADOOP-12129.HADOOP-12111.01.patch, HADOOP-12129.HADOOP-12111.02.patch, 
> HADOOP-12129.HADOOP-12111.03.patch, HADOOP-12129.HADOOP-12111.04.patch
>
>
> WARNING: this is a fairly big project.
> See first comment for a brain dump on the issues.





[jira] [Updated] (HADOOP-12129) rework test-patch bug system support

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12129:
--
Attachment: HADOOP-12129.HADOOP-12111.04.patch

-04:
* some doc fixes/additions/cleanup
* fix some leaky variables
* point personalities to their github equivalents and set more reasonable 
defaults
* added some code comments
* fixed a minor bug with GH pull requests
* force GH to use v3 API
* reformat the REST conversion bits to be a bit cleaner
* whitespace now writes line comments
* fix a bug where if the JIRA issue that was given as input was also a branch 
name, we don't switch to that branch
* if we can't write to a system because of creds, report it.
* guess_patch_file would throw errors if the input file didn't exist. the new 
locate_patch code can trigger this condition whereas before it didn't.
* bugsystem line comments now take a header so that plugins can report which 
plugin actually generated the message
* renamed bugsystem_output to bugsystem_finalreport to better reflect reality


> rework test-patch bug system support
> 
>
> Key: HADOOP-12129
> URL: https://issues.apache.org/jira/browse/HADOOP-12129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12129.HADOOP-12111.00.patch, 
> HADOOP-12129.HADOOP-12111.01.patch, HADOOP-12129.HADOOP-12111.02.patch, 
> HADOOP-12129.HADOOP-12111.03.patch, HADOOP-12129.HADOOP-12111.04.patch
>
>
> WARNING: this is a fairly big project.
> See first comment for a brain dump on the issues.





[jira] [Comment Edited] (HADOOP-12129) rework test-patch bug system support

2015-08-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695792#comment-14695792
 ] 

Allen Wittenauer edited comment on HADOOP-12129 at 8/13/15 7:16 PM:


-04:
* some doc fixes/additions/cleanup
* fix some leaky variables
* point personalities to their github equivalents and set more reasonable 
defaults
* added some code comments
* fixed a minor bug with GH pull requests
* force GH to use v3 API
* reformat the REST conversion bits to be a bit cleaner
* whitespace now writes line comments
* fix a bug where if the JIRA issue that was given as input was also a branch 
name, we don't switch to that branch
* if we can't write to a system because of creds, report it.
* guess_patch_file would throw errors if the input file didn't exist. the new 
locate_patch code can trigger this condition whereas before it didn't.
* bugsystem line comments now take a header so that plugins can report which 
plugin actually generated the message
* renamed bugsystem_output to bugsystem_finalreport to better reflect reality


was (Author: aw):
-04:
* some doc fixes/additions/cleanup
* fix some leaky variables
* point personalities to their github equivalents and set more reasonable 
defaults
* added some code comments
* fixed a minor but with GH pull requests
* force GH to use v3 API
* reformat the REST conversion bits to be a bit cleaner
* whitespace now writes line comments
* fix a bug where if the JIRA issue that was given as input was also a branch 
name, we don't switch to that branch
* if we can't write to a system because of creds, report it.
* guess_patch_file would throw errors if the input file didn't exist. the new 
locate_patch code can trigger this condition whereas before it didn't.
* bugysstem line comments now take a header so that plugins can report which 
plugin actually generated the message
* renamed bugsystem_output to bugsystem_finalreport to better reflect reality

* 

> rework test-patch bug system support
> 
>
> Key: HADOOP-12129
> URL: https://issues.apache.org/jira/browse/HADOOP-12129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12129.HADOOP-12111.00.patch, 
> HADOOP-12129.HADOOP-12111.01.patch, HADOOP-12129.HADOOP-12111.02.patch, 
> HADOOP-12129.HADOOP-12111.03.patch, HADOOP-12129.HADOOP-12111.04.patch
>
>
> WARNING: this is a fairly big project.
> See first comment for a brain dump on the issues.





[jira] [Commented] (HADOOP-11004) NFS gateway doesn't respect HDFS extended ACLs

2015-08-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695779#comment-14695779
 ] 

Arpit Agarwal commented on HADOOP-11004:


This Jira brings up two separate issues:
# Accessing HDFS files via NFS does not respect existing ACLs. If so, this needs 
to be fixed, since we'd expect file access via NFS to respect both HDFS 
unix-style permissions and ACLs.
# HDFS ACLs are not exposed when listing files/directories via NFS. This is 
tracked by HDFS-6949, as [~brandonli] mentioned.

> NFS gateway doesn't respect HDFS extended ACLs
> --
>
> Key: HADOOP-11004
> URL: https://issues.apache.org/jira/browse/HADOOP-11004
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs, security
>Affects Versions: 2.4.0
> Environment: HDP 2.1
>Reporter: Hari Sekhon
>
> I'm aware that the NFS gateway to HDFS doesn't work with secondary groups 
> until Hadoop 2.5 (HADOOP-10701), but I've also found that after setting 
> extended ACLs to allow the primary group of my regular user account, I'm still 
> unable to access that directory in HDFS via the NFS gateway's mount point, 
> although I can via hadoop fs commands, indicating the NFS gateway isn't 
> respecting HDFS extended ACLs. Nor does the existence of extended ACLs 
> show up via a plus sign after the rwx bits in the NFS directory listing, as 
> it does in a hadoop fs listing or for regular Linux extended ACLs.





[jira] [Updated] (HADOOP-12244) recover broken rebase?

2015-08-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12244:
---
Hadoop Flags: Reviewed

+1 for the patch.  I also reviewed some of the git codebase, and it looks like 
this is the correct way to identify that a rebase is in progress.  Thank you 
for the patch, Allen.

> recover broken rebase?
> --
>
> Key: HADOOP-12244
> URL: https://issues.apache.org/jira/browse/HADOOP-12244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HADOOP-12244.00.patch, HADOOP-12244.HADOOP-12111.00.patch
>
>
> One of the Jenkins hosts is failing during the git rebase that happens in 
> bootstrap.  We should probably do something better than just fail.





[jira] [Commented] (HADOOP-12322) typos in rpcmetrics.java

2015-08-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695762#comment-14695762
 ] 

Arpit Agarwal commented on HADOOP-12322:


+1 for the patch pending Jenkins.

Thanks for fixing this [~anu].

> typos in rpcmetrics.java
> 
>
> Key: HADOOP-12322
> URL: https://issues.apache.org/jira/browse/HADOOP-12322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HADOOP-12322.001.patch
>
>
> typos in RpcMetrics.java
> Processsing --> Processing
> sucesses ->  successes
> JobTrackerInstrumenation -> JobTrackerInstrumentation
> these are all part of the description of the metric or in comments, so they 
> should have no impact on backward compatibility.





[jira] [Updated] (HADOOP-12322) typos in rpcmetrics.java

2015-08-13 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-12322:
--
Status: Patch Available  (was: Open)

> typos in rpcmetrics.java
> 
>
> Key: HADOOP-12322
> URL: https://issues.apache.org/jira/browse/HADOOP-12322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HADOOP-12322.001.patch
>
>
> typos in RpcMetrics.java
> Processsing --> Processing
> sucesses ->  successes
> JobTrackerInstrumenation -> JobTrackerInstrumentation
> these are all part of the description of the metric or in comments, so they 
> should have no impact on backward compatibility.





[jira] [Updated] (HADOOP-12322) typos in rpcmetrics.java

2015-08-13 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-12322:
--
Attachment: HADOOP-12322.001.patch

> typos in rpcmetrics.java
> 
>
> Key: HADOOP-12322
> URL: https://issues.apache.org/jira/browse/HADOOP-12322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HADOOP-12322.001.patch
>
>
> typos in RpcMetrics.java
> Processsing --> Processing
> sucesses ->  successes
> JobTrackerInstrumenation -> JobTrackerInstrumentation
> these are all part of the description of the metric or in comments, so they 
> should have no impact on backward compatibility.





[jira] [Created] (HADOOP-12322) typos in rpcmetrics.java

2015-08-13 Thread Anu Engineer (JIRA)
Anu Engineer created HADOOP-12322:
-

 Summary: typos in rpcmetrics.java
 Key: HADOOP-12322
 URL: https://issues.apache.org/jira/browse/HADOOP-12322
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.7.1
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Trivial


typos in RpcMetrics.java

Processsing --> Processing
sucesses ->  successes
JobTrackerInstrumenation -> JobTrackerInstrumentation

these are all part of the description of the metric or in comments, so they 
should have no impact on backward compatibility.







[jira] [Updated] (HADOOP-12317) Applications fail on NM restart on some linux distro because NM container recovery declares AM container as LOST

2015-08-13 Thread Anubhav Dhoot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Dhoot updated HADOOP-12317:
---
Status: Patch Available  (was: Open)

> Applications fail on NM restart on some linux distro because NM container 
> recovery declares AM container as LOST
> 
>
> Key: HADOOP-12317
> URL: https://issues.apache.org/jira/browse/HADOOP-12317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Critical
> Attachments: YARN-4046.002.patch, YARN-4046.002.patch, 
> YARN-4096.001.patch
>
>
> On a Debian machine we have seen NodeManager recovery of containers fail 
> because the signal syntax for a process group may not work. We see errors when 
> checking whether a process is alive during container recovery, which causes the 
> container to be declared LOST (154) on a NodeManager restart.
> The application then fails with an error. The attempts are not retried.
> {noformat}
> Application application_1439244348718_0001 failed 1 times due to Attempt 
> recovered after RM restartAM Container for 
> appattempt_1439244348718_0001_01 exited with exitCode: 154
> {noformat}





[jira] [Assigned] (HADOOP-12317) Applications fail on NM restart on some linux distro because NM container recovery declares AM container as LOST

2015-08-13 Thread Anubhav Dhoot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Dhoot reassigned HADOOP-12317:
--

Assignee: Anubhav Dhoot

> Applications fail on NM restart on some linux distro because NM container 
> recovery declares AM container as LOST
> 
>
> Key: HADOOP-12317
> URL: https://issues.apache.org/jira/browse/HADOOP-12317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Critical
> Attachments: YARN-4046.002.patch, YARN-4046.002.patch, 
> YARN-4096.001.patch
>
>
> On a Debian machine we have seen NodeManager recovery of containers fail 
> because the signal syntax for a process group may not work. We see errors when 
> checking whether a process is alive during container recovery, which causes the 
> container to be declared LOST (154) on a NodeManager restart.
> The application then fails with an error. The attempts are not retried.
> {noformat}
> Application application_1439244348718_0001 failed 1 times due to Attempt 
> recovered after RM restartAM Container for 
> appattempt_1439244348718_0001_01 exited with exitCode: 154
> {noformat}





[jira] [Commented] (HADOOP-12309) [Refactor] Use java.lang.Throwable.addSuppressed(Throwable) instead of class org.apache.hadoop.io.MultipleIOException

2015-08-13 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695712#comment-14695712
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12309:
--

"A throwable contains a snapshot of the execution stack of its thread at the 
time it was created."  See 
http://docs.oracle.com/javase/7/docs/api/java/lang/Throwable.html

> [Refactor] Use java.lang.Throwable.addSuppressed(Throwable) instead of class 
> org.apache.hadoop.io.MultipleIOException
> -
>
> Key: HADOOP-12309
> URL: https://issues.apache.org/jira/browse/HADOOP-12309
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Minor
> Attachments: HADOOP-12309.patch
>
>
> We can use java.lang.Throwable.addSuppressed(Throwable) instead of 
> org.apache.hadoop.io.MultipleIOException, since Java 1.7+ supports this 
> natively. org.apache.hadoop.io.MultipleIOException can then be deprecated:
> {code}
> 
> catch (IOException e) {
>   if(generalException == null)
>   {
> generalException = new IOException("General exception");
>   }
>   generalException.addSuppressed(e);
> }
> {code}
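The proposed pattern can be sketched as a small, self-contained example; the `closeAll` helper and the resource lambdas here are hypothetical, for illustration only, not the actual Hadoop code:

```java
import java.io.IOException;

public class SuppressedDemo {
    // Close every resource, collecting failures as suppressed exceptions on a
    // single IOException instead of aggregating them via MultipleIOException.
    public static IOException closeAll(AutoCloseable... resources) {
        IOException general = null;
        for (AutoCloseable r : resources) {
            try {
                r.close();
            } catch (Exception e) {
                if (general == null) {
                    general = new IOException("General exception");
                }
                general.addSuppressed(e); // each cause keeps its own stack trace
            }
        }
        return general; // null if every close succeeded
    }

    public static void main(String[] args) {
        AutoCloseable ok = () -> { };
        AutoCloseable bad = () -> { throw new IOException("close failed"); };
        IOException e = closeAll(ok, bad, bad);
        System.out.println(e.getSuppressed().length); // prints 2
    }
}
```

A nice side effect, as the Throwable javadoc quoted above implies, is that each suppressed exception retains the stack snapshot taken when it was created.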





[jira] [Commented] (HADOOP-12303) test-patch pylint plugin fails silently and votes +1 incorrectly

2015-08-13 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695679#comment-14695679
 ] 

Kengo Seki commented on HADOOP-12303:
-

In the above situation, pylint returns 1.

{code}
[sekikn@mobile hadoop]$ pylint dev-support/releasedocmaker.py >/dev/null 
No config file found, using default configuration
Traceback (most recent call last):
  File "/usr/local/bin/pylint", line 11, in 
sys.exit(run_pylint())
  File "/Library/Python/2.7/site-packages/pylint/__init__.py", line 23, in 
run_pylint
Run(sys.argv[1:])
  File "/Library/Python/2.7/site-packages/pylint/lint.py", line 1332, in 
__init__
linter.check(args)
  File "/Library/Python/2.7/site-packages/pylint/lint.py", line 747, in check
self._do_check(files_or_modules)
  File "/Library/Python/2.7/site-packages/pylint/lint.py", line 869, in 
_do_check
self.check_astroid_module(ast_node, walker, rawcheckers, tokencheckers)
  File "/Library/Python/2.7/site-packages/pylint/lint.py", line 944, in 
check_astroid_module
checker.process_tokens(tokens)
  File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 727, 
in process_tokens
self.new_line(TokenWrapper(tokens), idx-1, idx+1)
  File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 473, 
in new_line
self.check_lines(line, line_num)
  File "/Library/Python/2.7/site-packages/pylint/checkers/format.py", line 932, 
in check_lines
self.add_message('line-too-long', line=i, args=(len(line), max_chars))
  File "/Library/Python/2.7/site-packages/pylint/checkers/__init__.py", line 
101, in add_message
self.linter.add_message(msg_id, line, node, args, confidence)
  File "/Library/Python/2.7/site-packages/pylint/utils.py", line 410, in 
add_message
(abspath, path, module, obj, line or 1, col_offset or 0), msg, confidence))
  File "/Library/Python/2.7/site-packages/pylint/reporters/text.py", line 61, 
in handle_message
self.write_message(msg)
  File "/Library/Python/2.7/site-packages/pylint/reporters/text.py", line 51, 
in write_message
self.writeln(msg.format(self._template))
  File "/Library/Python/2.7/site-packages/pylint/reporters/__init__.py", line 
94, in writeln
print(self.encode(string), file=self.out)
  File "/Library/Python/2.7/site-packages/pylint/reporters/__init__.py", line 
84, in encode
locale.getdefaultlocale()[1] or
  File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py",
 line 511, in getdefaultlocale
return _parse_localename(localename)
  File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py",
 line 443, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
[sekikn@mobile hadoop]$ echo $?
1
{code}

But pylint can also return 1 even if linting succeeded.

{code}
[sekikn@mobile hadoop]$ cat a.py 
"""a.py"""
import simplejson
print dir(simplejson)
[sekikn@mobile hadoop]$ pylint --reports=n a.py 
No config file found, using default configuration
* Module a
F:  2, 0: Unable to import 'simplejson' (import-error)
[sekikn@mobile hadoop]$ echo $?
1
{code}

So the status code doesn't help, though this example is somewhat artificial. 
Probably we need to distinguish them by the stderr contents.

> test-patch pylint plugin fails silently and votes +1 incorrectly
> 
>
> Key: HADOOP-12303
> URL: https://issues.apache.org/jira/browse/HADOOP-12303
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> This patch
> {code}
> [sekikn@mobile hadoop]$ cat /tmp/test.patch 
> diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
> index 37bd58a..7cd6dd3 100755
> --- a/dev-support/releasedocmaker.py
> +++ b/dev-support/releasedocmaker.py
> @@ -580,4 +580,4 @@ def main():
>  sys.exit(1)
>  
>  if __name__ == "__main__":
> -main()
> +main( )
> {code}
> is supposed to cause the following pylint errors.
> {code}
> C:583, 0: No space allowed after bracket
> main( )
> ^ (bad-whitespace)
> C:583, 0: No space allowed before bracket
> main( )
>   ^ (bad-whitespace)
> {code}
> But when the system locale is set as follows, the pylint check passes, and 
> there is no pylint output.
> {code}
> [sekikn@mobile hadoop]$ locale
> LANG=
> LC_COLLATE="C"
> LC_CTYPE="UTF-8"
> LC_MESSAGES="C"
> LC_MONETARY="C"
> LC_NUMERIC="C"
> LC_TIME="C"
> LC_ALL=
> [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
> --basedir=/Users/sekikn/dev/hadoop --project=hadoop /tmp/test.patch 
> (snip)
> | Vote |  Subsystem |  Runtime   | Comment
> 
> |  +1  |   @author  

[jira] [Commented] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-08-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695570#comment-14695570
 ] 

Sunil G commented on HADOOP-12321:
--

I think this change definitely helps in resolving the current complexities in 
the use cases of JvmPauseMonitor. I would like to give it a try, if that's fine. :)

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle with respect to ordering of operations and, 
> even after HADOOP-12313, is not thread safe (both start and stop are 
> potentially re-entrant).
> It also requires every class which hosts the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into the {{serviceInit()}}, {{serviceStart()}} & 
> {{serviceStop()}} methods will fix the concurrency and state model issues. 
> It will also make it trivial to add the monitor as a child of any YARN 
> service which subclasses {{CompositeService}} (most of the NM and RM 
> services): they can hook up the monitor simply by creating one in the ctor 
> and adding it as a child.
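The lifecycle pattern described above can be sketched in a minimal, self-contained form. The real Hadoop class is org.apache.hadoop.service.AbstractService; the class and state names below are simplified stand-ins, not Hadoop's actual API:

```java
// Simplified sketch of the service-lifecycle pattern described above. The
// real class is org.apache.hadoop.service.AbstractService; names here are
// illustrative only.
abstract class MiniService {
    enum State { NOTINITED, STARTED, STOPPED }
    private State state = State.NOTINITED;

    // start/stop are synchronized and idempotent, so re-entrant calls are
    // harmless -- the concurrency fix the issue is after.
    public final synchronized void start() {
        if (state == State.STARTED) return;
        serviceStart();
        state = State.STARTED;
    }
    public final synchronized void stop() {
        if (state == State.STOPPED) return;
        serviceStop();
        state = State.STOPPED;
    }
    public final synchronized State getState() { return state; }

    protected abstract void serviceStart();
    protected abstract void serviceStop();
}

// A pause monitor hosted as a child service: a CompositeService-style parent
// only needs to call start()/stop(), with no extra lifecycle bookkeeping.
class PauseMonitorService extends MiniService {
    private Thread monitor;

    @Override protected void serviceStart() {
        monitor = new Thread(() -> {
            try { Thread.sleep(Long.MAX_VALUE); }  // stand-in for pause detection
            catch (InterruptedException ignored) { }
        });
        monitor.setDaemon(true);
        monitor.start();
    }
    @Override protected void serviceStop() {
        if (monitor != null) monitor.interrupt();
    }
}
```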





[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695524#comment-14695524
 ] 

Hudson commented on HADOOP-12258:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2232 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2232/])
HADOOP-12258. Need translate java.nio.file.NoSuchFileException to 
FileNotFoundException to avoid regression. Contributed by Zhihai Xu. (cnauroth: 
rev 6cc8e38db5b26bdd02bc6bc1c9684db2593eec25)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSetTimesTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractGetFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractSetTimes.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/rawlocal.xml
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/contract/hdfs.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml


> Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
> avoid regression
> -
>
> Key: HADOOP-12258
> URL: https://issues.apache.org/jira/browse/HADOOP-12258
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
> HADOOP-12258.002.patch
>
>
> We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
> to avoid a regression.
> HADOOP-12045 adds NIO to support access time, but NIO throws 
> java.nio.file.NoSuchFileException instead of FileNotFoundException.
> Much Hadoop code depends on FileNotFoundException to decide whether a file 
> exists, for example {{FileContext.util().exists()}}: 
> {code}
> public boolean exists(final Path f) throws AccessControlException,
>   UnsupportedFileSystemException, IOException {
>   try {
> FileStatus fs = FileContext.this.getFileStatus(f);
> assert fs != null;
> return true;
>   } catch (FileNotFoundException e) {
> return false;
>   }
> }
> {code}
> same for {{FileSystem#exists}}
> {code}
>   public boolean exists(Path f) throws IOException {
> try {
>   return getFileStatus(f) != null;
> } catch (FileNotFoundException e) {
>   return false;
> }
>   }
> {code}
> NoSuchFileException will break these functions.
> Since {{exists}} is one of the most-used APIs in FileSystem, this issue is 
> very critical.
> Several test failures for TestDeletionService are caused by this issue:
> https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
> https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/
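The translation the fix calls for can be sketched as follows; `statModTime` is a hypothetical helper for illustration, not the actual RawLocalFileSystem code:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;

public class NioTranslateDemo {
    // Hypothetical helper: read a timestamp via NIO, but translate
    // NoSuchFileException so callers that catch FileNotFoundException
    // (such as FileSystem#exists above) keep working.
    public static long statModTime(String path) throws IOException {
        try {
            return Files.getLastModifiedTime(Paths.get(path)).toMillis();
        } catch (NoSuchFileException e) {
            // Re-throw as the exception type callers expect, preserving the
            // original NIO exception as the cause.
            FileNotFoundException fnfe =
                new FileNotFoundException(path + " (No such file or directory)");
            fnfe.initCause(e);
            throw fnfe;
        }
    }
}
```

Since NoSuchFileException is a subclass of IOException but not of FileNotFoundException, catch blocks written against the latter silently stop matching unless the exception is translated this way.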





[jira] [Commented] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695520#comment-14695520
 ] 

Hudson commented on HADOOP-12318:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2232 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2232/])
HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed 
by Mike Yoder. (atm: rev 820f864a26d90e9f4a3584577df581dcac20f9b6)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Expose underlying LDAP exceptions in SaslPlainServer
> 
>
> Key: HADOOP-12318
> URL: https://issues.apache.org/jira/browse/HADOOP-12318
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12318.000.patch
>
>
> In the code of class 
> [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
>  the underlying exception is not included in the {{SaslException}}, which 
> leads to the error message below in HiveServer2:
> {noformat}
> 2015-07-22 11:50:28,433 DEBUG 
> org.apache.thrift.transport.TSaslServerTransport: failed to open server 
> transport
> org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
> validating LDAP user
>   at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>   at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>   at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This makes it very hard for COEs to understand what the real error is.
> Can we change that line to:
> {code}
> } catch (Exception e) {
>   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
> }
> {code}





[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695523#comment-14695523
 ] 

Hudson commented on HADOOP-12295:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2232 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2232/])
HADOOP-12295. Improve NetworkTopology#InnerNode#remove logic. (yliu) (yliu: rev 
53bef9c5b98dee87d4ffaf35415bc38e2f876ed8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java


> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
> the parent node, with no need to loop over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in current code is:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block.
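The map-versus-list lookup described above can be illustrated with a toy sketch; the field and method names here are illustrative, not the actual NetworkTopology internals:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch: keeping a name-keyed map alongside the children list lets
// remove() locate a child in O(1) instead of scanning the list. Names are
// illustrative, not the actual NetworkTopology fields.
class InnerNodeSketch {
    private final List<String> children = new ArrayList<>();
    private final Map<String, String> childrenMap = new HashMap<>();

    void add(String name, String value) {
        children.add(name);
        childrenMap.put(name, value);
    }

    // O(1) lookup via the map; the pre-patch code looped over children.
    String lookup(String name) {
        return childrenMap.get(name);
    }

    boolean remove(String name) {
        if (!childrenMap.containsKey(name)) {
            return false;           // fast negative check, no list scan
        }
        childrenMap.remove(name);
        return children.remove(name);
    }
}
```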





[jira] [Commented] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695481#comment-14695481
 ] 

Hudson commented on HADOOP-12318:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/283/])
HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed 
by Mike Yoder. (atm: rev 820f864a26d90e9f4a3584577df581dcac20f9b6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java


> Expose underlying LDAP exceptions in SaslPlainServer
> 
>
> Key: HADOOP-12318
> URL: https://issues.apache.org/jira/browse/HADOOP-12318
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12318.000.patch
>
>
> In the code of class 
> [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
>  the underlying exception is not included in the {{SaslException}}, which 
> leads to the error message below in HiveServer2:
> {noformat}
> 2015-07-22 11:50:28,433 DEBUG 
> org.apache.thrift.transport.TSaslServerTransport: failed to open server 
> transport
> org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
> validating LDAP user
>   at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>   at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>   at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This makes it very hard for COEs to understand what the real error is.
> Can we change that line to:
> {code}
> } catch (Exception e) {
>   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
> }
> {code}





[jira] [Commented] (HADOOP-12315) hbaseprotoc_postapply in the test-patch hbase personality can return a wrong status

2015-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695478#comment-14695478
 ] 

Hadoop QA commented on HADOOP-12315:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12750301/HADOOP-12315.HADOOP-12111.00.patch
 |
| Optional Tests | shellcheck |
| git revision | HADOOP-12111 / 96f2745 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7459/console |


This message was automatically generated.

> hbaseprotoc_postapply in the test-patch hbase personality can return a wrong 
> status
> ---
>
> Key: HADOOP-12315
> URL: https://issues.apache.org/jira/browse/HADOOP-12315
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12315.HADOOP-12111.00.patch
>
>
> Similar to HADOOP-12314. hbaseprotoc_postapply returns the value of 
> $\{results}, but if a module status is -1, $\{result} is incremented.





[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695484#comment-14695484
 ] 

Hudson commented on HADOOP-12295:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/283/])
HADOOP-12295. Improve NetworkTopology#InnerNode#remove logic. (yliu) (yliu: rev 
53bef9c5b98dee87d4ffaf35415bc38e2f876ed8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
> the parent node, with no need to loop over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in current code is:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block.





[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695485#comment-14695485
 ] 

Hudson commented on HADOOP-12258:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/283/])
HADOOP-12258. Need translate java.nio.file.NoSuchFileException to 
FileNotFoundException to avoid regression. Contributed by Zhihai Xu. (cnauroth: 
rev 6cc8e38db5b26bdd02bc6bc1c9684db2593eec25)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractGetFileStatus.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractSetTimes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractSetTimes.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/rawlocal.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/contract/hdfs.xml
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSetTimesTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractGetFileStatus.java


> Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
> avoid regression
> -
>
> Key: HADOOP-12258
> URL: https://issues.apache.org/jira/browse/HADOOP-12258
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
> HADOOP-12258.002.patch
>
>
> We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
> to avoid a regression.
> HADOOP-12045 adds NIO to support access times, but NIO throws 
> java.nio.file.NoSuchFileException instead of FileNotFoundException.
> Much Hadoop code depends on FileNotFoundException to decide whether a file 
> exists, for example {{FileContext.util().exists()}}: 
> {code}
> public boolean exists(final Path f) throws AccessControlException,
>   UnsupportedFileSystemException, IOException {
>   try {
> FileStatus fs = FileContext.this.getFileStatus(f);
> assert fs != null;
> return true;
>   } catch (FileNotFoundException e) {
> return false;
>   }
> }
> {code}
> same for {{FileSystem#exists}}
> {code}
>   public boolean exists(Path f) throws IOException {
> try {
>   return getFileStatus(f) != null;
> } catch (FileNotFoundException e) {
>   return false;
> }
>   }
> {code}
> NoSuchFileException will break these functions.
> Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
> issue is critical.
> Several test failures for TestDeletionService are caused by this issue:
> https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
> https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/
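
The translation the patch performs can be sketched like this (a minimal illustration, not the actual RawLocalFileSystem code): catch NIO's NoSuchFileException and rethrow it as a FileNotFoundException with the original exception chained as the cause, so callers such as exists() keep working.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class NioTranslateSketch {
    // NoSuchFileException is an IOException, but NOT a FileNotFoundException,
    // so it must be translated for existing catch blocks to keep working.
    static BasicFileAttributes readAttributes(String path) throws IOException {
        try {
            return Files.readAttributes(Paths.get(path), BasicFileAttributes.class);
        } catch (NoSuchFileException e) {
            FileNotFoundException fnfe = new FileNotFoundException(e.getFile());
            fnfe.initCause(e); // keep the original NIO exception for debugging
            throw fnfe;
        }
    }

    // In the style of FileSystem#exists (simplified here to swallow other
    // IOExceptions too, which the real method does not).
    static boolean exists(String path) {
        try {
            return readAttributes(path) != null;
        } catch (FileNotFoundException e) {
            return false;
        } catch (IOException e) {
            return false;
        }
    }
}
```

Without the translation, the `catch (FileNotFoundException e)` in exists() never fires and the NoSuchFileException propagates to the caller instead.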



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695460#comment-14695460
 ] 

Hudson commented on HADOOP-12258:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #275 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/275/])
HADOOP-12258. Need translate java.nio.file.NoSuchFileException to 
FileNotFoundException to avoid regression. Contributed by Zhihai Xu. (cnauroth: 
rev 6cc8e38db5b26bdd02bc6bc1c9684db2593eec25)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSetTimesTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractSetTimes.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/contract/hdfs.xml
* hadoop-common-project/hadoop-common/src/test/resources/contract/rawlocal.xml
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractGetFileStatus.java


> Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
> avoid regression
> -
>
> Key: HADOOP-12258
> URL: https://issues.apache.org/jira/browse/HADOOP-12258
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
> HADOOP-12258.002.patch
>
>
> We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
> to avoid a regression.
> HADOOP-12045 adds NIO to support access times, but NIO throws 
> java.nio.file.NoSuchFileException instead of FileNotFoundException.
> Much Hadoop code depends on FileNotFoundException to decide whether a file 
> exists, for example {{FileContext.util().exists()}}: 
> {code}
> public boolean exists(final Path f) throws AccessControlException,
>   UnsupportedFileSystemException, IOException {
>   try {
> FileStatus fs = FileContext.this.getFileStatus(f);
> assert fs != null;
> return true;
>   } catch (FileNotFoundException e) {
> return false;
>   }
> }
> {code}
> same for {{FileSystem#exists}}
> {code}
>   public boolean exists(Path f) throws IOException {
> try {
>   return getFileStatus(f) != null;
> } catch (FileNotFoundException e) {
>   return false;
> }
>   }
> {code}
> NoSuchFileException will break these functions.
> Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
> issue is critical.
> Several test failures for TestDeletionService are caused by this issue:
> https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
> https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695459#comment-14695459
 ] 

Hudson commented on HADOOP-12295:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #275 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/275/])
HADOOP-12295. Improve NetworkTopology#InnerNode#remove logic. (yliu) (yliu: rev 
53bef9c5b98dee87d4ffaf35415bc38e2f876ed8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java


> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to look up 
> the parent node instead of looping over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in the current code:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block, where it is actually used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695456#comment-14695456
 ] 

Hudson commented on HADOOP-12318:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #275 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/275/])
HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed 
by Mike Yoder. (atm: rev 820f864a26d90e9f4a3584577df581dcac20f9b6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java


> Expose underlying LDAP exceptions in SaslPlainServer
> 
>
> Key: HADOOP-12318
> URL: https://issues.apache.org/jira/browse/HADOOP-12318
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12318.000.patch
>
>
> In the code of class 
> [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
>  the underlying exception is not included in the {{SaslException}}, which 
> leads to the error message below in HiveServer2:
> {noformat}
> 2015-07-22 11:50:28,433 DEBUG 
> org.apache.thrift.transport.TSaslServerTransport: failed to open server 
> transport
> org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
> validating LDAP user
>   at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>   at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>   at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This makes it very hard for COEs to understand what the real error is.
> Can we change that line to:
> {code}
> } catch (Exception e) {
>   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
> }
> {code}
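
The effect of chaining the cause can be sketched as follows (hypothetical helper, not the actual SaslPlainServer code): with the two-argument constructor, the underlying LDAP exception survives on getCause() instead of being reduced to a message string.

```java
import javax.security.sasl.SaslException;

public class ChainedSaslSketch {
    // Before: new SaslException("PLAIN auth failed: " + e.getMessage())
    // dropped the underlying exception. Chaining it preserves the real error
    // and its stack trace for anyone reading the server logs.
    static SaslException wrap(Exception e) {
        return new SaslException("PLAIN auth failed: " + e.getMessage(), e);
    }

    public static void main(String[] args) {
        SaslException se = wrap(new RuntimeException("Error validating LDAP user"));
        System.out.println(se.getCause());
    }
}
```

Logging frameworks print the whole cause chain, so the original LDAP failure shows up below the "PLAIN auth failed" line instead of being lost.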



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: (!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7456/console in case of 
problems.)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12129.HADOOP-12111.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-08-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} HADOOP-12111 passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
11s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 31s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12750216/HADOOP-12129.HADOOP-12111.03.patch
 |
| JIRA Issue | HADOOP-11820 |
| git revision | HADOOP-12111 / 96f2745 |
| Optional Tests | asflicense site unit shellcheck |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7456/testReport/ |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7456/console |


This message was automatically generated.

)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-12129.HADOOP-12111.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12315) hbaseprotoc_postapply in the test-patch hbase personality can return a wrong status

2015-08-13 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12315:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> hbaseprotoc_postapply in the test-patch hbase personality can return a wrong 
> status
> ---
>
> Key: HADOOP-12315
> URL: https://issues.apache.org/jira/browse/HADOOP-12315
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12315.HADOOP-12111.00.patch
>
>
> Similar to HADOOP-12314: hbaseprotoc_postapply returns the value of 
> $\{results}, but when a module's status is -1, it is $\{result} that is 
> incremented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12315) hbaseprotoc_postapply in the test-patch hbase personality can return a wrong status

2015-08-13 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12315:

Attachment: HADOOP-12315.HADOOP-12111.00.patch

-00:

* replace $results with $result in hbaseprotoc_postapply
* fix hbaseprotoc_postapply failure caused by a missing maven parameter

I confirmed that hbaseprotoc_postapply is called if a .proto file is changed 
and it works:

{code}
[sekikn@mobile hadoop]$ cat /tmp/test.patch 
diff --git a/hbase-protocol/src/main/protobuf/AccessControl.proto 
b/hbase-protocol/src/main/protobuf/AccessControl.proto
index e67540b..2d36568 100644
--- a/hbase-protocol/src/main/protobuf/AccessControl.proto
+++ b/hbase-protocol/src/main/protobuf/AccessControl.proto
@@ -121,3 +121,4 @@ service AccessControlService {
 rpc CheckPermissions(CheckPermissionsRequest)
   returns (CheckPermissionsResponse);
 }
+
[sekikn@mobile hadoop]$ dev-support/test-patch.sh --basedir=/Users/sekikn/hbase 
--project=hbase /tmp/test.patch 

(snip)



 Patch HBase protoc plugin




cd /Users/sekikn/hbase/hbase-protocol
mvn --batch-mode compile -DskipTests -Pcompile-protobuf -X -DHBasePatchProcess 
-DskipTests -DHBasePatchProcess -Ptest-patch > 
/private/tmp/test-patch-hbase/1142/patch-hbaseprotoc-hbase-protocol.txt 2>&1
Elapsed:   0m 40s

(snip)

| Vote |   Subsystem |  Runtime   | Comment

|  +1  |@author  |  0m 00s| The patch does not contain any @author 
|  | || tags.
|  -1  | test4tests  |  0m 00s| The patch doesn't appear to include any 
|  | || new or modified tests. Please justify
|  | || why no new tests are needed for this
|  | || patch. Also please list what manual
|  | || steps were performed to verify this
|  | || patch.
|  +1  | asflicense  |  0m 19s| Patch does not generate ASF License 
|  | || warnings.
|  +1  | whitespace  |  0m 00s| Patch has no whitespace issues. 
|  +1  |hadoopcheck  |  10m 21s   | Patch does not cause any errors with 
|  | || Hadoop 2.4.1 2.5.2 2.6.0.
|  +1  |hbaseprotoc  |  0m 40s| the patch passed 
|  | |  11m 21s   | 
{code}

> hbaseprotoc_postapply in the test-patch hbase personality can return a wrong 
> status
> ---
>
> Key: HADOOP-12315
> URL: https://issues.apache.org/jira/browse/HADOOP-12315
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
> Attachments: HADOOP-12315.HADOOP-12111.00.patch
>
>
> Similar to HADOOP-12314: hbaseprotoc_postapply returns the value of 
> $\{results}, but when a module's status is -1, it is $\{result} that is 
> incremented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695393#comment-14695393
 ] 

Hudson commented on HADOOP-12258:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2213 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2213/])
HADOOP-12258. Need translate java.nio.file.NoSuchFileException to 
FileNotFoundException to avoid regression. Contributed by Zhihai Xu. (cnauroth: 
rev 6cc8e38db5b26bdd02bc6bc1c9684db2593eec25)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/contract/hdfs.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractSetTimes.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/rawlocal.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractSetTimes.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractGetFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSetTimesTest.java


> Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
> avoid regression
> -
>
> Key: HADOOP-12258
> URL: https://issues.apache.org/jira/browse/HADOOP-12258
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
> HADOOP-12258.002.patch
>
>
> We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
> to avoid a regression.
> HADOOP-12045 adds NIO to support access times, but NIO throws 
> java.nio.file.NoSuchFileException instead of FileNotFoundException.
> Much Hadoop code depends on FileNotFoundException to decide whether a file 
> exists, for example {{FileContext.util().exists()}}: 
> {code}
> public boolean exists(final Path f) throws AccessControlException,
>   UnsupportedFileSystemException, IOException {
>   try {
> FileStatus fs = FileContext.this.getFileStatus(f);
> assert fs != null;
> return true;
>   } catch (FileNotFoundException e) {
> return false;
>   }
> }
> {code}
> same for {{FileSystem#exists}}
> {code}
>   public boolean exists(Path f) throws IOException {
> try {
>   return getFileStatus(f) != null;
> } catch (FileNotFoundException e) {
>   return false;
> }
>   }
> {code}
> NoSuchFileException will break these functions.
> Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
> issue is critical.
> Several test failures for TestDeletionService are caused by this issue:
> https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
> https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695389#comment-14695389
 ] 

Hudson commented on HADOOP-12318:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2213 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2213/])
HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed 
by Mike Yoder. (atm: rev 820f864a26d90e9f4a3584577df581dcac20f9b6)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Expose underlying LDAP exceptions in SaslPlainServer
> 
>
> Key: HADOOP-12318
> URL: https://issues.apache.org/jira/browse/HADOOP-12318
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12318.000.patch
>
>
> In the code of class 
> [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
>  the underlying exception is not included in the {{SaslException}}, which 
> leads to the error message below in HiveServer2:
> {noformat}
> 2015-07-22 11:50:28,433 DEBUG 
> org.apache.thrift.transport.TSaslServerTransport: failed to open server 
> transport
> org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
> validating LDAP user
>   at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>   at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>   at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This makes it very hard for COEs to understand what the real error is.
> Can we change that line to:
> {code}
> } catch (Exception e) {
>   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695392#comment-14695392
 ] 

Hudson commented on HADOOP-12295:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2213 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2213/])
HADOOP-12295. Improve NetworkTopology#InnerNode#remove logic. (yliu) (yliu: rev 
53bef9c5b98dee87d4ffaf35415bc38e2f876ed8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to look up 
> the parent node instead of looping over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in the current code:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block, where it is actually used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11295) RPC Server Reader thread can't shutdown if RPCCallQueue is full

2015-08-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11295:
---
Attachment: HADOOP-11295.branch-2.6.patch

Rebased the patch for branch-2.6.

> RPC Server Reader thread can't shutdown if RPCCallQueue is full
> ---
>
> Key: HADOOP-11295
> URL: https://issues.apache.org/jira/browse/HADOOP-11295
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
>  Labels: 2.6.1-candidate
> Fix For: 2.7.0
>
> Attachments: HADOOP-11295-2.patch, HADOOP-11295-3.patch, 
> HADOOP-11295-4.patch, HADOOP-11295-5.patch, HADOOP-11295.006.patch, 
> HADOOP-11295.branch-2.6.patch, HADOOP-11295.patch
>
>
> If the RPC server is asked to stop when the RPCCallQueue is full, 
> {{reader.join()}} will wait forever. That is because:
> 1. The reader thread is blocked on {{callQueue.put(call);}}.
> 2. When the RPC server is asked to stop, it interrupts all handler threads, 
> so no thread will drain the callQueue.
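
The interaction can be sketched with a plain BlockingQueue (a minimal, self-contained illustration, not the actual ipc.Server code): a thread blocked on put() against a full queue only returns when the queue is drained or the thread is interrupted, which is why stop() must interrupt the readers before joining them.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class BlockedReaderSketch {
    static boolean interruptUnblocksPut() {
        try {
            final BlockingQueue<Integer> callQueue = new ArrayBlockingQueue<>(1);
            callQueue.put(0); // fill the queue to capacity
            final CountDownLatch unblocked = new CountDownLatch(1);
            Thread reader = new Thread(() -> {
                try {
                    callQueue.put(1); // blocks: no handler drains the queue
                } catch (InterruptedException e) {
                    unblocked.countDown(); // interrupt is the only way out
                }
            });
            reader.start();
            Thread.sleep(100);  // give the reader time to block on put()
            reader.interrupt(); // what a correct stop() must do before join()
            return unblocked.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(interruptUnblocksPut());
    }
}
```

Without the interrupt() call, join() would hang exactly as {{reader.join()}} does in the report, since the interrupted handlers never take() from the queue again.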



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695128#comment-14695128
 ] 

Hudson commented on HADOOP-12295:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1016 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1016/])
HADOOP-12295. Improve NetworkTopology#InnerNode#remove logic. (yliu) (yliu: rev 
53bef9c5b98dee87d4ffaf35415bc38e2f876ed8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to look up 
> the parent node instead of looping over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in the current code:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block, where it is actually used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695125#comment-14695125
 ] 

Hudson commented on HADOOP-12318:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1016 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1016/])
HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed 
by Mike Yoder. (atm: rev 820f864a26d90e9f4a3584577df581dcac20f9b6)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Expose underlying LDAP exceptions in SaslPlainServer
> 
>
> Key: HADOOP-12318
> URL: https://issues.apache.org/jira/browse/HADOOP-12318
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12318.000.patch
>
>
> In the code of class 
> [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
>  the underlying exception is not included in the {{SaslException}}, which 
> leads to the error message below in HiveServer2:
> {noformat}
> 2015-07-22 11:50:28,433 DEBUG 
> org.apache.thrift.transport.TSaslServerTransport: failed to open server 
> transport
> org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
> validating LDAP user
>   at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>   at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>   at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This makes it very hard for COEs to understand what the real error is.
> Can we change that line as follows:
> {code}
> } catch (Exception e) {
>   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
> }
> {code}
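The one-line change above works because Java exceptions carry an optional cause. A minimal, self-contained sketch of the chaining behavior the patch relies on (the class and method names are illustrative, not the actual SaslPlainServer code):

```java
import javax.security.sasl.SaslException;

// Illustrative sketch: chaining the cause keeps the real LDAP error
// reachable for callers via getCause(), instead of dropping it.
public class CauseChainDemo {
    static SaslException wrap() {
        try {
            // Stand-in for the real LDAP failure inside evaluateResponse().
            throw new IllegalStateException("Error validating LDAP user");
        } catch (Exception e) {
            // The proposed change: pass 'e' as the cause instead of losing it.
            return new SaslException("PLAIN auth failed: " + e.getMessage(), e);
        }
    }

    public static void main(String[] args) {
        SaslException se = wrap();
        System.out.println(se.getMessage());
        System.out.println(se.getCause().getMessage());
    }
}
```

With the cause chained, the stack trace logged by HiveServer2 would include the underlying LDAP error rather than only the generic "PLAIN auth failed" message.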



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695129#comment-14695129
 ] 

Hudson commented on HADOOP-12258:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1016 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1016/])
HADOOP-12258. Need translate java.nio.file.NoSuchFileException to 
FileNotFoundException to avoid regression. Contributed by Zhihai Xu. (cnauroth: 
rev 6cc8e38db5b26bdd02bc6bc1c9684db2593eec25)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSetTimesTest.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/rawlocal.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractGetFileStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/contract/hdfs.xml


> Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
> avoid regression
> -
>
> Key: HADOOP-12258
> URL: https://issues.apache.org/jira/browse/HADOOP-12258
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
> HADOOP-12258.002.patch
>
>
> We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
> to avoid a regression.
> HADOOP-12045 adds NIO to support access times, but NIO throws 
> java.nio.file.NoSuchFileException instead of FileNotFoundException.
> Much Hadoop code depends on FileNotFoundException to decide whether a file 
> exists, for example {{FileContext.util().exists()}}:
> {code}
> public boolean exists(final Path f) throws AccessControlException,
>   UnsupportedFileSystemException, IOException {
>   try {
> FileStatus fs = FileContext.this.getFileStatus(f);
> assert fs != null;
> return true;
>   } catch (FileNotFoundException e) {
> return false;
>   }
> }
> {code}
> same for {{FileSystem#exists}}
> {code}
>   public boolean exists(Path f) throws IOException {
> try {
>   return getFileStatus(f) != null;
> } catch (FileNotFoundException e) {
>   return false;
> }
>   }
> {code}
> NoSuchFileException breaks these functions.
> Since {{exists}} is one of the most-used APIs in FileSystem, this issue is 
> very critical.
> Several test failures for TestDeletionService are caused by this issue:
> https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
> https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/
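The fix amounts to catching NIO's exception at the filesystem boundary and re-throwing the type that exists()-style callers already handle. A hedged sketch of that translation (the method name is illustrative, not the actual RawLocalFileSystem code):

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

// Illustrative sketch: NIO's NoSuchFileException is translated to
// FileNotFoundException so FileSystem#exists keeps returning false.
public class NioTranslationDemo {
    static long accessTimeMillis(String path) throws IOException {
        try {
            BasicFileAttributes attrs =
                Files.readAttributes(Paths.get(path), BasicFileAttributes.class);
            return attrs.lastAccessTime().toMillis();
        } catch (NoSuchFileException e) {
            // Re-throw as the legacy type, chaining the original as the cause.
            FileNotFoundException fnfe = new FileNotFoundException(e.getMessage());
            fnfe.initCause(e);
            throw fnfe;
        }
    }
}
```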





[jira] [Commented] (HADOOP-12318) Expose underlying LDAP exceptions in SaslPlainServer

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695118#comment-14695118
 ] 

Hudson commented on HADOOP-12318:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #286 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/286/])
HADOOP-12318. Expose underlying LDAP exceptions in SaslPlainServer. Contributed 
by Mike Yoder. (atm: rev 820f864a26d90e9f4a3584577df581dcac20f9b6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java


> Expose underlying LDAP exceptions in SaslPlainServer
> 
>
> Key: HADOOP-12318
> URL: https://issues.apache.org/jira/browse/HADOOP-12318
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12318.000.patch
>
>
> In the code of class 
> [SaslPlainServer|http://github.mtv.cloudera.com/CDH/hadoop/blob/cdh5-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java#L108],
>  the underlying exception is not included in the {{SaslException}}, which 
> leads to the following error message in HiveServer2:
> {noformat}
> 2015-07-22 11:50:28,433 DEBUG 
> org.apache.thrift.transport.TSaslServerTransport: failed to open server 
> transport
> org.apache.thrift.transport.TTransportException: PLAIN auth failed: Error 
> validating LDAP user
>   at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>   at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>   at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This makes it very hard for COEs to understand what the real error is.
> Can we change that line as follows:
> {code}
> } catch (Exception e) {
>   throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
> }
> {code}





[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695122#comment-14695122
 ] 

Hudson commented on HADOOP-12258:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #286 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/286/])
HADOOP-12258. Need translate java.nio.file.NoSuchFileException to 
FileNotFoundException to avoid regression. Contributed by Zhihai Xu. (cnauroth: 
rev 6cc8e38db5b26bdd02bc6bc1c9684db2593eec25)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractGetFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/contract/hdfs.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSetTimesTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractSetTimes.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractSetTimes.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/resources/contract/rawlocal.xml
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSContractGetFileStatus.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* hadoop-common-project/hadoop-common/src/test/resources/contract/localfs.xml


> Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
> avoid regression
> -
>
> Key: HADOOP-12258
> URL: https://issues.apache.org/jira/browse/HADOOP-12258
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
> HADOOP-12258.002.patch
>
>
> We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
> to avoid a regression.
> HADOOP-12045 adds NIO to support access times, but NIO throws 
> java.nio.file.NoSuchFileException instead of FileNotFoundException.
> Much Hadoop code depends on FileNotFoundException to decide whether a file 
> exists, for example {{FileContext.util().exists()}}:
> {code}
> public boolean exists(final Path f) throws AccessControlException,
>   UnsupportedFileSystemException, IOException {
>   try {
> FileStatus fs = FileContext.this.getFileStatus(f);
> assert fs != null;
> return true;
>   } catch (FileNotFoundException e) {
> return false;
>   }
> }
> {code}
> same for {{FileSystem#exists}}
> {code}
>   public boolean exists(Path f) throws IOException {
> try {
>   return getFileStatus(f) != null;
> } catch (FileNotFoundException e) {
>   return false;
> }
>   }
> {code}
> NoSuchFileException breaks these functions.
> Since {{exists}} is one of the most-used APIs in FileSystem, this issue is 
> very critical.
> Several test failures for TestDeletionService are caused by this issue:
> https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
> https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/





[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695121#comment-14695121
 ] 

Hudson commented on HADOOP-12295:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #286 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/286/])
HADOOP-12295. Improve NetworkTopology#InnerNode#remove logic. (yliu) (yliu: rev 
53bef9c5b98dee87d4ffaf35415bc38e2f876ed8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
> the parent node instead of looping over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in current code is:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block.
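The map-based lookup described above can be sketched as follows (field and method names are illustrative, not the actual NetworkTopology code); the map turns the O(n) scan of the children list into an O(1) lookup:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of replacing a linear scan with a map lookup,
// the same idea as using childrenMap in InnerNode#remove.
public class InnerNodeLookupDemo {
    private final List<String> children = new ArrayList<>();
    private final Map<String, Integer> childrenMap = new HashMap<>();

    void add(String name) {
        childrenMap.put(name, children.size());
        children.add(name);
    }

    // Before: O(n) scan of the children list.
    int indexOfByScan(String name) {
        for (int i = 0; i < children.size(); i++) {
            if (children.get(i).equals(name)) return i;
        }
        return -1;
    }

    // After: O(1) lookup via the map.
    int indexOfByMap(String name) {
        Integer i = childrenMap.get(name);
        return i == null ? -1 : i;
    }
}
```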





[jira] [Commented] (HADOOP-12313) Possible NPE in JvmPauseMonitor.stop()

2015-08-13 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695109#comment-14695109
 ] 

Gabor Liptak commented on HADOOP-12313:
---

I created HADOOP-12320 yesterday (which might be a duplicate, as per [~kasha]).

[~ste...@apache.org] Would you see this being worked on under this Jira or the new 
"improvement" one?


> Possible NPE in JvmPauseMonitor.stop()
> --
>
> Key: HADOOP-12313
> URL: https://issues.apache.org/jira/browse/HADOOP-12313
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Gabor Liptak
>Priority: Critical
> Attachments: HADOOP-12313.2.patch, HADOOP-12313.3.patch, 
> YARN-4035.1.patch
>
>
> It is observed that after YARN-4019 some tests are failing in 
> TestRMAdminService with null pointer exceptions in build [build failure 
> |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt]
> {noformat}
> Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
> Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
> testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
>   Time elapsed: 0.132 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824)
> testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
>   Time elapsed: 0.121 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867)
> {noformat}
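The NPE occurs because {{JvmPauseMonitor#stop}} dereferences its monitor thread even when {{start}} was never called. A minimal sketch of the defensive pattern (illustrative, not the actual JvmPauseMonitor code):

```java
// Illustrative sketch: guard stop() against being called before start(),
// which is what triggers the NullPointerException in the traces above.
public class PauseMonitorDemo {
    private Thread monitorThread;  // null until start() is called

    public synchronized void start() {
        monitorThread = new Thread(() -> { /* pause-detection loop */ });
        monitorThread.start();
    }

    public synchronized void stop() {
        if (monitorThread != null) {  // the missing null check
            monitorThread.interrupt();
            monitorThread = null;
        }
    }
}
```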





[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694999#comment-14694999
 ] 

Hudson commented on HADOOP-12295:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8293 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8293/])
HADOOP-12295. Improve NetworkTopology#InnerNode#remove logic. (yliu) (yliu: rev 
53bef9c5b98dee87d4ffaf35415bc38e2f876ed8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java


> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
> the parent node instead of looping over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in current code is:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block.





[jira] [Commented] (HADOOP-12319) S3AFastOutputStream has no ability to apply backpressure

2015-08-13 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694923#comment-14694923
 ] 

Thomas Demoor commented on HADOOP-12319:


Hi Colin.

Completely correct observation: I opened HADOOP-11684 for this and have a 
patch submitted there waiting for review. 

It would be fantastic if you could review and/or try it out and give some feedback.

Note that it relies on HADOOP-12269, which was merged into trunk last week, so 
you probably need to apply that patch as well and update your aws-sdk.

> S3AFastOutputStream has no ability to apply backpressure
> 
>
> Key: HADOOP-12319
> URL: https://issues.apache.org/jira/browse/HADOOP-12319
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Colin Marc
>Priority: Critical
>
> Currently, users of S3AFastOutputStream can control memory usage with a few 
> settings: {{fs.s3a.threads.core,max}}, which control the number of active 
> uploads (specifically as arguments to a {{ThreadPoolExecutor}}), and 
> {{fs.s3a.max.total.tasks}}, which controls the size of the feeding queue for 
> the {{ThreadPoolExecutor}}.
> However, a user can get an almost *guaranteed* crash if the throughput of the 
> writing job is higher than the total S3 throughput, because there is never 
> any backpressure or blocking on calls to {{write}}.
> If {{fs.s3a.max.total.tasks}} is set high (the default is 1000), then 
> {{write}} calls will continue to add data to the queue, which can eventually 
> OOM. But if the user tries to set it lower, then writes will fail when the 
> queue is full; the {{ThreadPoolExecutor}} will reject the part with 
> {{java.util.concurrent.RejectedExecutionException}}.
> Ideally, calls to {{write}} should *block, not fail* when the queue is full, 
> so as to apply backpressure on whatever the writing process is.
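One common way to get blocking submission from a {{ThreadPoolExecutor}} is a {{RejectedExecutionHandler}} that re-queues the rejected task with a blocking {{put()}}. This is only a sketch of that general technique under the assumptions above, not the HADOOP-11684 patch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: when the bounded queue is full, block the caller
// (the write path) instead of throwing RejectedExecutionException.
public class BlockingSubmitDemo {
    public static ThreadPoolExecutor newBlockingPool(int threads, int queueSize) {
        RejectedExecutionHandler block = (r, executor) -> {
            try {
                // Blocks until a slot frees up, applying backpressure.
                executor.getQueue().put(r);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(queueSize), block);
    }
}
```

Note the sketch ignores the shutdown race (a task re-queued while the pool is terminating may never run); a production version would have to handle that.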





[jira] [Updated] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-12295:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
> the parent node instead of looping over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in current code is:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block.





[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-13 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694910#comment-14694910
 ] 

Yi Liu commented on HADOOP-12295:
-

Thanks [~vinayrpet] for the review; committed to trunk and branch-2. I can 
address the comment if Chris has one, thanks.

> Improve NetworkTopology#InnerNode#remove logic
> --
>
> Key: HADOOP-12295
> URL: https://issues.apache.org/jira/browse/HADOOP-12295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-12295.001.patch
>
>
> In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
> the parent node instead of looping over the {{children}} list. This is more 
> efficient, since in most cases the parent node is not deleted.
> Another nit in current code is:
> {code}
>   String parent = n.getNetworkLocation();
>   String currentPath = getPath(this);
> {code}
> can be moved inside the {{\!isAncestor\(n\)}} block.


