[jira] [Updated] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-23 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11627:
--
Attachment: HADOOP-11627-010.patch

> Remove io.native.lib.available from trunk
> -
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.
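For context, the removed flag gated a runtime choice between native and pure-Java implementations. A minimal sketch of the behavior change, assuming hypothetical names (this is not Hadoop's actual API; the real logic lives in classes like NativeCodeLoader):

```java
// Illustrative sketch only: class and method names are hypothetical.
public class CodecSelector {

    /**
     * Old behavior: native code is used only when BOTH the config flag
     * (io.native.lib.available) is true AND the native library loaded.
     */
    static String selectOld(boolean flagEnabled, boolean nativeLoaded) {
        return (flagEnabled && nativeLoaded) ? "native" : "pure-java";
    }

    /**
     * Trunk behavior after HADOOP-11627: the flag is gone, so the native
     * implementation is used whenever the library is present.
     */
    static String selectNew(boolean nativeLoaded) {
        return nativeLoaded ? "native" : "pure-java";
    }

    public static void main(String[] args) {
        // With the flag removed, a loaded native library is always used.
        System.out.println(selectNew(true));   // native
        System.out.println(selectNew(false));  // pure-java
    }
}
```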



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-23 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508573#comment-14508573
 ] 

Brahma Reddy Battula commented on HADOOP-11627:
---

Attached a patch addressing the checkstyle comments. Kindly review.

> Remove io.native.lib.available from trunk
> -
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.





[jira] [Updated] (HADOOP-11864) JWTRedirectAuthenticationHandler breaks java8 javadocs

2015-04-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11864:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1, committed -thanks!

> JWTRedirectAuthenticationHandler breaks java8 javadocs
> --
>
> Key: HADOOP-11864
> URL: https://issues.apache.org/jira/browse/HADOOP-11864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Steve Loughran
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11864-0.patch
>
>
> Jenkins on Java8 is failing as {{JWTRedirectAuthenticationHandler}} has 
> {{}} tags in it, something javadoc on java8 considers illegal





[jira] [Commented] (HADOOP-11870) [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues

2015-04-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508633#comment-14508633
 ] 

Steve Loughran commented on HADOOP-11870:
-

LGTM, +1

> [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, 
> KeyAuthorizationKeyProvider Javadoc issues
> ---
>
> Key: HADOOP-11870
> URL: https://issues.apache.org/jira/browse/HADOOP-11870
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-11870.001.patch
>
>
> Jenkins on Java8 is failing due to a number of Javadoc violations that are 
> now considered ERRORs in the following classes:
> - AuthenticationFilter.java
> - CertificateUtil.java
> - RolloverSignerSecretProvider.java
> - SignerSecretProvider.java
> - ZKSignerSecretProvider.java
> - KeyAuthorizationKeyProvider.java





[jira] [Updated] (HADOOP-11843) Make setting up the build environment easier

2015-04-23 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HADOOP-11843:
--
Status: Open  (was: Patch Available)

> Make setting up the build environment easier
> 
>
> Key: HADOOP-11843
> URL: https://issues.apache.org/jira/browse/HADOOP-11843
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HADOOP-11843-2015-04-17-1612.patch, 
> HADOOP-11843-2015-04-17-2226.patch, HADOOP-11843-2015-04-17-2308.patch, 
> HADOOP-11843-2015-04-19-2206.patch, HADOOP-11843-2015-04-19-2232.patch, 
> HADOOP-11843-2015-04-22-1122.patch, HADOOP-11843-2015-04-23-1000.patch
>
>
> ( As discussed with [~aw] )
> In AVRO-1537 a Docker-based solution was created to set up all the tools for 
> doing a full build. This enables much easier reproduction of any issues and 
> getting up and running for new developers.
> This issue is to 'copy/port' that setup into the Hadoop project in 
> preparation for the bug squash.





[jira] [Updated] (HADOOP-11843) Make setting up the build environment easier

2015-04-23 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HADOOP-11843:
--
Attachment: HADOOP-11843-2015-04-23-1000.patch

Added:
- "First make sure Homebrew has been installed ( http://brew.sh/ )"
- ENV MAVEN_OPTS -Xms256m -Xmx512m


> Make setting up the build environment easier
> 
>
> Key: HADOOP-11843
> URL: https://issues.apache.org/jira/browse/HADOOP-11843
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HADOOP-11843-2015-04-17-1612.patch, 
> HADOOP-11843-2015-04-17-2226.patch, HADOOP-11843-2015-04-17-2308.patch, 
> HADOOP-11843-2015-04-19-2206.patch, HADOOP-11843-2015-04-19-2232.patch, 
> HADOOP-11843-2015-04-22-1122.patch, HADOOP-11843-2015-04-23-1000.patch
>
>
> ( As discussed with [~aw] )
> In AVRO-1537 a Docker-based solution was created to set up all the tools for 
> doing a full build. This enables much easier reproduction of any issues and 
> getting up and running for new developers.
> This issue is to 'copy/port' that setup into the Hadoop project in 
> preparation for the bug squash.





[jira] [Updated] (HADOOP-11843) Make setting up the build environment easier

2015-04-23 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HADOOP-11843:
--
Status: Patch Available  (was: Open)

> Make setting up the build environment easier
> 
>
> Key: HADOOP-11843
> URL: https://issues.apache.org/jira/browse/HADOOP-11843
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HADOOP-11843-2015-04-17-1612.patch, 
> HADOOP-11843-2015-04-17-2226.patch, HADOOP-11843-2015-04-17-2308.patch, 
> HADOOP-11843-2015-04-19-2206.patch, HADOOP-11843-2015-04-19-2232.patch, 
> HADOOP-11843-2015-04-22-1122.patch, HADOOP-11843-2015-04-23-1000.patch
>
>
> ( As discussed with [~aw] )
> In AVRO-1537 a Docker-based solution was created to set up all the tools for 
> doing a full build. This enables much easier reproduction of any issues and 
> getting up and running for new developers.
> This issue is to 'copy/port' that setup into the Hadoop project in 
> preparation for the bug squash.





[jira] [Updated] (HADOOP-11843) Make setting up the build environment easier

2015-04-23 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HADOOP-11843:
--
Release Note: Includes a Docker-based solution for setting up a build 
environment with minimal effort.

> Make setting up the build environment easier
> 
>
> Key: HADOOP-11843
> URL: https://issues.apache.org/jira/browse/HADOOP-11843
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HADOOP-11843-2015-04-17-1612.patch, 
> HADOOP-11843-2015-04-17-2226.patch, HADOOP-11843-2015-04-17-2308.patch, 
> HADOOP-11843-2015-04-19-2206.patch, HADOOP-11843-2015-04-19-2232.patch, 
> HADOOP-11843-2015-04-22-1122.patch, HADOOP-11843-2015-04-23-1000.patch
>
>
> ( As discussed with [~aw] )
> In AVRO-1537 a Docker-based solution was created to set up all the tools for 
> doing a full build. This enables much easier reproduction of any issues and 
> getting up and running for new developers.
> This issue is to 'copy/port' that setup into the Hadoop project in 
> preparation for the bug squash.





[jira] [Commented] (HADOOP-11864) JWTRedirectAuthenticationHandler breaks java8 javadocs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508669#comment-14508669
 ] 

Hudson commented on HADOOP-11864:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7644 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7644/])
HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs. (Larry 
McCay via stevel) (stevel: rev 08d4386162a878e88ac8f3d8db246e17c2943dad)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> JWTRedirectAuthenticationHandler breaks java8 javadocs
> --
>
> Key: HADOOP-11864
> URL: https://issues.apache.org/jira/browse/HADOOP-11864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Steve Loughran
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11864-0.patch
>
>
> Jenkins on Java8 is failing as {{JWTRedirectAuthenticationHandler}} has 
> {{}} tags in it, something javadoc on java8 considers illegal





[jira] [Commented] (HADOOP-11843) Make setting up the build environment easier

2015-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508675#comment-14508675
 ] 

Hadoop QA commented on HADOOP-11843:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:blue}0{color} | shellcheck |   0m 15s | Shellcheck was not available. |
| | |   0m 23s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12727559/HADOOP-11843-2015-04-23-1000.patch
 |
| Optional Tests | shellcheck |
| git revision | trunk / d9bcf99 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6165/console |


This message was automatically generated.

> Make setting up the build environment easier
> 
>
> Key: HADOOP-11843
> URL: https://issues.apache.org/jira/browse/HADOOP-11843
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HADOOP-11843-2015-04-17-1612.patch, 
> HADOOP-11843-2015-04-17-2226.patch, HADOOP-11843-2015-04-17-2308.patch, 
> HADOOP-11843-2015-04-19-2206.patch, HADOOP-11843-2015-04-19-2232.patch, 
> HADOOP-11843-2015-04-22-1122.patch, HADOOP-11843-2015-04-23-1000.patch
>
>
> ( As discussed with [~aw] )
> In AVRO-1537 a docker based solution was created to setup all the tools for 
> doing a full build. This enables much easier reproduction of any issues and 
> getting up and running for new developers.
> This issue is to 'copy/port' that setup into the hadoop project in 
> preparation for the bug squash.





[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2015-04-23 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508733#comment-14508733
 ] 

Kai Zheng commented on HADOOP-11828:


Jack, good work. Thanks!
* Please rebase against the latest branch; your codebase is rather old.
* Please remove the code for the other modes for now, even in the tests.

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction. This JIRA aims to introduce Hitchhiker to the 
> HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 





[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508776#comment-14508776
 ] 

Hadoop QA commented on HADOOP-11627:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 32s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 58s | Site still builds. |
| {color:green}+1{color} | checkstyle |   5m 21s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 18s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  23m 17s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | mapreduce tests | 106m  5s | Tests passed in 
hadoop-mapreduce-client-jobclient. |
| | | 177m  4s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12727550/HADOOP-11627-010.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 18eb5e7 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6164/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6164/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-mapreduce-client-jobclient test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6164/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6164/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6164/console |


This message was automatically generated.

> Remove io.native.lib.available from trunk
> -
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.





[jira] [Updated] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-23 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11627:
--
Attachment: HADOOP-11627-011.patch

Attached a patch to address the whitespace warning (even though it was not introduced by this patch).

> Remove io.native.lib.available from trunk
> -
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627-011.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.





[jira] [Commented] (HADOOP-11854) Fix Typos in all the projects

2015-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508812#comment-14508812
 ] 

Hadoop QA commented on HADOOP-11854:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 63 new or modified test files. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 10  line(s) that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   7m 50s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |  15m  8s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  22m 57s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | mapreduce tests |   9m 40s | Tests failed in 
hadoop-mapreduce-client-app. |
| {color:green}+1{color} | mapreduce tests |   1m 34s | Tests passed in 
hadoop-mapreduce-client-core. |
| {color:green}+1{color} | mapreduce tests |   5m 52s | Tests passed in 
hadoop-mapreduce-client-hs. |
| {color:green}+1{color} | mapreduce tests | 103m 35s | Tests passed in 
hadoop-mapreduce-client-jobclient. |
| {color:green}+1{color} | tools/hadoop tests |   1m 12s | Tests passed in 
hadoop-azure. |
| {color:green}+1{color} | tools/hadoop tests |   0m 23s | Tests passed in 
hadoop-rumen. |
| {color:green}+1{color} | yarn tests |   7m  9s | Tests passed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   2m  1s | Tests passed in 
hadoop-yarn-common. |
| {color:red}-1{color} | yarn tests |   5m 55s | Tests failed in 
hadoop-yarn-server-nodemanager. |
| {color:red}-1{color} | yarn tests |  51m 40s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| {color:red}-1{color} | hdfs tests | 162m 33s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   3m 39s | Tests passed in 
hadoop-hdfs-httpfs. |
| {color:green}+1{color} | hdfs tests |   1m 45s | Tests passed in 
hadoop-hdfs-nfs. |
| | | 437m  7s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.mapreduce.jobhistory.TestEvents |
|   | hadoop.mapreduce.v2.app.webapp.TestAppController |
|   | hadoop.mapreduce.v2.app.webapp.TestAMWebServicesTasks |
|   | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
 |
|   | hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12726921/HADOOP-11854.suggestions.001.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a100be6 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-mapreduce-client-core test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/testrun_hadoop-mapreduce-client-core.txt
 |
| hadoop-mapreduce-client-hs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/testrun_hadoop-mapreduce-client-hs.txt
 |
| hadoop-mapreduce-client-jobclient test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
 |
| hadoop-azure test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/testrun_hadoop-azure.txt
 |
| hadoop-rumen test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/testrun_hadoop-rumen.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6160/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit

[jira] [Commented] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508862#comment-14508862
 ] 

Hudson commented on HADOOP-11859:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2104 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2104/])
HADOOP-11859. PseudoAuthenticationHandler fails with httpcomponents v4.4. 
Contributed by Eugene Koifman. (jitendra: rev 
1f4767c7f2d1fdd23954c16e903acd2dca78a1e1)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java


> PseudoAuthenticationHandler fails with httpcomponents v4.4
> --
>
> Key: HADOOP-11859
> URL: https://issues.apache.org/jira/browse/HADOOP-11859
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.8.0
>
> Attachments: HADOOP-11859.patch
>
>
> This shows in the context of WebHCat and Hive (which recently moved to 
> httpcomponents:httpclient:4.4) but could happen in other places.
> URLEncodedUtils.parse(String, Charset) is called from 
> PseudoAuthenticationHandler.getUserName() with the first argument produced by 
> HttpServletRequest.getQueryString().
> The latter returns null if there is no query string in the URL.
> In httpcomponents:httpclient:4.2.5, parse() gracefully handles a null first 
> argument, but in 4.4 it throws a NullPointerException.





[jira] [Commented] (HADOOP-11850) Typos in hadoop-common java docs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508871#comment-14508871
 ] 

Hudson commented on HADOOP-11850:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2104 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2104/])
HADOOP-11850: Typos in hadoop-common java docs. Contributed by Surendra Singh 
Lilhore. (jghoman: rev e54a3e1f4f3ea4dbba14f3fab0c395a235763c54)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueueMXBean.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/BoundedRangeFileInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java


> Typos in hadoop-common java docs
> 
>
> Key: HADOOP-11850
> URL: https://issues.apache.org/jira/browse/HADOOP-11850
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.6.0
>Reporter: surendra singh lilhore
>Assignee: surendra singh lilhore
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-11850.patch, HADOOP-11850_1.patch, 
> HADOOP-11850_2.patch
>
>
> This JIRA will fix the typos in the hadoop-common project





[jira] [Commented] (HADOOP-11848) Incorrect arguments to sizeof in DomainSocket.c

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508868#comment-14508868
 ] 

Hudson commented on HADOOP-11848:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2104 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2104/])
HADOOP-11848. Incorrect arguments to sizeof in DomainSocket.c (Malcolm Kavalsky 
via Colin P. McCabe) (cmccabe: rev a3b1d8c90288a6237089a98d4a81c25f44aedb2c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c


> Incorrect arguments to sizeof in DomainSocket.c
> ---
>
> Key: HADOOP-11848
> URL: https://issues.apache.org/jira/browse/HADOOP-11848
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native
> Fix For: 2.8.0
>
> Attachments: HADOOP-11848.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The length passed to memset via sizeof should be the size of the structure 
> itself, not the size of its address.
> DomainSocket.c line 156
> Replace current:
> memset(&addr, 0, sizeof(&addr));
> With:
> memset(&addr, 0, sizeof(addr));





[jira] [Commented] (HADOOP-11861) test-patch.sh rewrite addendum patch

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508870#comment-14508870
 ] 

Hudson commented on HADOOP-11861:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2104 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2104/])
HADOOP-11861. test-patch.sh rewrite addendum patch. Contributed by Allen 
Wittenauer. (cnauroth: rev 18eb5e79345295b2259b566c154375ad2a6216a1)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.d/shellcheck.sh


> test-patch.sh rewrite addendum patch
> 
>
> Key: HADOOP-11861
> URL: https://issues.apache.org/jira/browse/HADOOP-11861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11861-00.patch, HADOOP-11861-01.patch, 
> HADOOP-11861-02.patch, HADOOP-11861-04.patch
>
>
> if you specify "--build-native=false"  like 
> {code}
> ./dev-support/test-patch.sh  --build-native=false 
> ~/workspaces/patches/hdfs-8211.001.patch 
> {code}
> mvn fails with an invalid lifecycle error. 
> Here are the steps to repro:
> 1) Run any patch with the --build-native=false option.
> 2) Open up  /tmp/hadoop-test-patch//patchJavacWarnings.txt to see 
> the failure reason.





[jira] [Commented] (HADOOP-11864) JWTRedirectAuthenticationHandler breaks java8 javadocs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508866#comment-14508866
 ] 

Hudson commented on HADOOP-11864:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2104 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2104/])
HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs. (Larry 
McCay via stevel) (stevel: rev 08d4386162a878e88ac8f3d8db246e17c2943dad)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> JWTRedirectAuthenticationHandler breaks java8 javadocs
> --
>
> Key: HADOOP-11864
> URL: https://issues.apache.org/jira/browse/HADOOP-11864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Steve Loughran
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11864-0.patch
>
>
> Jenkins on Java8 is failing as {{JWTRedirectAuthenticationHandler}} has 
> {{}} tags in it, something javadoc on java8 considers illegal





[jira] [Commented] (HADOOP-11868) Invalid user logins trigger large backtraces in server log

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508865#comment-14508865
 ] 

Hudson commented on HADOOP-11868:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2104 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2104/])
HADOOP-11868. Invalid user logins trigger large backtraces in server log. 
Contributed by Chang Li (jlowe: rev 0ebe84d30af2046775884c9fb1e054da31582657)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Invalid user logins trigger large backtraces in server log
> --
>
> Key: HADOOP-11868
> URL: https://issues.apache.org/jira/browse/HADOOP-11868
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.7.1
>
> Attachments: YARN-3520.patch
>
>
> {code}
> WARN sso.CookieValidatorHelpers: Cookie has expired by 25364187 msec
> WARN server.AuthenticationFilter: Authentication exception: Invalid Cookie
> 166 org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid Bouncer Cookie
> 167 at 
> KerberosAuthenticationHandler.bouncerAuthenticate(KerberosAuthenticationHandler.java:94)
> 168 at 
> AuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:82)
> 169 at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> 170 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 171 at 
> org.apache.hadoop.yarn.server.timeline.webapp.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
> 172 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 173 at 
> org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
> 174 at GzipFilter.doFilter(GzipFilter.java:188)
> 175 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 176 at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
> 177 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 178 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 179 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 180 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 181 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 182 at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> 183 at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> 184 at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> 185 at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> 186 at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> 187 at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> 188 at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> 189 at org.mortbay.jetty.Server.handle(Server.java:326)
> 190 at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> 191 at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> 192 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> 193 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> 194 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> 195 at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> 196 at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>  WARN sso.CookieValidatorHelpers: Cookie has expired by 25373197 msec
> {code}





[jira] [Commented] (HADOOP-11868) Invalid user logins trigger large backtraces in server log

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508877#comment-14508877
 ] 

Hudson commented on HADOOP-11868:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/163/])
HADOOP-11868. Invalid user logins trigger large backtraces in server log. 
Contributed by Chang Li (jlowe: rev 0ebe84d30af2046775884c9fb1e054da31582657)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java


> Invalid user logins trigger large backtraces in server log
> --
>
> Key: HADOOP-11868
> URL: https://issues.apache.org/jira/browse/HADOOP-11868
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.7.1
>
> Attachments: YARN-3520.patch
>
>
> {code}
> WARN sso.CookieValidatorHelpers: Cookie has expired by 25364187 msec
> WARN server.AuthenticationFilter: Authentication exception: Invalid Cookie
> 166 org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid Bouncer Cookie
> 167 at 
> KerberosAuthenticationHandler.bouncerAuthenticate(KerberosAuthenticationHandler.java:94)
> 168 at 
> AuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:82)
> 169 at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> 170 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 171 at 
> org.apache.hadoop.yarn.server.timeline.webapp.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
> 172 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 173 at 
> org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
> 174 at GzipFilter.doFilter(GzipFilter.java:188)
> 175 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 176 at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
> 177 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 178 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 179 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 180 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 181 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 182 at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> 183 at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> 184 at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> 185 at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> 186 at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> 187 at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> 188 at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> 189 at org.mortbay.jetty.Server.handle(Server.java:326)
> 190 at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> 191 at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> 192 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> 193 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> 194 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> 195 at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> 196 at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>  WARN sso.CookieValidatorHelpers: Cookie has expired by 25373197 msec
> {code}





[jira] [Commented] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508874#comment-14508874
 ] 

Hudson commented on HADOOP-11859:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/163/])
HADOOP-11859. PseudoAuthenticationHandler fails with httpcomponents v4.4. 
Contributed by Eugene Koifman. (jitendra: rev 
1f4767c7f2d1fdd23954c16e903acd2dca78a1e1)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java


> PseudoAuthenticationHandler fails with httpcomponents v4.4
> --
>
> Key: HADOOP-11859
> URL: https://issues.apache.org/jira/browse/HADOOP-11859
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.8.0
>
> Attachments: HADOOP-11859.patch
>
>
> This shows up in the context of WebHCat and Hive (which recently moved to 
> httpcomponents:httpclient:4.4) but could happen in other places.
> URLEncodedUtils.parse(String, Charset) is called from 
> PseudoAuthenticationHandler.getUserName() with the first argument produced by 
> HttpServletRequest.getQueryString().
> The latter returns null if there is no query string in the URL.
> In httpcomponents:httpclient:4.2.5, parse() gracefully handles a null first 
> argument, but in 4.4 it throws an NPE.





[jira] [Commented] (HADOOP-11861) test-patch.sh rewrite addendum patch

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508882#comment-14508882
 ] 

Hudson commented on HADOOP-11861:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/163/])
HADOOP-11861. test-patch.sh rewrite addendum patch. Contributed by Allen 
Wittenauer. (cnauroth: rev 18eb5e79345295b2259b566c154375ad2a6216a1)
* dev-support/test-patch.sh
* dev-support/test-patch.d/shellcheck.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh rewrite addendum patch
> 
>
> Key: HADOOP-11861
> URL: https://issues.apache.org/jira/browse/HADOOP-11861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11861-00.patch, HADOOP-11861-01.patch, 
> HADOOP-11861-02.patch, HADOOP-11861-04.patch
>
>
> If you specify "--build-native=false", like 
> {code}
> ./dev-support/test-patch.sh  --build-native=false 
> ~/workspaces/patches/hdfs-8211.001.patch 
> {code}
> mvn fails with an invalid lifecycle error. 
> Steps to reproduce:
> 1) Run any patch with the --build-native=false option.
> 2) Open up  /tmp/hadoop-test-patch//patchJavacWarnings.txt to see 
> the failure reason.





[jira] [Commented] (HADOOP-11864) JWTRedirectAuthenticationHandler breaks java8 javadocs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508878#comment-14508878
 ] 

Hudson commented on HADOOP-11864:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/163/])
HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs. (Larry 
McCay via stevel) (stevel: rev 08d4386162a878e88ac8f3d8db246e17c2943dad)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> JWTRedirectAuthenticationHandler breaks java8 javadocs
> --
>
> Key: HADOOP-11864
> URL: https://issues.apache.org/jira/browse/HADOOP-11864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Steve Loughran
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11864-0.patch
>
>
> Jenkins on Java8 is failing as {{JWTRedirectAuthenticationHandler}} has 
> {{}} tags in it, something javadoc on java8 considers illegal





[jira] [Commented] (HADOOP-11850) Typos in hadoop-common java docs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508883#comment-14508883
 ] 

Hudson commented on HADOOP-11850:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/163/])
HADOOP-11850: Typos in hadoop-common java docs. Contributed by Surendra Singh 
Lilhore. (jghoman: rev e54a3e1f4f3ea4dbba14f3fab0c395a235763c54)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/BoundedRangeFileInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueueMXBean.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java


> Typos in hadoop-common java docs
> 
>
> Key: HADOOP-11850
> URL: https://issues.apache.org/jira/browse/HADOOP-11850
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.6.0
>Reporter: surendra singh lilhore
>Assignee: surendra singh lilhore
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-11850.patch, HADOOP-11850_1.patch, 
> HADOOP-11850_2.patch
>
>
> This jira fixes typos in the hadoop-common project





[jira] [Commented] (HADOOP-11848) Incorrect arguments to sizeof in DomainSocket.c

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508880#comment-14508880
 ] 

Hudson commented on HADOOP-11848:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/163/])
HADOOP-11848. Incorrect arguments to sizeof in DomainSocket.c (Malcolm Kavalsky 
via Colin P. McCabe) (cmccabe: rev a3b1d8c90288a6237089a98d4a81c25f44aedb2c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c


> Incorrect arguments to sizeof in DomainSocket.c
> ---
>
> Key: HADOOP-11848
> URL: https://issues.apache.org/jira/browse/HADOOP-11848
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native
> Fix For: 2.8.0
>
> Attachments: HADOOP-11848.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The length of the buffer to be zeroed via sizeof should use the structure 
> itself, not the address of the structure.
> DomainSocket.c line 156
> Replace current:
> memset(&addr, 0, sizeof(&addr));
> With:
> memset(&addr, 0, sizeof(addr));





[jira] [Commented] (HADOOP-11868) Invalid user logins trigger large backtraces in server log

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508891#comment-14508891
 ] 

Hudson commented on HADOOP-11868:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #172 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/172/])
HADOOP-11868. Invalid user logins trigger large backtraces in server log. 
Contributed by Chang Li (jlowe: rev 0ebe84d30af2046775884c9fb1e054da31582657)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java


> Invalid user logins trigger large backtraces in server log
> --
>
> Key: HADOOP-11868
> URL: https://issues.apache.org/jira/browse/HADOOP-11868
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.7.1
>
> Attachments: YARN-3520.patch
>
>
> {code}
> WARN sso.CookieValidatorHelpers: Cookie has expired by 25364187 msec
> WARN server.AuthenticationFilter: Authentication exception: Invalid Cookie
> 166 org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid Bouncer Cookie
> 167 at 
> KerberosAuthenticationHandler.bouncerAuthenticate(KerberosAuthenticationHandler.java:94)
> 168 at 
> AuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:82)
> 169 at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> 170 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 171 at 
> org.apache.hadoop.yarn.server.timeline.webapp.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
> 172 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 173 at 
> org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
> 174 at GzipFilter.doFilter(GzipFilter.java:188)
> 175 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 176 at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
> 177 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 178 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 179 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 180 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 181 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 182 at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> 183 at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> 184 at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> 185 at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> 186 at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> 187 at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> 188 at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> 189 at org.mortbay.jetty.Server.handle(Server.java:326)
> 190 at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> 191 at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> 192 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> 193 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> 194 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> 195 at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> 196 at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>  WARN sso.CookieValidatorHelpers: Cookie has expired by 25373197 msec
> {code}





[jira] [Commented] (HADOOP-11850) Typos in hadoop-common java docs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508897#comment-14508897
 ] 

Hudson commented on HADOOP-11850:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #172 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/172/])
HADOOP-11850: Typos in hadoop-common java docs. Contributed by Surendra Singh 
Lilhore. (jghoman: rev e54a3e1f4f3ea4dbba14f3fab0c395a235763c54)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueueMXBean.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/BoundedRangeFileInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java


> Typos in hadoop-common java docs
> 
>
> Key: HADOOP-11850
> URL: https://issues.apache.org/jira/browse/HADOOP-11850
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.6.0
>Reporter: surendra singh lilhore
>Assignee: surendra singh lilhore
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-11850.patch, HADOOP-11850_1.patch, 
> HADOOP-11850_2.patch
>
>
> This jira fixes typos in the hadoop-common project





[jira] [Commented] (HADOOP-11848) Incorrect arguments to sizeof in DomainSocket.c

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508894#comment-14508894
 ] 

Hudson commented on HADOOP-11848:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #172 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/172/])
HADOOP-11848. Incorrect arguments to sizeof in DomainSocket.c (Malcolm Kavalsky 
via Colin P. McCabe) (cmccabe: rev a3b1d8c90288a6237089a98d4a81c25f44aedb2c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c


> Incorrect arguments to sizeof in DomainSocket.c
> ---
>
> Key: HADOOP-11848
> URL: https://issues.apache.org/jira/browse/HADOOP-11848
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native
> Fix For: 2.8.0
>
> Attachments: HADOOP-11848.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The length of the buffer to be zeroed via sizeof should use the structure 
> itself, not the address of the structure.
> DomainSocket.c line 156
> Replace current:
> memset(&addr, 0, sizeof(&addr));
> With:
> memset(&addr, 0, sizeof(addr));





[jira] [Commented] (HADOOP-11864) JWTRedirectAuthenticationHandler breaks java8 javadocs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508892#comment-14508892
 ] 

Hudson commented on HADOOP-11864:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #172 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/172/])
HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs. (Larry 
McCay via stevel) (stevel: rev 08d4386162a878e88ac8f3d8db246e17c2943dad)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> JWTRedirectAuthenticationHandler breaks java8 javadocs
> --
>
> Key: HADOOP-11864
> URL: https://issues.apache.org/jira/browse/HADOOP-11864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Steve Loughran
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11864-0.patch
>
>
> Jenkins on Java8 is failing as {{JWTRedirectAuthenticationHandler}} has 
> {{}} tags in it, something javadoc on java8 considers illegal





[jira] [Commented] (HADOOP-11861) test-patch.sh rewrite addendum patch

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508896#comment-14508896
 ] 

Hudson commented on HADOOP-11861:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #172 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/172/])
HADOOP-11861. test-patch.sh rewrite addendum patch. Contributed by Allen 
Wittenauer. (cnauroth: rev 18eb5e79345295b2259b566c154375ad2a6216a1)
* dev-support/test-patch.d/shellcheck.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh rewrite addendum patch
> 
>
> Key: HADOOP-11861
> URL: https://issues.apache.org/jira/browse/HADOOP-11861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11861-00.patch, HADOOP-11861-01.patch, 
> HADOOP-11861-02.patch, HADOOP-11861-04.patch
>
>
> If you specify "--build-native=false", like 
> {code}
> ./dev-support/test-patch.sh  --build-native=false 
> ~/workspaces/patches/hdfs-8211.001.patch 
> {code}
> mvn fails with an invalid lifecycle error. 
> Steps to reproduce:
> 1) Run any patch with the --build-native=false option.
> 2) Open up  /tmp/hadoop-test-patch//patchJavacWarnings.txt to see 
> the failure reason.





[jira] [Commented] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1450#comment-1450
 ] 

Hudson commented on HADOOP-11859:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #172 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/172/])
HADOOP-11859. PseudoAuthenticationHandler fails with httpcomponents v4.4. 
Contributed by Eugene Koifman. (jitendra: rev 
1f4767c7f2d1fdd23954c16e903acd2dca78a1e1)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java


> PseudoAuthenticationHandler fails with httpcomponents v4.4
> --
>
> Key: HADOOP-11859
> URL: https://issues.apache.org/jira/browse/HADOOP-11859
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.8.0
>
> Attachments: HADOOP-11859.patch
>
>
> This shows in the context of WebHCat and Hive (which recently moved to 
> httpcomponents:httpclient:4.4) but could happen in other places.
> URLEncodedUtils.parse(String, Charset) is called from 
> PseudoAuthenticationHandler.getUserName() with the 1st argument produced by 
> HttpServletRequest.getQueryString().
> The latter returns NULL if there is no query string in the URL.
> In httpcomponents:httpclient:4.2.5, parse() gracefully handles the first 
> argument being NULL, but in 4.4 it NPEs.
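The null-query hazard described above can be sketched in plain Java. This is a minimal illustration, not the actual Hadoop patch: `parseQuery` is a hypothetical stand-in for a parser (like `URLEncodedUtils.parse` in httpclient 4.4) that throws NPE on null input, and `safeParseQuery` shows the shape of the guard a caller needs when the input comes from `HttpServletRequest.getQueryString()`.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryGuard {
    // Hypothetical parser mirroring the hazard: it NPEs when the query is null,
    // just as URLEncodedUtils.parse does in httpclient 4.4.
    static Map<String, String> parseQuery(String query) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String pair : query.split("&")) { // NullPointerException if query == null
            String[] kv = pair.split("=", 2);
            out.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return out;
    }

    // Defensive wrapper: treat a null query string (no "?..." in the URL)
    // as "no parameters" instead of letting the parser NPE.
    static Map<String, String> safeParseQuery(String query) {
        if (query == null) {
            return Collections.emptyMap();
        }
        return parseQuery(query);
    }

    public static void main(String[] args) {
        System.out.println(safeParseQuery(null).size());
        System.out.println(safeParseQuery("user.name=alice").get("user.name"));
    }
}
```

The guard belongs in the caller because `getQueryString()` legitimately returns null for URLs without a query part; relying on the parser to tolerate null is what broke between httpclient versions.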



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508921#comment-14508921
 ] 

Hudson commented on HADOOP-11859:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #906 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/906/])
HADOOP-11859. PseudoAuthenticationHandler fails with httpcomponents v4.4. 
Contributed by Eugene Koifman. (jitendra: rev 
1f4767c7f2d1fdd23954c16e903acd2dca78a1e1)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java


> PseudoAuthenticationHandler fails with httpcomponents v4.4
> --
>
> Key: HADOOP-11859
> URL: https://issues.apache.org/jira/browse/HADOOP-11859
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.8.0
>
> Attachments: HADOOP-11859.patch
>
>
> This shows in the context of WebHCat and Hive (which recently moved to 
> httpcomponents:httpclient:4.4) but could happen in other places.
> URLEncodedUtils.parse(String, Charset) is called from 
> PseudoAuthenticationHandler.getUserName() with the 1st argument produced by 
> HttpServletRequest.getQueryString().
> The latter returns NULL if there is no query string in the URL.
> In httpcomponents:httpclient:4.2.5, parse() gracefully handles the first 
> argument being NULL, but in 4.4 it NPEs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11861) test-patch.sh rewrite addendum patch

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508929#comment-14508929
 ] 

Hudson commented on HADOOP-11861:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #906 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/906/])
HADOOP-11861. test-patch.sh rewrite addendum patch. Contributed by Allen 
Wittenauer. (cnauroth: rev 18eb5e79345295b2259b566c154375ad2a6216a1)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh
* dev-support/test-patch.d/shellcheck.sh


> test-patch.sh rewrite addendum patch
> 
>
> Key: HADOOP-11861
> URL: https://issues.apache.org/jira/browse/HADOOP-11861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11861-00.patch, HADOOP-11861-01.patch, 
> HADOOP-11861-02.patch, HADOOP-11861-04.patch
>
>
> if you specify "--build-native=false"  like 
> {code}
> ./dev-support/test-patch.sh  --build-native=false 
> ~/workspaces/patches/hdfs-8211.001.patch 
> {code}
> mvn fails with invalid lifecycle error. 
> Here are the steps to repro :
> 1) run any patch with the --build-native=false option 
> 2) Open up  /tmp/hadoop-test-patch//patchJavacWarnings.txt to see 
> the failure reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11868) Invalid user logins trigger large backtraces in server log

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508924#comment-14508924
 ] 

Hudson commented on HADOOP-11868:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #906 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/906/])
HADOOP-11868. Invalid user logins trigger large backtraces in server log. 
Contributed by Chang Li (jlowe: rev 0ebe84d30af2046775884c9fb1e054da31582657)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Invalid user logins trigger large backtraces in server log
> --
>
> Key: HADOOP-11868
> URL: https://issues.apache.org/jira/browse/HADOOP-11868
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.7.1
>
> Attachments: YARN-3520.patch
>
>
> {code}
> WARN sso.CookieValidatorHelpers: Cookie has expired by 25364187 msec
> WARN server.AuthenticationFilter: Authentication exception: Invalid Cookie
> 166 org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid Bouncer Cookie
> 167 at 
> KerberosAuthenticationHandler.bouncerAuthenticate(KerberosAuthenticationHandler.java:94)
> 168 at 
> AuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:82)
> 169 at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> 170 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 171 at 
> org.apache.hadoop.yarn.server.timeline.webapp.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
> 172 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 173 at 
> org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
> 174 at GzipFilter.doFilter(GzipFilter.java:188)
> 175 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 176 at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
> 177 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 178 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 179 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 180 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 181 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 182 at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> 183 at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> 184 at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> 185 at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> 186 at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> 187 at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> 188 at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> 189 at org.mortbay.jetty.Server.handle(Server.java:326)
> 190 at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> 191 at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> 192 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> 193 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> 194 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> 195 at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> 196 at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>  WARN sso.CookieValidatorHelpers: Cookie has expired by 25373197 msec
> {code}
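The backtrace above shows why a single bad cookie floods the server log. A common remedy for this class of problem, sketched below under the assumption that this is the general shape of the fix rather than the literal patch (the actual change is in AuthenticationFilter and uses the project's own logging API), is to log only the one-line exception message at WARN and attach the full stack trace only when debug-level logging is enabled.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class CompactAuthLogging {
    private static final Logger LOG = Logger.getLogger("server.AuthenticationFilter");

    // Log the one-line message at WARN; include the full backtrace only
    // when fine-grained (debug) logging is on. Returns the message so the
    // behavior is easy to verify.
    static String logAuthFailure(Exception ex) {
        String msg = "Authentication exception: " + ex.getMessage();
        if (LOG.isLoggable(Level.FINE)) {
            LOG.log(Level.WARNING, msg, ex); // full stack trace, debug runs only
        } else {
            LOG.warning(msg);                // single line in normal operation
        }
        return msg;
    }

    public static void main(String[] args) {
        logAuthFailure(new Exception("Invalid Cookie"));
    }
}
```

With this pattern an invalid login produces one WARN line in production, while operators chasing a problem can still flip the logger to debug and get the complete trace.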



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11864) JWTRedirectAuthenticationHandler breaks java8 javadocs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508925#comment-14508925
 ] 

Hudson commented on HADOOP-11864:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #906 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/906/])
HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs. (Larry 
McCay via stevel) (stevel: rev 08d4386162a878e88ac8f3d8db246e17c2943dad)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> JWTRedirectAuthenticationHandler breaks java8 javadocs
> --
>
> Key: HADOOP-11864
> URL: https://issues.apache.org/jira/browse/HADOOP-11864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Steve Loughran
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11864-0.patch
>
>
> Jenkins on Java8 is failing as {{JWTRedirectAuthenticationHandler}} has 
> {{}} tags in it, something javadoc on java8 considers illegal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11848) Incorrect arguments to sizeof in DomainSocket.c

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508927#comment-14508927
 ] 

Hudson commented on HADOOP-11848:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #906 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/906/])
HADOOP-11848. Incorrect arguments to sizeof in DomainSocket.c (Malcolm Kavalsky 
via Colin P. McCabe) (cmccabe: rev a3b1d8c90288a6237089a98d4a81c25f44aedb2c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c


> Incorrect arguments to sizeof in DomainSocket.c
> ---
>
> Key: HADOOP-11848
> URL: https://issues.apache.org/jira/browse/HADOOP-11848
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native
> Fix For: 2.8.0
>
> Attachments: HADOOP-11848.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The length of the buffer to be zeroed using sizeof should use the structure 
> itself, not the address of the structure.
> DomainSocket.c line 156
> Replace current:
> memset(&addr, 0, sizeof(&addr));
> With:
> memset(&addr, 0, sizeof(addr));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11850) Typos in hadoop-common java docs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508930#comment-14508930
 ] 

Hudson commented on HADOOP-11850:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #906 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/906/])
HADOOP-11850: Typos in hadoop-common java docs. Contributed by Surendra Singh 
Lilhore. (jghoman: rev e54a3e1f4f3ea4dbba14f3fab0c395a235763c54)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/BoundedRangeFileInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueueMXBean.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java


> Typos in hadoop-common java docs
> 
>
> Key: HADOOP-11850
> URL: https://issues.apache.org/jira/browse/HADOOP-11850
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.6.0
>Reporter: surendra singh lilhore
>Assignee: surendra singh lilhore
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-11850.patch, HADOOP-11850_1.patch, 
> HADOOP-11850_2.patch
>
>
> This jira fixes the typos in the hadoop-common project



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509019#comment-14509019
 ] 

Hudson commented on HADOOP-11859:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/173/])
HADOOP-11859. PseudoAuthenticationHandler fails with httpcomponents v4.4. 
Contributed by Eugene Koifman. (jitendra: rev 
1f4767c7f2d1fdd23954c16e903acd2dca78a1e1)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> PseudoAuthenticationHandler fails with httpcomponents v4.4
> --
>
> Key: HADOOP-11859
> URL: https://issues.apache.org/jira/browse/HADOOP-11859
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.8.0
>
> Attachments: HADOOP-11859.patch
>
>
> This shows in the context of WebHCat and Hive (which recently moved to 
> httpcomponents:httpclient:4.4) but could happen in other places.
> URLEncodedUtils.parse(String, Charset) is called from 
> PseudoAuthenticationHandler.getUserName() with the 1st argument produced by 
> HttpServletRequest.getQueryString().
> The latter returns NULL if there is no query string in the URL.
> In httpcomponents:httpclient:4.2.5, parse() gracefully handles the first 
> argument being NULL, but in 4.4 it NPEs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11864) JWTRedirectAuthenticationHandler breaks java8 javadocs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509023#comment-14509023
 ] 

Hudson commented on HADOOP-11864:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/173/])
HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs. (Larry 
McCay via stevel) (stevel: rev 08d4386162a878e88ac8f3d8db246e17c2943dad)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java


> JWTRedirectAuthenticationHandler breaks java8 javadocs
> --
>
> Key: HADOOP-11864
> URL: https://issues.apache.org/jira/browse/HADOOP-11864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Steve Loughran
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11864-0.patch
>
>
> Jenkins on Java8 is failing as {{JWTRedirectAuthenticationHandler}} has 
> {{}} tags in it, something javadoc on java8 considers illegal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11868) Invalid user logins trigger large backtraces in server log

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509022#comment-14509022
 ] 

Hudson commented on HADOOP-11868:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/173/])
HADOOP-11868. Invalid user logins trigger large backtraces in server log. 
Contributed by Chang Li (jlowe: rev 0ebe84d30af2046775884c9fb1e054da31582657)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java


> Invalid user logins trigger large backtraces in server log
> --
>
> Key: HADOOP-11868
> URL: https://issues.apache.org/jira/browse/HADOOP-11868
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.7.1
>
> Attachments: YARN-3520.patch
>
>
> {code}
> WARN sso.CookieValidatorHelpers: Cookie has expired by 25364187 msec
> WARN server.AuthenticationFilter: Authentication exception: Invalid Cookie
> 166 org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid Bouncer Cookie
> 167 at 
> KerberosAuthenticationHandler.bouncerAuthenticate(KerberosAuthenticationHandler.java:94)
> 168 at 
> AuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:82)
> 169 at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> 170 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 171 at 
> org.apache.hadoop.yarn.server.timeline.webapp.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
> 172 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 173 at 
> org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
> 174 at GzipFilter.doFilter(GzipFilter.java:188)
> 175 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 176 at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
> 177 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 178 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 179 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 180 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 181 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 182 at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> 183 at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> 184 at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> 185 at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> 186 at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> 187 at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> 188 at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> 189 at org.mortbay.jetty.Server.handle(Server.java:326)
> 190 at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> 191 at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> 192 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> 193 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> 194 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> 195 at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> 196 at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>  WARN sso.CookieValidatorHelpers: Cookie has expired by 25373197 msec
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11848) Incorrect arguments to sizeof in DomainSocket.c

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509025#comment-14509025
 ] 

Hudson commented on HADOOP-11848:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/173/])
HADOOP-11848. Incorrect arguments to sizeof in DomainSocket.c (Malcolm Kavalsky 
via Colin P. McCabe) (cmccabe: rev a3b1d8c90288a6237089a98d4a81c25f44aedb2c)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
* hadoop-common-project/hadoop-common/CHANGES.txt


> Incorrect arguments to sizeof in DomainSocket.c
> ---
>
> Key: HADOOP-11848
> URL: https://issues.apache.org/jira/browse/HADOOP-11848
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native
> Fix For: 2.8.0
>
> Attachments: HADOOP-11848.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The length of the buffer to be zeroed using sizeof should use the structure 
> itself, not the address of the structure.
> DomainSocket.c line 156
> Replace current:
> memset(&addr, 0, sizeof(&addr));
> With:
> memset(&addr, 0, sizeof(addr));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11850) Typos in hadoop-common java docs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509028#comment-14509028
 ] 

Hudson commented on HADOOP-11850:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/173/])
HADOOP-11850: Typos in hadoop-common java docs. Contributed by Surendra Singh 
Lilhore. (jghoman: rev e54a3e1f4f3ea4dbba14f3fab0c395a235763c54)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueueMXBean.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/BoundedRangeFileInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java


> Typos in hadoop-common java docs
> 
>
> Key: HADOOP-11850
> URL: https://issues.apache.org/jira/browse/HADOOP-11850
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.6.0
>Reporter: surendra singh lilhore
>Assignee: surendra singh lilhore
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-11850.patch, HADOOP-11850_1.patch, 
> HADOOP-11850_2.patch
>
>
> This jira fixes the typos in the hadoop-common project



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11861) test-patch.sh rewrite addendum patch

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509027#comment-14509027
 ] 

Hudson commented on HADOOP-11861:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #173 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/173/])
HADOOP-11861. test-patch.sh rewrite addendum patch. Contributed by Allen 
Wittenauer. (cnauroth: rev 18eb5e79345295b2259b566c154375ad2a6216a1)
* dev-support/test-patch.sh
* dev-support/test-patch.d/shellcheck.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh rewrite addendum patch
> 
>
> Key: HADOOP-11861
> URL: https://issues.apache.org/jira/browse/HADOOP-11861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11861-00.patch, HADOOP-11861-01.patch, 
> HADOOP-11861-02.patch, HADOOP-11861-04.patch
>
>
> if you specify "--build-native=false"  like 
> {code}
> ./dev-support/test-patch.sh  --build-native=false 
> ~/workspaces/patches/hdfs-8211.001.patch 
> {code}
> mvn fails with invalid lifecycle error. 
> Here are the steps to repro :
> 1) run any patch with the --build-native=false option 
> 2) Open up  /tmp/hadoop-test-patch//patchJavacWarnings.txt to see 
> the failure reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11861) test-patch.sh rewrite addendum patch

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509069#comment-14509069
 ] 

Hudson commented on HADOOP-11861:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2122 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2122/])
HADOOP-11861. test-patch.sh rewrite addendum patch. Contributed by Allen 
Wittenauer. (cnauroth: rev 18eb5e79345295b2259b566c154375ad2a6216a1)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh
* dev-support/test-patch.d/shellcheck.sh


> test-patch.sh rewrite addendum patch
> 
>
> Key: HADOOP-11861
> URL: https://issues.apache.org/jira/browse/HADOOP-11861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11861-00.patch, HADOOP-11861-01.patch, 
> HADOOP-11861-02.patch, HADOOP-11861-04.patch
>
>
> if you specify "--build-native=false"  like 
> {code}
> ./dev-support/test-patch.sh  --build-native=false 
> ~/workspaces/patches/hdfs-8211.001.patch 
> {code}
> mvn fails with invalid lifecycle error. 
> Here are the steps to repro :
> 1) run any patch with the --build-native=false option 
> 2) Open up  /tmp/hadoop-test-patch//patchJavacWarnings.txt to see 
> the failure reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509072#comment-14509072
 ] 

Hadoop QA commented on HADOOP-11627:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 56s | Site still builds. |
| {color:green}+1{color} | checkstyle |   3m 56s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 17s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:red}-1{color} | common tests |  36m 45s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | mapreduce tests | 105m 48s | Tests passed in 
hadoop-mapreduce-client-jobclient. |
| | | 188m 55s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.http.TestHttpCookieFlag |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12727586/HADOOP-11627-011.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / baf8bc6 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6166/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6166/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-mapreduce-client-jobclient test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6166/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6166/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6166/console |


This message was automatically generated.

> Remove io.native.lib.available from trunk
> -
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627-011.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11868) Invalid user logins trigger large backtraces in server log

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509064#comment-14509064
 ] 

Hudson commented on HADOOP-11868:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2122 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2122/])
HADOOP-11868. Invalid user logins trigger large backtraces in server log. 
Contributed by Chang Li (jlowe: rev 0ebe84d30af2046775884c9fb1e054da31582657)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java


> Invalid user logins trigger large backtraces in server log
> --
>
> Key: HADOOP-11868
> URL: https://issues.apache.org/jira/browse/HADOOP-11868
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.7.1
>
> Attachments: YARN-3520.patch
>
>
> {code}
> WARN sso.CookieValidatorHelpers: Cookie has expired by 25364187 msec
> WARN server.AuthenticationFilter: Authentication exception: Invalid Cookie
> 166 org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid Bouncer Cookie
> 167 at 
> KerberosAuthenticationHandler.bouncerAuthenticate(KerberosAuthenticationHandler.java:94)
> 168 at 
> AuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:82)
> 169 at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> 170 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 171 at 
> org.apache.hadoop.yarn.server.timeline.webapp.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
> 172 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 173 at 
> org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
> 174 at GzipFilter.doFilter(GzipFilter.java:188)
> 175 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 176 at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
> 177 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 178 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 179 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 180 at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> 181 at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 182 at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> 183 at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> 184 at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> 185 at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> 186 at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> 187 at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> 188 at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> 189 at org.mortbay.jetty.Server.handle(Server.java:326)
> 190 at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> 191 at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> 192 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> 193 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> 194 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> 195 at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> 196 at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>  WARN sso.CookieValidatorHelpers: Cookie has expired by 25373197 msec
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11850) Typos in hadoop-common java docs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509070#comment-14509070
 ] 

Hudson commented on HADOOP-11850:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2122 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2122/])
HADOOP-11850: Typos in hadoop-common java docs. Contributed by Surendra Singh 
Lilhore. (jghoman: rev e54a3e1f4f3ea4dbba14f3fab0c395a235763c54)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BufferedFSInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueueMXBean.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputChecker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/BoundedRangeFileInputStream.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticationException.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ContentSummary.java


> Typos in hadoop-common java docs
> 
>
> Key: HADOOP-11850
> URL: https://issues.apache.org/jira/browse/HADOOP-11850
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.6.0
>Reporter: surendra singh lilhore
>Assignee: surendra singh lilhore
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-11850.patch, HADOOP-11850_1.patch, 
> HADOOP-11850_2.patch
>
>
> This jira will fix the typos in the hadoop-common project



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11848) Incorrect arguments to sizeof in DomainSocket.c

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509067#comment-14509067
 ] 

Hudson commented on HADOOP-11848:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2122 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2122/])
HADOOP-11848. Incorrect arguments to sizeof in DomainSocket.c (Malcolm Kavalsky 
via Colin P. McCabe) (cmccabe: rev a3b1d8c90288a6237089a98d4a81c25f44aedb2c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c


> Incorrect arguments to sizeof in DomainSocket.c
> ---
>
> Key: HADOOP-11848
> URL: https://issues.apache.org/jira/browse/HADOOP-11848
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native
> Fix For: 2.8.0
>
> Attachments: HADOOP-11848.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The length of the buffer to be zeroed, computed with sizeof, should use the 
> structure itself, not the address of the structure.
> DomainSocket.c line 156
> Replace current:
> memset(&addr, 0, sizeof(&addr));
> With:
> memset(&addr, 0, sizeof(addr));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11864) JWTRedirectAuthenticationHandler breaks java8 javadocs

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509065#comment-14509065
 ] 

Hudson commented on HADOOP-11864:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2122 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2122/])
HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs. (Larry 
McCay via stevel) (stevel: rev 08d4386162a878e88ac8f3d8db246e17c2943dad)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java


> JWTRedirectAuthenticationHandler breaks java8 javadocs
> --
>
> Key: HADOOP-11864
> URL: https://issues.apache.org/jira/browse/HADOOP-11864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: Jenkins on Java8
>Reporter: Steve Loughran
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11864-0.patch
>
>
> Jenkins on Java8 is failing as {{JWTRedirectAuthenticationHandler}} has 
> {{}} tags in it, something javadoc on java8 considers illegal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11859) PseudoAuthenticationHandler fails with httpcomponents v4.4

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509061#comment-14509061
 ] 

Hudson commented on HADOOP-11859:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2122 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2122/])
HADOOP-11859. PseudoAuthenticationHandler fails with httpcomponents v4.4. 
Contributed by Eugene Koifman. (jitendra: rev 
1f4767c7f2d1fdd23954c16e903acd2dca78a1e1)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.java


> PseudoAuthenticationHandler fails with httpcomponents v4.4
> --
>
> Key: HADOOP-11859
> URL: https://issues.apache.org/jira/browse/HADOOP-11859
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.8.0
>
> Attachments: HADOOP-11859.patch
>
>
> This shows up in the context of WebHCat and Hive (which recently moved to 
> httpcomponents:httpclient:4.4) but could happen in other places.
> URLEncodedUtils.parse(String, Charset) is called from 
> PseudoAuthenticationHandler.getUserName() with the 1st argument produced by 
> HttpServletRequest.getQueryString().
> The latter returns NULL if there is no query string in the URL.
> In httpcomponents:httpclient:4.2.5, parse() gracefully handles the first 
> argument being NULL, but in 4.4 it NPEs.
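
The null guard the issue calls for can be sketched with a hypothetical stand-in parser; this is not the actual URLEncodedUtils API or the committed Hadoop fix, and the class and method names below are assumptions for illustration only:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryStringGuard {
  // Hypothetical stand-in for URLEncodedUtils.parse(): returns an empty map
  // instead of NPE-ing when the servlet container reports no query string
  // (HttpServletRequest.getQueryString() == null).
  static Map<String, String> parseQuery(String queryString) {
    if (queryString == null || queryString.isEmpty()) {
      return Collections.emptyMap();
    }
    Map<String, String> params = new LinkedHashMap<>();
    for (String pair : queryString.split("&")) {
      int eq = pair.indexOf('=');
      if (eq >= 0) {
        params.put(pair.substring(0, eq), pair.substring(eq + 1));
      } else {
        params.put(pair, "");
      }
    }
    return params;
  }

  public static void main(String[] args) {
    System.out.println(parseQuery(null).size());                      // 0, no NPE
    System.out.println(parseQuery("user.name=alice").get("user.name")); // alice
  }
}
```

The point is only that the caller checks for null before handing the string to the parser, which is what broke when httpclient 4.4 stopped tolerating a null first argument.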



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11851) s3n to swallow IOEs on inner stream close

2015-04-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509210#comment-14509210
 ] 

Steve Loughran commented on HADOOP-11851:
-

It's similar to HADOOP-11730; that's the overall "recover from failure" code. 
This issue is for {{close()}} not to trigger problems.

HADOOP-11730 is probably a superset.

> s3n to swallow IOEs on inner stream close
> -
>
> Key: HADOOP-11851
> URL: https://issues.apache.org/jira/browse/HADOOP-11851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Anu Engineer
>Priority: Minor
>
> We've seen a situation where some work was failing from (recurrent) 
> connection reset exceptions.
> Irrespective of the root cause, these were surfacing not in the read 
> operations, but when the input stream was being closed, including during a 
> seek().
> These exceptions could be caught and logged as warnings, rather than 
> triggering immediate failures. It shouldn't matter to the next GET whether 
> the last stream closed prematurely, as long as the new one works
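
A minimal sketch of the proposed behavior, using a hypothetical wrapper class rather than the actual s3n code, might look like this:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: a stream wrapper whose close() downgrades IOExceptions
// from the inner stream to a logged warning, as the issue proposes.
public class QuietCloseStream extends FilterInputStream {
  QuietCloseStream(InputStream in) { super(in); }

  @Override
  public void close() {
    try {
      in.close();
    } catch (IOException e) {
      // Swallow and warn: a premature close of the old connection should not
      // fail the caller; the next GET opens a fresh stream anyway.
      System.err.println("WARN: ignoring exception on inner close: " + e);
    }
  }

  public static void main(String[] args) throws IOException {
    // Simulate an inner stream whose close() throws, like a reset connection.
    InputStream flaky = new ByteArrayInputStream(new byte[]{1, 2}) {
      @Override
      public void close() throws IOException {
        throw new IOException("connection reset");
      }
    };
    QuietCloseStream s = new QuietCloseStream(flaky);
    System.out.println(s.read()); // 1
    s.close();                    // logs a warning instead of throwing
    System.out.println("closed without error");
  }
}
```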



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11730) Regression: s3n read failure recovery broken

2015-04-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11730:

Target Version/s: 2.7.1
  Status: Patch Available  (was: Open)

> Regression: s3n read failure recovery broken
> 
>
> Key: HADOOP-11730
> URL: https://issues.apache.org/jira/browse/HADOOP-11730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0, 2.5.0
> Environment: HDP 2.2
>Reporter: Takenori Sato
>Assignee: Takenori Sato
> Attachments: HADOOP-11730-branch-2.6.0.001.patch
>
>
> s3n attempts to read again when it encounters an IOException during a read. 
> But the current logic does not reopen the connection; thus, it ends up as a 
> no-op, committing the wrong (truncated) output.
> Here's a stack trace as an example.
> {quote}
> 2015-03-13 20:17:24,835 [TezChild] INFO  
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor - 
> Starting output org.apache.tez.mapreduce.output.MROutput@52008dbd to vertex 
> scope-12
> 2015-03-13 20:17:24,866 [TezChild] DEBUG 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - 
> Released HttpMethod as its response data stream threw an exception
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 296587138; received: 155648
>   at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:184)
>   at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
>   at 
> org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78)
>   at 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:145)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>   at 
> org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>   at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
>   at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:259)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
>   at 
> org.apache.tez.mapreduce.lib.MRReaderMapReduce.next(MRReaderMapReduce.java:116)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POSimpleTezLoad.getNextTuple(POSimpleTezLoad.java:106)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:246)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POFilter.getNextTuple(POFilter.java:91)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:117)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:313)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:192)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.Thre

[jira] [Commented] (HADOOP-11730) Regression: s3n read failure recovery broken

2015-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509304#comment-14509304
 ] 

Hadoop QA commented on HADOOP-11730:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 32s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 3  line(s) that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   5m 28s | The applied patch generated  1 
 additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 38s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 15s | Tests passed in 
hadoop-aws. |
| | |  40m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12705845/HADOOP-11730-branch-2.6.0.001.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 189a63a |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/artifact/patchprocess/whitespace.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/artifact/patchprocess/checkstyle-result-diff.txt
 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/console |


This message was automatically generated.

> Regression: s3n read failure recovery broken
> 
>
> Key: HADOOP-11730
> URL: https://issues.apache.org/jira/browse/HADOOP-11730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0, 2.6.0
> Environment: HDP 2.2
>Reporter: Takenori Sato
>Assignee: Takenori Sato
> Attachments: HADOOP-11730-branch-2.6.0.001.patch
>
>
> s3n attempts to read again when it encounters an IOException during a read. 
> But the current logic does not reopen the connection; thus, it ends up as a 
> no-op, committing the wrong (truncated) output.
> Here's a stack trace as an example.
> {quote}
> 2015-03-13 20:17:24,835 [TezChild] INFO  
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor - 
> Starting output org.apache.tez.mapreduce.output.MROutput@52008dbd to vertex 
> scope-12
> 2015-03-13 20:17:24,866 [TezChild] DEBUG 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - 
> Released HttpMethod as its response data stream threw an exception
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 296587138; received: 155648
>   at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:184)
>   at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
>   at 
> org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78)
>   at 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:145)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>   at 
> org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>   at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
>   at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:259)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.ne

[jira] [Updated] (HADOOP-10597) RPC Server signals backoff to clients when all request queues are full

2015-04-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10597:
---
Summary: RPC Server signals backoff to clients when all request queues are 
full  (was: Evaluate if we can have RPC client back off when server is under 
heavy load)

> RPC Server signals backoff to clients when all request queues are full
> --
>
> Key: HADOOP-10597
> URL: https://issues.apache.org/jira/browse/HADOOP-10597
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HADOOP-10597-2.patch, HADOOP-10597-3.patch, 
> HADOOP-10597-4.patch, HADOOP-10597-5.patch, HADOOP-10597-6.patch, 
> HADOOP-10597.patch, MoreRPCClientBackoffEvaluation.pdf, 
> RPCClientBackoffDesignAndEvaluation.pdf
>
>
> Currently, if an application hits the NN too hard, RPC requests will be in a 
> blocking state, assuming OS connections don't run out. Alternatively, RPC or 
> the NN can throw some well-defined exception back to the client based on 
> certain policies when it is under heavy load; the client will understand 
> such an exception and do exponential back-off, as another implementation of 
> RetryInvocationHandler.
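
The client-side back-off described above can be sketched roughly as follows; the base delay, cap, and class name are assumptions for illustration, not the actual RetryInvocationHandler implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of capped exponential back-off: on a backoff signal from the server,
// the client waits base * 2^attempt (up to a cap) before retrying.
public class ExponentialBackoff {
  static long delayMillis(int attempt, long baseMillis, long capMillis) {
    long delay = baseMillis * (1L << Math.min(attempt, 30)); // avoid overflow
    return Math.min(delay, capMillis);
  }

  public static void main(String[] args) {
    List<Long> delays = new ArrayList<>();
    for (int attempt = 0; attempt < 6; attempt++) {
      delays.add(delayMillis(attempt, 100, 10_000));
    }
    System.out.println(delays); // [100, 200, 400, 800, 1600, 3200]
  }
}
```

Real implementations usually add random jitter on top of the doubling so retrying clients don't re-converge on the server in lockstep.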



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10597) RPC Server signals backoff to clients when all request queues are full

2015-04-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10597:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed it to trunk and branch-2.

Thanks Ming and Steve. [~mingma], could you please add a short release note to 
the Jira?

> RPC Server signals backoff to clients when all request queues are full
> --
>
> Key: HADOOP-10597
> URL: https://issues.apache.org/jira/browse/HADOOP-10597
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.8.0
>
> Attachments: HADOOP-10597-2.patch, HADOOP-10597-3.patch, 
> HADOOP-10597-4.patch, HADOOP-10597-5.patch, HADOOP-10597-6.patch, 
> HADOOP-10597.patch, MoreRPCClientBackoffEvaluation.pdf, 
> RPCClientBackoffDesignAndEvaluation.pdf
>
>
> Currently, if an application hits the NN too hard, RPC requests will be in a 
> blocking state, assuming OS connections don't run out. Alternatively, RPC or 
> the NN can throw some well-defined exception back to the client based on 
> certain policies when it is under heavy load; the client will understand 
> such an exception and do exponential back-off, as another implementation of 
> RetryInvocationHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10597) RPC Server signals backoff to clients when all request queues are full

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509326#comment-14509326
 ] 

Hudson commented on HADOOP-10597:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7647 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7647/])
HADOOP-10597. RPC Server signals backoff to clients when all request queues are 
full. (Contributed by Ming Ma) (arp: rev 
49f6e3d35e0f89637ae9ea970f249c13bdc0fd49)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java


> RPC Server signals backoff to clients when all request queues are full
> --
>
> Key: HADOOP-10597
> URL: https://issues.apache.org/jira/browse/HADOOP-10597
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.8.0
>
> Attachments: HADOOP-10597-2.patch, HADOOP-10597-3.patch, 
> HADOOP-10597-4.patch, HADOOP-10597-5.patch, HADOOP-10597-6.patch, 
> HADOOP-10597.patch, MoreRPCClientBackoffEvaluation.pdf, 
> RPCClientBackoffDesignAndEvaluation.pdf
>
>
> Currently, if an application hits the NN too hard, RPC requests will be in a 
> blocking state, assuming OS connections don't run out. Alternatively, RPC or 
> the NN can throw some well-defined exception back to the client based on 
> certain policies when it is under heavy load; the client will understand 
> such an exception and do exponential back-off, as another implementation of 
> RetryInvocationHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11866) increase readability of the output of white space and checkstyle script

2015-04-23 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated HADOOP-11866:
---
Attachment: HADOOP-11866.20150423-1.patch

Thanks for the comments [~wheat9] & [~busbey].
+1 for the suggestion in the output file, as many people like me might not be 
aware of the {{git apply --whitespace=fix}} option.
IMO, if there is only a small number of whitespace issues, then {{git apply 
--whitespace=fix}} would be a little more work than correcting them manually, 
so I would prefer to have the line numbers printed so that I can make the 
required changes faster.
[~busbey], IMHO just the filename is not very useful; the file name followed 
by the actual line numbers within that file would be useful, or else line 
numbers based on the patch would be better. IMO the latter approach is simpler 
and better, hence I updated the patch with the latter approach and a header 
for the checkstyle output.

> increase readability of the output of white space and checkstyle script
> ---
>
> Key: HADOOP-11866
> URL: https://issues.apache.org/jira/browse/HADOOP-11866
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Minor
> Attachments: HADOOP-11866.20150422-1.patch, 
> HADOOP-11866.20150423-1.patch
>
>
> HADOOP-11746 supports listing the lines that have trailing white spaces, but 
> doesn't report the patch line number. Without this, the report output will 
> not be of much help, as in most cases it reports blank lines. Also, for 
> first-timers it would be difficult to understand the output of the 
> checkstyle script, hence adding a header



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11852) Disable symlinks in trunk

2015-04-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509496#comment-14509496
 ] 

Colin Patrick McCabe commented on HADOOP-11852:
---

+1.  Thanks, Andrew.

> Disable symlinks in trunk
> -
>
> Key: HADOOP-11852
> URL: https://issues.apache.org/jira/browse/HADOOP-11852
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-11852.001.patch
>
>
> In HADOOP-10020 and HADOOP-10162 we disabled symlinks in branch-2. Since 
> there's currently no plan to finish this work, let's disable it in trunk too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11852) Disable symlinks in trunk

2015-04-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509499#comment-14509499
 ] 

Colin Patrick McCabe commented on HADOOP-11852:
---

Re: eclipse:eclipse, I think there have been problems with that recently due to 
some other changes like the configuration reorganization.  I definitely don't 
think it's anything in this patch

> Disable symlinks in trunk
> -
>
> Key: HADOOP-11852
> URL: https://issues.apache.org/jira/browse/HADOOP-11852
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-11852.001.patch
>
>
> In HADOOP-10020 and HADOOP-10162 we disabled symlinks in branch-2. Since 
> there's currently no plan to finish this work, let's disable it in trunk too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11843) Make setting up the build environment easier

2015-04-23 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11843:
---
 Component/s: build
Target Version/s: 2.8.0
Hadoop Flags: Reviewed

+1 for the latest patch.  Thanks again, Nils.  I'll hold off committing this 
until tomorrow in case there is any remaining feedback from the other watchers.

> Make setting up the build environment easier
> 
>
> Key: HADOOP-11843
> URL: https://issues.apache.org/jira/browse/HADOOP-11843
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HADOOP-11843-2015-04-17-1612.patch, 
> HADOOP-11843-2015-04-17-2226.patch, HADOOP-11843-2015-04-17-2308.patch, 
> HADOOP-11843-2015-04-19-2206.patch, HADOOP-11843-2015-04-19-2232.patch, 
> HADOOP-11843-2015-04-22-1122.patch, HADOOP-11843-2015-04-23-1000.patch
>
>
> ( As discussed with [~aw] )
> In AVRO-1537 a docker based solution was created to setup all the tools for 
> doing a full build. This enables much easier reproduction of any issues and 
> getting up and running for new developers.
> This issue is to 'copy/port' that setup into the hadoop project in 
> preparation for the bug squash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11872) "hadoop dfs" command prints message about using "yarn jar" on Windows(branch-2 only)

2015-04-23 Thread Varun Vasudev (JIRA)
Varun Vasudev created HADOOP-11872:
--

 Summary: "hadoop dfs" command prints message about using "yarn 
jar" on Windows(branch-2 only)
 Key: HADOOP-11872
 URL: https://issues.apache.org/jira/browse/HADOOP-11872
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Varun Vasudev
Assignee: Varun Vasudev
Priority: Minor


Using the "hadoop dfs" command on a branch-2 build prints a message about using 
yarn jar.

{noformat}
C:\hadoop\hadoop-common-project\hadoop-common\src\main\bin> hadoop.cmd dfs -ls
   note: please use "yarn jar" to launch
 YARN applications, not this command.
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11872) "hadoop dfs" command prints message about using "yarn jar" on Windows(branch-2 only)

2015-04-23 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-11872:
---
Attachment: HADOOP-11872-branch-2.001.patch

Uploaded a patch with the fix.

> "hadoop dfs" command prints message about using "yarn jar" on 
> Windows(branch-2 only)
> 
>
> Key: HADOOP-11872
> URL: https://issues.apache.org/jira/browse/HADOOP-11872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Minor
> Attachments: HADOOP-11872-branch-2.001.patch
>
>
> Using the "hadoop dfs" command on a branch-2 build prints a message about 
> using yarn jar.
> {noformat}
> C:\hadoop\hadoop-common-project\hadoop-common\src\main\bin> hadoop.cmd dfs -ls
>note: please use "yarn jar" to launch
>  YARN applications, not this command.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11872) "hadoop dfs" command prints message about using "yarn jar" on Windows(branch-2 only)

2015-04-23 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-11872:
---
Status: Patch Available  (was: Open)

> "hadoop dfs" command prints message about using "yarn jar" on 
> Windows(branch-2 only)
> 
>
> Key: HADOOP-11872
> URL: https://issues.apache.org/jira/browse/HADOOP-11872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Minor
> Attachments: HADOOP-11872-branch-2.001.patch
>
>
> Using the "hadoop dfs" command on a branch-2 build prints a message about 
> using yarn jar.
> {noformat}
> C:\hadoop\hadoop-common-project\hadoop-common\src\main\bin> hadoop.cmd dfs -ls
>note: please use "yarn jar" to launch
>  YARN applications, not this command.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11872) "hadoop dfs" command prints message about using "yarn jar" on Windows(branch-2 only)

2015-04-23 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509565#comment-14509565
 ] 

Chris Nauroth commented on HADOOP-11872:


This looks like a merge error from the HADOOP-11257 addendum patch.  I'm 
linking the issues.

> "hadoop dfs" command prints message about using "yarn jar" on 
> Windows(branch-2 only)
> 
>
> Key: HADOOP-11872
> URL: https://issues.apache.org/jira/browse/HADOOP-11872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Minor
> Attachments: HADOOP-11872-branch-2.001.patch
>
>
> Using the "hadoop dfs" command on a branch-2 build prints a message about 
> using yarn jar.
> {noformat}
> C:\hadoop\hadoop-common-project\hadoop-common\src\main\bin> hadoop.cmd dfs -ls
>note: please use "yarn jar" to launch
>  YARN applications, not this command.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11802) DomainSocketWatcher thread terminates sometimes after there is an I/O error during requestShortCircuitShm

2015-04-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509569#comment-14509569
 ] 

Andrew Wang commented on HADOOP-11802:
--

Cool patch, only have nit-like stuff. +1 pending, though it is a lot of nits.

* There's CHANGES.txt included in the patch
* extra imports in DataXCeiver, though really you probably meant to add the 
@Private annotation and just forgot.
* Add a newline in the DSW C file change, break the new POLLHUP check to the 
next line (like the other if you changed). Adding a link to the webpage 
reference (along with mentioning portability / Cygwin) would also be nice, 
since I wondered why we didn't have to catch yet more poll errors.
* Typo "repsponse" in DataXceiver
* We typically have used a singleton to do fault injection, would be good to be 
consistent since it doesn't look like we need per-instance injection. See 
DataNodeFaultInjector, probably the best home. 
* Good fix on the javadoc for allocSlot, but mind adding the blockId param doc 
too for full coverage?
* The Throwable catch, it subsumes the IOException catch, so can we just delete 
it? I think the more specific name of the exception will be printed by its 
toString.
* Param indentation in TestSCCache#checkNumberOfSeg... is inconsistent, I think 
we typically do double indent?
* TestSCCache, the comment "Remove the failure injector" should be moved up a 
few lines

> DomainSocketWatcher thread terminates sometimes after there is an I/O error 
> during requestShortCircuitShm
> -
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11802.001.patch, HADOOP-11802.002.patch, 
> HADOOP-11802.003.patch
>
>
> In {{DataXceiver#requestShortCircuitShm}}, we attempt to recover from some 
> errors by closing the {{DomainSocket}}.  However, this violates the invariant 
> that the domain socket should never be closed when it is being managed by the 
> {{DomainSocketWatcher}}.  Instead, we should call {{shutdown}} on the 
> {{DomainSocket}}.  When this bug hits, it terminates the 
> {{DomainSocketWatcher}} thread.
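
The invariant described above - never close() a socket while a watcher thread is 
still polling it; call shutdown() instead - can be illustrated with plain 
java.net sockets. This is a stand-in sketch only: Hadoop's {{DomainSocket}} is a 
separate internal class, but the shutdown-vs-close distinction is the same idea.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Illustration (not Hadoop's actual DomainSocket API): shutdownOutput()
// signals EOF to the peer while keeping the file descriptor valid, so a
// poll-based watcher that still holds the descriptor sees an orderly hangup.
// close() would invalidate the fd immediately, out from under the watcher.
public class ShutdownVsClose {
  public static void main(String[] args) throws IOException {
    try (ServerSocket server = new ServerSocket(0);
         Socket client = new Socket("127.0.0.1", server.getLocalPort());
         Socket accepted = server.accept()) {
      client.shutdownOutput();            // half-close: peer reads EOF
      InputStream in = accepted.getInputStream();
      int eof = in.read();                // -1: orderly end of stream
      // The socket (and its fd) is still open; a watcher could safely
      // deregister it before anyone calls close().
      System.out.println("eof=" + eof + " closed=" + client.isClosed());
    }
  }
}
```

Running this prints `eof=-1 closed=false`: the peer observes end-of-stream, yet 
the socket object has not been closed.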



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11852) Disable symlinks in trunk

2015-04-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11852:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Colin, committed to trunk.

> Disable symlinks in trunk
> -
>
> Key: HADOOP-11852
> URL: https://issues.apache.org/jira/browse/HADOOP-11852
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hadoop-11852.001.patch
>
>
> In HADOOP-10020 and HADOOP-10162 we disabled symlinks in branch-2. Since 
> there's currently no plan to finish this work, let's disable it in trunk too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11852) Disable symlinks in trunk

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509586#comment-14509586
 ] 

Hudson commented on HADOOP-11852:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #7651 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7651/])
HADOOP-11852. Disable symlinks in trunk. (wang: rev 
26971e52ae65590e618a23621be244e588845adc)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestStat.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemLinkResolver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/SymlinkBaseTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileContextResolveAfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSLinkResolver.java


> Disable symlinks in trunk
> -
>
> Key: HADOOP-11852
> URL: https://issues.apache.org/jira/browse/HADOOP-11852
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hadoop-11852.001.patch
>
>
> In HADOOP-10020 and HADOOP-10162 we disabled symlinks in branch-2. Since 
> there's currently no plan to finish this work, let's disable it in trunk too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11872) "hadoop dfs" command prints message about using "yarn jar" on Windows(branch-2 only)

2015-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509567#comment-14509567
 ] 

Hadoop QA commented on HADOOP-11872:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12727670/HADOOP-11872-branch-2.001.patch
 |
| Optional Tests |  |
| git revision | branch-2 / 0ec6e7e |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6168/console |


This message was automatically generated.

> "hadoop dfs" command prints message about using "yarn jar" on 
> Windows(branch-2 only)
> 
>
> Key: HADOOP-11872
> URL: https://issues.apache.org/jira/browse/HADOOP-11872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Minor
> Attachments: HADOOP-11872-branch-2.001.patch
>
>
> Using the "hadoop dfs" command on a branch-2 build prints a message about 
> using yarn jar.
> {noformat}
> C:\hadoop\hadoop-common-project\hadoop-common\src\main\bin> hadoop.cmd dfs -ls
>note: please use "yarn jar" to launch
>  YARN applications, not this command.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11802) DomainSocketWatcher thread terminates sometimes after there is an I/O error during requestShortCircuitShm

2015-04-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509652#comment-14509652
 ] 

Colin Patrick McCabe commented on HADOOP-11802:
---

bq. extra imports in DataXCeiver, though really you probably meant to add the 
@Private annotation and just forgot.

fixed

bq. Add a newline in the DSW C file change, break the new POLLHUP check to the 
next line (like the other if you changed)

ok

bq. Adding a link to the webpage reference (along with mentioning portability / 
Cygwin) would also be nice, since I wondered why we didn't have to catch yet 
more poll errors.

I added a comment explaining why POLLHUP

bq. Typo "repsponse" in DataXceiver

fixed

bq. We typically have used a singleton to do fault injection, would be good to 
be consistent since it doesn't look like we need per-instance injection. See 
DataNodeFaultInjector, probably the best home.

OK.  That would eliminate the need to make the DataXceiver class public, which 
would be nice.

bq. Good fix on the javadoc for allocSlot, but mind adding the blockId param 
doc too for full coverage?

Hey, I'm trying to make incremental changes here :)  Fixed.

bq. The Throwable catch, it subsumes the IOException catch, so can we just 
delete it? I think the more specific name of the exception will be printed by 
its toString.

ok

bq. Param indentation in TestSCCache#checkNumberOfSeg... is inconsistent, I 
think we typically do double indent?

ok

bq. TestSCCache, the comment "Remove the failure injector" should be moved up a 
few lines

let me just get rid of that since the log message says the same thing
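
The singleton fault-injection pattern the review points at 
(DataNodeFaultInjector-style) boils down to a swappable static hook. A minimal 
sketch follows; all names here are illustrative, not Hadoop's actual API:

```java
// Sketch of the singleton fault-injector pattern: production code calls a
// no-op hook through a static accessor, and tests swap in a throwing instance.
public class FaultInjectorDemo {
  static class FaultInjector {
    private static FaultInjector instance = new FaultInjector();
    public static FaultInjector get() { return instance; }
    public static void set(FaultInjector injector) { instance = injector; }
    // No-op in production; overridden by tests to inject failures.
    public void beforeShmRequest() throws java.io.IOException { }
  }

  // Stand-in for the code path under test (e.g. a shm request handler).
  static String requestShm() throws java.io.IOException {
    FaultInjector.get().beforeShmRequest();  // injection point
    return "ok";
  }

  public static void main(String[] args) throws java.io.IOException {
    System.out.println(requestShm());        // default injector: no fault
    FaultInjector.set(new FaultInjector() {
      @Override public void beforeShmRequest() throws java.io.IOException {
        throw new java.io.IOException("injected");
      }
    });
    try {
      requestShm();
    } catch (java.io.IOException e) {
      System.out.println("caught " + e.getMessage());
    }
  }
}
```

Because the hook is a process-wide singleton, no per-instance plumbing is 
needed, which is why the review prefers it over making the class public for 
injection.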

> DomainSocketWatcher thread terminates sometimes after there is an I/O error 
> during requestShortCircuitShm
> -
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11802.001.patch, HADOOP-11802.002.patch, 
> HADOOP-11802.003.patch
>
>
> In {{DataXceiver#requestShortCircuitShm}}, we attempt to recover from some 
> errors by closing the {{DomainSocket}}.  However, this violates the invariant 
> that the domain socket should never be closed when it is being managed by the 
> {{DomainSocketWatcher}}.  Instead, we should call {{shutdown}} on the 
> {{DomainSocket}}.  When this bug hits, it terminates the 
> {{DomainSocketWatcher}} thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11802) DomainSocketWatcher thread terminates sometimes after there is an I/O error during requestShortCircuitShm

2015-04-23 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11802:
--
Attachment: HADOOP-11802.004.patch

> DomainSocketWatcher thread terminates sometimes after there is an I/O error 
> during requestShortCircuitShm
> -
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11802.001.patch, HADOOP-11802.002.patch, 
> HADOOP-11802.003.patch, HADOOP-11802.004.patch
>
>
> In {{DataXceiver#requestShortCircuitShm}}, we attempt to recover from some 
> errors by closing the {{DomainSocket}}.  However, this violates the invariant 
> that the domain socket should never be closed when it is being managed by the 
> {{DomainSocketWatcher}}.  Instead, we should call {{shutdown}} on the 
> {{DomainSocket}}.  When this bug hits, it terminates the 
> {{DomainSocketWatcher}} thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11872) "hadoop dfs" command prints message about using "yarn jar" on Windows(branch-2 only)

2015-04-23 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11872:
---
 Target Version/s: 2.7.1
Affects Version/s: 2.7.0

I think this is a worthwhile and low-risk patch for 2.7.1, so I'm setting the 
target version.

> "hadoop dfs" command prints message about using "yarn jar" on 
> Windows(branch-2 only)
> 
>
> Key: HADOOP-11872
> URL: https://issues.apache.org/jira/browse/HADOOP-11872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.0
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Minor
> Attachments: HADOOP-11872-branch-2.001.patch
>
>
> Using the "hadoop dfs" command on a branch-2 build prints a message about 
> using yarn jar.
> {noformat}
> C:\hadoop\hadoop-common-project\hadoop-common\src\main\bin> hadoop.cmd dfs -ls
>note: please use "yarn jar" to launch
>  YARN applications, not this command.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11872) "hadoop dfs" command prints message about using "yarn jar" on Windows(branch-2 only)

2015-04-23 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11872:
---
   Resolution: Fixed
Fix Version/s: 2.7.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I tested on a local Windows VM to confirm the fix.  I 
committed this to branch-2 and branch-2.7.

Nice find, Varun!  Thank you for the patch.

> "hadoop dfs" command prints message about using "yarn jar" on 
> Windows(branch-2 only)
> 
>
> Key: HADOOP-11872
> URL: https://issues.apache.org/jira/browse/HADOOP-11872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.0
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HADOOP-11872-branch-2.001.patch
>
>
> Using the "hadoop dfs" command on a branch-2 build prints a message about 
> using yarn jar.
> {noformat}
> C:\hadoop\hadoop-common-project\hadoop-common\src\main\bin> hadoop.cmd dfs -ls
>note: please use "yarn jar" to launch
>  YARN applications, not this command.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9891) CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException

2015-04-23 Thread Darrell Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509736#comment-14509736
 ] 

Darrell Taylor commented on HADOOP-9891:


I'll have a go at fixing this as I'm trying to use it.  Would anybody be able to 
give me any pointers towards where I should be looking to get the missing class 
into the jar?

> CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException
> 
>
> Key: HADOOP-9891
> URL: https://issues.apache.org/jira/browse/HADOOP-9891
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.1.1-beta
>Reporter: Steve Loughran
>Priority: Minor
>
> The instructions on how to start up a mini CLI cluster in 
> {{CLIMiniCluster.md}} don't work - it looks like {{MiniYarnCluster}} isn't on 
> the classpath



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11730) Regression: s3n read failure recovery broken

2015-04-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11730:

Attachment: HADOOP-11730-002.patch

+1, committing.

Here's the patch in sync with trunk; it also incorporates HADOOP-11851 in the 
close logic, as they go hand in hand. We can't have the recovery process 
damaged by ConnectionReset exceptions being picked up while it closes the old 
stream.

> Regression: s3n read failure recovery broken
> 
>
> Key: HADOOP-11730
> URL: https://issues.apache.org/jira/browse/HADOOP-11730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0, 2.6.0
> Environment: HDP 2.2
>Reporter: Takenori Sato
>Assignee: Takenori Sato
> Attachments: HADOOP-11730-002.patch, 
> HADOOP-11730-branch-2.6.0.001.patch
>
>
> s3n attempts to read again when it encounters an IOException during a read, 
> but the current logic does not reopen the connection; the retry is a no-op, 
> and the wrong (truncated) output is committed.
> Here's a stack trace as an example.
> {quote}
> 2015-03-13 20:17:24,835 [TezChild] INFO  
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor - 
> Starting output org.apache.tez.mapreduce.output.MROutput@52008dbd to vertex 
> scope-12
> 2015-03-13 20:17:24,866 [TezChild] DEBUG 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - 
> Released HttpMethod as its response data stream threw an exception
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 296587138; received: 155648
>   at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:184)
>   at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
>   at 
> org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78)
>   at 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:145)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>   at 
> org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>   at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
>   at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:259)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
>   at 
> org.apache.tez.mapreduce.lib.MRReaderMapReduce.next(MRReaderMapReduce.java:116)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POSimpleTezLoad.getNextTuple(POSimpleTezLoad.java:106)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:246)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POFilter.getNextTuple(POFilter.java:91)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:117)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:313)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:192)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallab

[jira] [Commented] (HADOOP-9891) CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException

2015-04-23 Thread Darrell Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509739#comment-14509739
 ] 

Darrell Taylor commented on HADOOP-9891:


This seems to be the solution.  If I can make it work I'll update the docs.

> CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException
> 
>
> Key: HADOOP-9891
> URL: https://issues.apache.org/jira/browse/HADOOP-9891
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.1.1-beta
>Reporter: Steve Loughran
>Priority: Minor
>
> The instructions on how to start up a mini CLI cluster in 
> {{CLIMiniCluster.md}} don't work - it looks like {{MiniYarnCluster}} isn't on 
> the classpath



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9891) CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException

2015-04-23 Thread Darrell Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509741#comment-14509741
 ] 

Darrell Taylor commented on HADOOP-9891:


The above comment is about the related JIRA I just linked:

https://issues.apache.org/jira/browse/YARN-683

> CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException
> 
>
> Key: HADOOP-9891
> URL: https://issues.apache.org/jira/browse/HADOOP-9891
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.1.1-beta
>Reporter: Steve Loughran
>Priority: Minor
>
> The instructions on how to start up a mini CLI cluster in 
> {{CLIMiniCluster.md}} don't work - it looks like {{MiniYarnCluster}} isn't on 
> the classpath



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11730) Regression: s3n read failure recovery broken

2015-04-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11730:

   Resolution: Fixed
Fix Version/s: 2.7.1
   Status: Resolved  (was: Patch Available)

patch applied; tested against s3 EU.

Given the nature of these problems, it may be good to start thinking about 
whether we can better simulate failures; the test here is a good start, though 
we may want more complex policies... mockito might be the tool to reach for.
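
The failure mode fixed here - a retry that silently reads from a dead connection 
unless the stream is reopened at the current offset - can be sketched with a 
small stand-in stream. This is illustrative only, not the actual 
NativeS3FsInputStream code:

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch of retry-with-reopen: on a read failure, the wrapped stream is
// reopened at the current position before retrying. Without the reopen, the
// retry would read from a dead connection and could silently truncate output.
public class ReopenOnErrorStream extends InputStream {
  interface Opener { InputStream open(long pos) throws IOException; }

  private final Opener opener;
  private InputStream in;
  private long pos;

  ReopenOnErrorStream(Opener opener) throws IOException {
    this.opener = opener;
    this.in = opener.open(0);
  }

  @Override
  public int read() throws IOException {
    try {
      int b = in.read();
      if (b >= 0) pos++;
      return b;
    } catch (IOException e) {
      in.close();
      in = opener.open(pos);  // reopen at the failure offset, then retry once
      int b = in.read();
      if (b >= 0) pos++;
      return b;
    }
  }

  public static void main(String[] args) throws IOException {
    final byte[] data = "hello".getBytes();
    // Simulated flaky source: the first connection dies after 2 bytes.
    ReopenOnErrorStream s = new ReopenOnErrorStream(new Opener() {
      boolean first = true;
      public InputStream open(long p) {
        final boolean flaky = first;
        first = false;
        final long start = p;
        return new InputStream() {
          long i = start;
          public int read() throws IOException {
            if (flaky && i >= 2) throw new IOException("connection reset");
            return i < data.length ? data[(int) i++] : -1;
          }
        };
      }
    });
    StringBuilder out = new StringBuilder();
    int b;
    while ((b = s.read()) != -1) out.append((char) b);
    System.out.println(out);  // full content despite the mid-read failure
  }
}
```

The demo prints `hello`: the reader recovers the full content even though the 
first connection failed mid-read, which is exactly what the broken code path 
failed to do.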

> Regression: s3n read failure recovery broken
> 
>
> Key: HADOOP-11730
> URL: https://issues.apache.org/jira/browse/HADOOP-11730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0, 2.6.0
> Environment: HDP 2.2
>Reporter: Takenori Sato
>Assignee: Takenori Sato
> Fix For: 2.7.1
>
> Attachments: HADOOP-11730-002.patch, 
> HADOOP-11730-branch-2.6.0.001.patch
>
>
> s3n attempts to read again when it encounters an IOException during a read, 
> but the current logic does not reopen the connection; the retry is a no-op, 
> and the wrong (truncated) output is committed.
> Here's a stack trace as an example.
> {quote}
> 2015-03-13 20:17:24,835 [TezChild] INFO  
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor - 
> Starting output org.apache.tez.mapreduce.output.MROutput@52008dbd to vertex 
> scope-12
> 2015-03-13 20:17:24,866 [TezChild] DEBUG 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - 
> Released HttpMethod as its response data stream threw an exception
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 296587138; received: 155648
>   at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:184)
>   at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
>   at 
> org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78)
>   at 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:145)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>   at 
> org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>   at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
>   at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:259)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
>   at 
> org.apache.tez.mapreduce.lib.MRReaderMapReduce.next(MRReaderMapReduce.java:116)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POSimpleTezLoad.getNextTuple(POSimpleTezLoad.java:106)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:246)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POFilter.getNextTuple(POFilter.java:91)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:117)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:313)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:192)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserG

[jira] [Commented] (HADOOP-11730) Regression: s3n read failure recovery broken

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509776#comment-14509776
 ] 

Hudson commented on HADOOP-11730:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7653 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7653/])
HADOOP-11730. Regression: s3n read failure recovery broken.  (Takenori Sato via 
stevel) (stevel: rev 19262d99ebbbd143a7ac9740d3a8e7c842b37591)
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Regression: s3n read failure recovery broken
> 
>
> Key: HADOOP-11730
> URL: https://issues.apache.org/jira/browse/HADOOP-11730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0, 2.6.0
> Environment: HDP 2.2
>Reporter: Takenori Sato
>Assignee: Takenori Sato
> Fix For: 2.7.1
>
> Attachments: HADOOP-11730-002.patch, 
> HADOOP-11730-branch-2.6.0.001.patch
>
>
> s3n attempts to read again when it encounters an IOException during read. But 
> the current logic does not reopen the connection; it ends up as a no-op and 
> commits the wrong (truncated) output.
> Here's a stack trace as an example.
> {quote}
> 2015-03-13 20:17:24,835 [TezChild] INFO  
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor - 
> Starting output org.apache.tez.mapreduce.output.MROutput@52008dbd to vertex 
> scope-12
> 2015-03-13 20:17:24,866 [TezChild] DEBUG 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - 
> Released HttpMethod as its response data stream threw an exception
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 296587138; received: 155648
>   at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:184)
>   at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
>   at 
> org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78)
>   at 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:145)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>   at 
> org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>   at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
>   at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:259)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
>   at 
> org.apache.tez.mapreduce.lib.MRReaderMapReduce.next(MRReaderMapReduce.java:116)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POSimpleTezLoad.getNextTuple(POSimpleTezLoad.java:106)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:246)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POFilter.getNextTuple(POFilter.java:91)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:117)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:313)
>   at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:192)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
>   at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
>   at

[jira] [Commented] (HADOOP-11869) checkstyle rules/script need re-visiting

2015-04-23 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509799#comment-14509799
 ] 

Jason Lowe commented on HADOOP-11869:
-

It's complaining about things I don't recall being part of the coding style 
guidelines, like requiring every method parameter to be declared final or 
every variable and method to have a javadoc comment.

> checkstyle rules/script need re-visiting
> 
>
> Key: HADOOP-11869
> URL: https://issues.apache.org/jira/browse/HADOOP-11869
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sidharta Seethana
>
> There seem to be a lot of arcane errors caused by the checkstyle 
> rules/script, and real issues tend to be buried in this noise. Some examples:
> 1. "Line is longer than 80 characters" - this shows up even for cases like 
> import statements and package names.
> 2. "Missing a Javadoc comment." - for every private member, including cases 
> like "Configuration conf". 
> Having rules like these will result in a large number of pre-commit job 
> failures. We should fine-tune the rules used for checkstyle. 
>  
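A tuned ruleset along the lines suggested above might look like the following checkstyle fragment. The module and property names are real checkstyle configuration elements, but the specific limit and ignore pattern are illustrative choices, not a project decision:

```xml
<!-- Illustrative tuning only: the limit and pattern are not a project decision. -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="LineLength">
      <property name="max" value="100"/>
      <!-- exempt package and import statements from the length check -->
      <property name="ignorePattern" value="^(package|import) .*"/>
    </module>
  </module>
</module>
```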



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11851) s3n to swallow IOEs on inner stream close

2015-04-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11851:

Assignee: Takenori Sato  (was: Anu Engineer)

> s3n to swallow IOEs on inner stream close
> -
>
> Key: HADOOP-11851
> URL: https://issues.apache.org/jira/browse/HADOOP-11851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Takenori Sato
>Priority: Minor
>
> We've seen a situation where some work was failing from (recurrent) 
> connection reset exceptions.
> Irrespective of the root cause, these were surfacing not in the read 
> operations but when the input stream was being closed, including during a 
> seek().
> These exceptions could be caught and logged as warnings rather than 
> triggering immediate failures. It shouldn't matter to the next GET whether 
> the last stream closed prematurely, as long as the new one works.
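A minimal sketch of the proposed "catch, log, and warn" close() behavior. The wrapper class below is hypothetical, not s3n's actual NativeS3FsInputStream:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Minimal sketch of the "catch, log, and warn" close() proposed above.
 * Hypothetical wrapper, not s3n's actual NativeS3FsInputStream.
 */
public class QuietCloseInputStream extends FilterInputStream {
  private static final Logger LOG =
      Logger.getLogger(QuietCloseInputStream.class.getName());

  public QuietCloseInputStream(InputStream in) {
    super(in);
  }

  @Override
  public void close() {
    try {
      in.close();
    } catch (IOException e) {
      // A premature close of the inner stream should not fail the caller;
      // the next GET opens a fresh connection regardless.
      LOG.log(Level.WARNING, "Ignoring exception closing inner stream", e);
    }
  }
}
```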



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11873) Include disk read/write time in FileSystem.Statistics

2015-04-23 Thread Kay Ousterhout (JIRA)
Kay Ousterhout created HADOOP-11873:
---

 Summary: Include disk read/write time in FileSystem.Statistics
 Key: HADOOP-11873
 URL: https://issues.apache.org/jira/browse/HADOOP-11873
 Project: Hadoop Common
  Issue Type: New Feature
  Components: metrics
Reporter: Kay Ousterhout
Priority: Minor


Measuring the time spent blocking on reading / writing data from / to disk is 
very useful for debugging performance problems in applications that read data 
from Hadoop, and can give much more information (e.g., to reflect disk 
contention) than just knowing the total amount of data read.  I'd like to add 
something like "diskMillis" to FileSystem#Statistics to track this.

For data read from HDFS, this can be done with very low overhead by adding 
logging around calls to RemoteBlockReader2.readNextPacket (because this reads 
larger chunks of data, the time added by the instrumentation is very small 
relative to the time to actually read the data).  For data written to HDFS, 
this can be done in DFSOutputStream.waitAndQueueCurrentPacket.
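The low-overhead timing described above can be sketched as follows. "TimedReads" and "diskMillis" here are hypothetical names, not the actual FileSystem.Statistics API:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of accumulating time blocked on bulk reads. "TimedReads" and
 * "diskMillis" are hypothetical names, not the FileSystem.Statistics API.
 */
public class TimedReads {
  // Total wall-clock nanoseconds spent inside read calls.
  private static final AtomicLong diskNanos = new AtomicLong();

  /**
   * Time one bulk read. Because each call moves a large packet of data,
   * the two nanoTime() calls add negligible overhead per byte.
   */
  public static int timedRead(InputStream in, byte[] buf) throws IOException {
    long start = System.nanoTime();
    try {
      return in.read(buf);
    } finally {
      diskNanos.addAndGet(System.nanoTime() - start);
    }
  }

  public static long diskMillis() {
    return diskNanos.get() / 1_000_000;
  }
}
```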

As far as I know, if you want this information today, it is currently only 
accessible by turning on HTrace. It looks like HTrace can't be selectively 
enabled, so a user can't just turn on tracing for 
RemoteBlockReader2.readNextPacket, for example, and instead needs to turn on 
tracing everywhere (which introduces a lot of overhead -- so sampling is 
necessary). It would be hugely helpful to have native metrics for time spent 
reading / writing to disk that are sufficiently low-overhead to be always on. 
(Please correct me if I'm wrong here about what's possible today!)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11730) Regression: s3n read failure recovery broken

2015-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509843#comment-14509843
 ] 

Hadoop QA commented on HADOOP-11730:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 33s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   7m 43s | The applied patch generated  1 
 additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 38s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 14s | Tests passed in 
hadoop-aws. |
| | |  42m 39s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12727699/HADOOP-11730-002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 416b843 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6170/artifact/patchprocess/checkstyle-result-diff.txt
 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6170/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6170/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6170/console |


This message was automatically generated.

> Regression: s3n read failure recovery broken
> 
>
> Key: HADOOP-11730
> URL: https://issues.apache.org/jira/browse/HADOOP-11730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.0, 2.6.0
> Environment: HDP 2.2
>Reporter: Takenori Sato
>Assignee: Takenori Sato
> Fix For: 2.7.1
>
> Attachments: HADOOP-11730-002.patch, 
> HADOOP-11730-branch-2.6.0.001.patch
>
>
> s3n attempts to read again when it encounters an IOException during read. But 
> the current logic does not reopen the connection; it ends up as a no-op and 
> commits the wrong (truncated) output.
> Here's a stack trace as an example.
> {quote}
> 2015-03-13 20:17:24,835 [TezChild] INFO  
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor - 
> Starting output org.apache.tez.mapreduce.output.MROutput@52008dbd to vertex 
> scope-12
> 2015-03-13 20:17:24,866 [TezChild] DEBUG 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - 
> Released HttpMethod as its response data stream threw an exception
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 296587138; received: 155648
>   at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:184)
>   at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
>   at 
> org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78)
>   at 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:145)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>   at 
> org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>   at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
>   at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:259)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
>   at 
> org.apache.tez.map

[jira] [Assigned] (HADOOP-11793) Update create-release for releasedocmaker.py

2015-04-23 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin reassigned HADOOP-11793:
---

Assignee: ramtin

[~aw], based on HADOOP-11792 I removed the lines that copy CHANGES.txt to the 
stage directory.
Also, based on HADOOP-11743, I added a line to call clean with the releasedocs 
profile.
Last but not least, based on HADOOP-11731, I generate the site.
Please let me know if any more modifications are required.

> Update create-release for releasedocmaker.py
> 
>
> Key: HADOOP-11793
> URL: https://issues.apache.org/jira/browse/HADOOP-11793
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
>
> With the commit of HADOOP-11731, the changelog and release note data is now 
> automated with the build.  The create-release script needs to do the correct 
> thing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11793) Update create-release for releasedocmaker.py

2015-04-23 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-11793:

Attachment: HADOOP-11793.001.patch

> Update create-release for releasedocmaker.py
> 
>
> Key: HADOOP-11793
> URL: https://issues.apache.org/jira/browse/HADOOP-11793
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
> Attachments: HADOOP-11793.001.patch
>
>
> With the commit of HADOOP-11731, the changelog and release note data is now 
> automated with the build.  The create-release script needs to do the correct 
> thing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11851) s3n to swallow IOEs on inner stream close

2015-04-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509893#comment-14509893
 ] 

Steve Loughran commented on HADOOP-11851:
-

This turns out to be a different symptom of the HADOOP-11570 problem; the 
chunked stream reader is trying to read to the end of the input stream. 

That fix patched S3a's close() to shut the stream down more aggressively; we 
don't have a patch for s3n to do the same. Looking at HADOOP-11570, though, 
it's vulnerable to the same problem of a clean close() triggering an 
exception, so it needs a more robust close() operation too.

> s3n to swallow IOEs on inner stream close
> -
>
> Key: HADOOP-11851
> URL: https://issues.apache.org/jira/browse/HADOOP-11851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Takenori Sato
>Priority: Minor
>
> We've seen a situation where some work was failing from (recurrent) 
> connection reset exceptions.
> Irrespective of the root cause, these were surfacing not in the read 
> operations but when the input stream was being closed, including during a 
> seek().
> These exceptions could be caught and logged as warnings rather than 
> triggering immediate failures. It shouldn't matter to the next GET whether 
> the last stream closed prematurely, as long as the new one works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11874) s3a can throw spurious IOEs on close()

2015-04-23 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11874:
---

 Summary: s3a can throw spurious IOEs on close()
 Key: HADOOP-11874
 URL: https://issues.apache.org/jira/browse/HADOOP-11874
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.0
Reporter: Steve Loughran


from a code review, it's clear that the issue seen in HADOOP-11851 can surface 
in S3a, though with HADOOP-11570, it's less likely. It will only happen on 
those cases when abort() isn't called.

The "clean" close() code path needs to catch IOEs from the wrappedStream and 
call abort() in that situation too.
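A sketch of that clean-close path follows. The class and member names are hypothetical and only mirror the terms used in the issue text; the real S3AInputStream internals differ:

```java
import java.io.IOException;
import java.io.InputStream;

/**
 * Sketch of the proposed clean-close path. Class and member names are
 * hypothetical; they only mirror the terms used in the issue text.
 */
public class RobustClose {
  private final InputStream wrappedStream;
  private boolean aborted;

  public RobustClose(InputStream wrappedStream) {
    this.wrappedStream = wrappedStream;
  }

  public void close() {
    try {
      // "Clean" path: drain and close so the connection can be reused.
      wrappedStream.close();
    } catch (IOException e) {
      // Spurious IOE while closing: fall back to abort() instead of
      // surfacing the failure to the caller.
      abort();
    }
  }

  void abort() {
    // Real code would discard the underlying HTTP request/connection here.
    aborted = true;
  }

  public boolean wasAborted() {
    return aborted;
  }
}
```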



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11851) s3n to swallow IOEs on inner stream close

2015-04-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11851.
-
   Resolution: Fixed
Fix Version/s: 2.7.1

> s3n to swallow IOEs on inner stream close
> -
>
> Key: HADOOP-11851
> URL: https://issues.apache.org/jira/browse/HADOOP-11851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Takenori Sato
>Priority: Minor
> Fix For: 2.7.1
>
>
> We've seen a situation where some work was failing from (recurrent) 
> connection reset exceptions.
> Irrespective of the root cause, these were surfacing not in the read 
> operations but when the input stream was being closed, including during a 
> seek().
> These exceptions could be caught and logged as warnings rather than 
> triggering immediate failures. It shouldn't matter to the next GET whether 
> the last stream closed prematurely, as long as the new one works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11802) DomainSocketWatcher thread terminates sometimes after there is an I/O error during requestShortCircuitShm

2015-04-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509959#comment-14509959
 ] 

Andrew Wang commented on HADOOP-11802:
--

Thanks Colin, +1 pending Jenkins.

> DomainSocketWatcher thread terminates sometimes after there is an I/O error 
> during requestShortCircuitShm
> -
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11802.001.patch, HADOOP-11802.002.patch, 
> HADOOP-11802.003.patch, HADOOP-11802.004.patch
>
>
> In {{DataXceiver#requestShortCircuitShm}}, we attempt to recover from some 
> errors by closing the {{DomainSocket}}.  However, this violates the invariant 
> that the domain socket should never be closed when it is being managed by the 
> {{DomainSocketWatcher}}.  Instead, we should call {{shutdown}} on the 
> {{DomainSocket}}.  When this bug hits, it terminates the 
> {{DomainSocketWatcher}} thread.
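The invariant can be illustrated with a toy state machine. This is purely hypothetical, not the real DomainSocket API: while a watcher manages the socket, callers may only shut it down; the watcher alone releases the descriptor.

```java
/**
 * Toy state machine illustrating the invariant above; not the real
 * DomainSocket API. While a watcher manages the socket, callers may only
 * shutdown() it; the watcher alone releases the descriptor.
 */
public class WatchedSocket {
  private boolean fdValid = true;    // descriptor still allocated
  private boolean shutdown = false;  // further I/O forbidden

  /** Safe for error handling: stops I/O, keeps the fd valid for the watcher. */
  public void shutdown() {
    this.shutdown = true;
  }

  /** Called by the watcher once it stops tracking the socket. */
  public void closeFromWatcher() {
    this.fdValid = false;
  }

  public boolean isUsable() {
    return fdValid && !shutdown;
  }

  public boolean isFdValid() {
    return fdValid;
  }
}
```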



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11854) Fix Typos in all the projects

2015-04-23 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509965#comment-14509965
 ] 

Ray Chiang commented on HADOOP-11854:
-

One other thing to be aware of: the last time I tried submitting a bunch of 
typo fixes to multiple projects, I ran into Jenkins issues with HADOOP-11320. 
It looks like the previous patch I submitted works okay (i.e., it gets a 
Jenkins update), but it could result in issues if this patch covers a lot of 
files.

> Fix Typos in all the projects
> -
>
> Key: HADOOP-11854
> URL: https://issues.apache.org/jira/browse/HADOOP-11854
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Priority: Minor
> Attachments: HADOOP-11854.suggestions.001.patch
>
>
> Recently I have seen that there are many JIRAs for fixing typos, and more 
> keep accumulating. Hence I want to plan this properly so that everything is 
> addressed.
> I am thinking we can fix them at the project level (or at most the package 
> level).
> My intention is to reduce the number of JIRAs about typos. One more 
> suggestion for reviewers: please don't commit class-level fixes; try to check 
> the whole project (or at most the package) for any such typos.
> Please correct me if I am wrong, and I will close this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10597) RPC Server signals backoff to clients when all request queues are full

2015-04-23 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HADOOP-10597:
-
Release Note: This change introduces a new configuration key used by the RPC 
server to decide whether to send a backoff signal to RPC clients when the RPC 
call queue is full. When the feature is enabled, the RPC server will no longer 
block on the processing of RPC requests when the call queue is full, which 
helps improve quality of service when the service is under heavy load. The 
configuration key has the format "ipc.#port#.backoff.enable", where #port# is 
the port number the RPC server listens on. For example, to enable the feature 
for the RPC server listening on port 8020, set ipc.8020.backoff.enable to true.

> RPC Server signals backoff to clients when all request queues are full
> --
>
> Key: HADOOP-10597
> URL: https://issues.apache.org/jira/browse/HADOOP-10597
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.8.0
>
> Attachments: HADOOP-10597-2.patch, HADOOP-10597-3.patch, 
> HADOOP-10597-4.patch, HADOOP-10597-5.patch, HADOOP-10597-6.patch, 
> HADOOP-10597.patch, MoreRPCClientBackoffEvaluation.pdf, 
> RPCClientBackoffDesignAndEvaluation.pdf
>
>
> Currently, if an application hits the NN too hard, RPC requests sit in a 
> blocking state, assuming the OS doesn't run out of connections. 
> Alternatively, RPC or the NN could throw a well-defined exception back to the 
> client, based on certain policies, when it is under heavy load; the client 
> would understand such an exception and perform exponential backoff, as 
> another implementation of RetryInvocationHandler.
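Client-side exponential backoff of the kind described could be computed as in this sketch. The base, cap, and jitter strategy are illustrative choices, not the actual RetryInvocationHandler policy:

```java
import java.util.Random;

/**
 * Sketch of client-side exponential backoff with jitter. The base, cap,
 * and jitter strategy here are illustrative choices, not the actual
 * RetryInvocationHandler policy.
 */
public class Backoff {
  private static final long BASE_MILLIS = 100;
  private static final long MAX_MILLIS = 30_000;

  /** Delay before retry (attempt + 1): base * 2^attempt, capped, jittered. */
  public static long delayMillis(int attempt, Random rnd) {
    long exp = BASE_MILLIS << Math.min(attempt, 20);  // clamp shift, avoid overflow
    long capped = Math.min(exp, MAX_MILLIS);
    // Jitter uniformly in [capped/2, capped) so clients don't retry in lockstep.
    return capped / 2 + (long) (rnd.nextDouble() * (capped / 2.0));
  }
}
```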



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10597) RPC Server signals backoff to clients when all request queues are full

2015-04-23 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1451#comment-1451
 ] 

Ming Ma commented on HADOOP-10597:
--

I have updated the release note. Thanks for your suggestions, Chris, Arpit and 
Steve.

> RPC Server signals backoff to clients when all request queues are full
> --
>
> Key: HADOOP-10597
> URL: https://issues.apache.org/jira/browse/HADOOP-10597
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.8.0
>
> Attachments: HADOOP-10597-2.patch, HADOOP-10597-3.patch, 
> HADOOP-10597-4.patch, HADOOP-10597-5.patch, HADOOP-10597-6.patch, 
> HADOOP-10597.patch, MoreRPCClientBackoffEvaluation.pdf, 
> RPCClientBackoffDesignAndEvaluation.pdf
>
>
> Currently, if an application hits the NN too hard, RPC requests sit in a 
> blocking state, assuming the OS doesn't run out of connections. 
> Alternatively, RPC or the NN could throw a well-defined exception back to the 
> client, based on certain policies, when it is under heavy load; the client 
> would understand such an exception and perform exponential backoff, as 
> another implementation of RetryInvocationHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-23 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14510031#comment-14510031
 ] 

Akira AJISAKA commented on HADOOP-11627:


Looks good to me, +1. The timeout looks unrelated to the patch. Committing this 
to trunk.

bq. The patch has 1 line(s) that end in whitespace.
I'll fix it by {{git apply -p1 /path/to/the/patch --whitespace=fix}}.

> Remove io.native.lib.available from trunk
> -
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627-011.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.
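The load-if-present behavior can be sketched as follows. The class and library name below are hypothetical (and the library is deliberately absent), though Hadoop's real NativeCodeLoader follows a broadly similar static-initializer pattern:

```java
/**
 * Sketch of load-native-if-present. The library name below is hypothetical
 * and deliberately absent; Hadoop's real NativeCodeLoader follows a broadly
 * similar static-initializer pattern.
 */
public class NativeLoader {
  private static final boolean NATIVE_AVAILABLE;

  static {
    boolean loaded = false;
    try {
      System.loadLibrary("hadoopnative_demo");  // hypothetical library name
      loaded = true;
    } catch (UnsatisfiedLinkError e) {
      // Library not on java.library.path: fall back to pure-Java code paths.
    }
    NATIVE_AVAILABLE = loaded;
  }

  public static boolean isNativeCodeLoaded() {
    return NATIVE_AVAILABLE;
  }
}
```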



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11627) Remove io.native.lib.available

2015-04-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11627:
---
 Component/s: native
 Summary: Remove io.native.lib.available  (was: Remove 
io.native.lib.available from trunk)
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)

> Remove io.native.lib.available
> --
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627-011.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11843) Make setting up the build environment easier

2015-04-23 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14510042#comment-14510042
 ] 

Arpit Agarwal commented on HADOOP-11843:


+1 from me too. I was able to get an environment on an Ubuntu 14 VM and build 
with {{mvn install package -Pnative}}. This is very cool!

> Make setting up the build environment easier
> 
>
> Key: HADOOP-11843
> URL: https://issues.apache.org/jira/browse/HADOOP-11843
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HADOOP-11843-2015-04-17-1612.patch, 
> HADOOP-11843-2015-04-17-2226.patch, HADOOP-11843-2015-04-17-2308.patch, 
> HADOOP-11843-2015-04-19-2206.patch, HADOOP-11843-2015-04-19-2232.patch, 
> HADOOP-11843-2015-04-22-1122.patch, HADOOP-11843-2015-04-23-1000.patch
>
>
> ( As discussed with [~aw] )
> In AVRO-1537 a docker based solution was created to setup all the tools for 
> doing a full build. This enables much easier reproduction of any issues and 
> getting up and running for new developers.
> This issue is to 'copy/port' that setup into the hadoop project in 
> preparation for the bug squash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11627) Remove io.native.lib.available

2015-04-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11627:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~brahmareddy] for the continued work.

> Remove io.native.lib.available
> --
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0
>
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627-011.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available

2015-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14510050#comment-14510050
 ] 

Hudson commented on HADOOP-11627:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7655 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7655/])
HADOOP-11627. Remove io.native.lib.available. Contributed by Brahma Reddy Battula. (aajisaka: rev ac281e3fc8681e9b421cb5fb442851293766e949)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/ZlibFactory.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/Bzip2Factory.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zlib/TestZlibCompressorDecompressor.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestConcatenatedCompressedInput.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/file/tfile/TestTFileSeqFileComparison.java
* hadoop-common-project/hadoop-common/src/site/markdown/DeprecatedProperties.md


> Remove io.native.lib.available
> ----------------------------------------
>
> Key: HADOOP-11627
> URL: https://issues.apache.org/jira/browse/HADOOP-11627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 3.0.0
>
> Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
> HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
> HADOOP-11627-007.patch, HADOOP-11627-008.patch, HADOOP-11627-009.patch, 
> HADOOP-11627-010.patch, HADOOP-11627-011.patch, HADOOP-11627.patch
>
>
> According to the discussion in HADOOP-8642, we should remove 
> {{io.native.lib.available}} from trunk, and always use native libraries if 
> they exist.





[jira] [Commented] (HADOOP-11802) DomainSocketWatcher thread terminates sometimes after there is an I/O error during requestShortCircuitShm

2015-04-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14510095#comment-14510095
 ] 

Hadoop QA commented on HADOOP-11802:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 26s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | javac | 7m 24s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 34s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 5m 29s | The applied patch generated 2 additional checkstyle issues. |
| {color:green}+1{color} | install | 1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 4m 46s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests | 22m 55s | Tests passed in hadoop-common. |
| {color:green}+1{color} | hdfs tests | 168m 3s | Tests passed in hadoop-hdfs. |
| | | 235m 9s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12727690/HADOOP-11802.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 416b843 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/6169/artifact/patchprocess/checkstyle-result-diff.txt |
| hadoop-common test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6169/artifact/patchprocess/testrun_hadoop-common.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6169/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/6169/testReport/ |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6169/console |


This message was automatically generated.

> DomainSocketWatcher thread terminates sometimes after there is an I/O error 
> during requestShortCircuitShm
> -
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11802.001.patch, HADOOP-11802.002.patch, 
> HADOOP-11802.003.patch, HADOOP-11802.004.patch
>
>
> In {{DataXceiver#requestShortCircuitShm}}, we attempt to recover from some 
> errors by closing the {{DomainSocket}}.  However, this violates the invariant 
> that the domain socket should never be closed when it is being managed by the 
> {{DomainSocketWatcher}}.  Instead, we should call {{shutdown}} on the 
> {{DomainSocket}}.  When this bug hits, it terminates the 
> {{DomainSocketWatcher}} thread.
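The invariant described above can be illustrated with a small sketch (Python for brevity; Hadoop's actual code is Java, and this is not Hadoop's implementation): shutdown() terminates I/O but keeps the file descriptor valid, so a watcher thread polling the descriptor observes a clean EOF and can deregister it itself, whereas close() would invalidate the descriptor while the watcher still manages it.

```python
import socket

# Hypothetical illustration of shutdown() vs close() on a socket that a
# watcher thread still manages. shutdown() ends I/O but keeps the file
# descriptor valid for the watcher to poll; close() would invalidate the
# fd out from under the watcher.

a, b = socket.socketpair()

# Shut down the erroring side: the peer sees EOF, but a's fd stays valid.
a.shutdown(socket.SHUT_RDWR)
assert a.fileno() != -1        # fd still usable by a poller/watcher
assert b.recv(16) == b''       # peer observes an orderly EOF

# Only after the watcher has deregistered the fd is close() safe.
a.close()
b.close()
```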





[jira] [Commented] (HADOOP-11807) add a lint mode to releasedocmaker

2015-04-23 Thread ramtin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14510195#comment-14510195
 ] 

ramtin commented on HADOOP-11807:
---------------------------------

I am thinking of the following JQL for finding errors:
{code}
project in (HADOOP,HDFS,MAPREDUCE,YARN) and fixVersion = xxx and resolution = 
Fixed and (component = EMPTY or assignee in (EMPTY) or "Release Note" is EMPTY)
{code}

I'm not sure how the lint mode should be invoked:
- Manually run the Python script with a --lintmode parameter and view the result in the 
console
- Run it through Maven by adding a new goal and view the result in the console
- Save the output as an ERRORS.md file and then generate HTML from that 
Markdown file
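A rough sketch of what such a lint pass might look like (hypothetical data model: the real releasedocmaker reads issues from the JIRA REST API, and the field names here are illustrative only):

```python
def lint_issue(issue):
    """Return (errors, warnings) for one resolved JIRA issue.

    `issue` is a hypothetical dict with 'key', 'components', 'assignee',
    'fix_versions', 'affects_versions', and 'release_note' fields; the
    real releasedocmaker would populate these from the JIRA REST API.
    """
    errors, warnings = [], []
    if not issue.get('components'):
        errors.append('%s: missing component' % issue['key'])
    if not issue.get('assignee'):
        errors.append('%s: missing assignee' % issue['key'])
    if not issue.get('release_note'):
        errors.append('%s: missing release note' % issue['key'])
    # One common version problem: a version listed as both fixed and affected.
    overlap = set(issue.get('fix_versions', [])) & \
              set(issue.get('affects_versions', []))
    for v in sorted(overlap):
        warnings.append('%s: version %s is both fixed and affected'
                        % (issue['key'], v))
    return errors, warnings
```

A --lintmode flag could then run this over every issue in the fix version and exit non-zero if any errors were collected.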

> add a lint mode to releasedocmaker
> ----------------------------------
>
> Key: HADOOP-11807
> URL: https://issues.apache.org/jira/browse/HADOOP-11807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Minor
>
> * check for missing components (error)
> * check for missing assignee (error)
> * check for common version problems (warning)
> * add an error message for missing release notes





[jira] [Commented] (HADOOP-11802) DomainSocketWatcher thread terminates sometimes after there is an I/O error during requestShortCircuitShm

2015-04-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14510346#comment-14510346
 ] 

Colin Patrick McCabe commented on HADOOP-11802:
-----------------------------------------------

The checkstyle plugin has some known issues right now. Committing to 2.7.1. 
Thanks for the reviews.

> DomainSocketWatcher thread terminates sometimes after there is an I/O error 
> during requestShortCircuitShm
> ---------------------------------------------------------------------------
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11802.001.patch, HADOOP-11802.002.patch, 
> HADOOP-11802.003.patch, HADOOP-11802.004.patch
>
>
> In {{DataXceiver#requestShortCircuitShm}}, we attempt to recover from some 
> errors by closing the {{DomainSocket}}.  However, this violates the invariant 
> that the domain socket should never be closed when it is being managed by the 
> {{DomainSocketWatcher}}.  Instead, we should call {{shutdown}} on the 
> {{DomainSocket}}.  When this bug hits, it terminates the 
> {{DomainSocketWatcher}} thread.





  1   2   >