[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560521#comment-14560521
 ] 

Kengo Seki commented on HADOOP-12031:
-

Thanks [~busbey]. As you pointed out, I realized that 002.patch does not work 
on Python 2.6.0 and earlier, because the except ... as ... statement is only 
supported from 2.6.1. So I'm planning:

* If xmllint is available, use it (e.g. xmllint --noout file.xml for a plain 
well-formedness check)
* If not, fall back to validating with Python, using sys.exc_info() instead of 
the except ... as ... statement. That makes the plugin work on Python 2.1+ 
(including 3.x). A sketch of the fallback is below.
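
For example, a minimal sketch of the Python fallback (the function name and the 
CLI wrapper are just illustrative, not part of the patch):

{code}
import sys
import xml.dom.minidom

def is_well_formed(path):
    # Parse only to check well-formedness; the parsed DOM is discarded.
    try:
        xml.dom.minidom.parse(path)
        return 1
    except Exception:
        # sys.exc_info() fetches the in-flight exception without the
        # "except ... as e" syntax, so this runs on Python 2.1+ and 3.x.
        # (1/0 instead of True/False for the same reason.)
        err = sys.exc_info()[1]
        sys.stderr.write("%s: %s\n" % (path, err))
        return 0

if __name__ == "__main__":
    if is_well_formed(sys.argv[1]):
        sys.exit(0)
    sys.exit(1)
{code}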


> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560517#comment-14560517
 ] 

Hadoop QA commented on HADOOP-11934:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 33s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  2s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 35s | Tests passed in 
hadoop-common. |
| | |  60m 24s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735523/HADOOP-11934-11.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6841/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6841/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6841/console |


This message was automatically generated.

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGr

[jira] [Commented] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560505#comment-14560505
 ] 

Hadoop QA commented on HADOOP-12011:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 42s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  7s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 56s | Tests passed in 
hadoop-common. |
| | |  61m 16s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735526/HADOOP-12011-HDFS-7285-v4.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 1299357 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6840/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6840/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6840/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6840/console |


This message was automatically generated.

> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12011-HDFS-7285-v1.patch, 
> HADOOP-12011-HDFS-7285-v3.patch, HADOOP-12011-HDFS-7285-v4.patch
>
>
> While working on native erasure coders, it was found useful to dump key 
> information like encode/decode matrix, erasures and etc. for the 
> encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560477#comment-14560477
 ] 

Hadoop QA commented on HADOOP-11242:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 24s | The applied patch generated  1 
new checkstyle issues (total was 218, now 218). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 31s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 162m 13s | Tests passed in hadoop-hdfs. 
|
| | | 227m 17s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731539/HADOOP-11242.003.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/console |


This message was automatically generated.

> Record the time of calling in tracing span of IPC server
> 
>
> Key: HADOOP-11242
> URL: https://issues.apache.org/jira/browse/HADOOP-11242
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-11242.002.patch, HADOOP-11242.003.patch, 
> HADOOP-11242.1.patch, HADOOP-11242.1.patch
>
>
> Current tracing span starts when the Call is put into callQueue. Recording 
> the time of calling is useful to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560464#comment-14560464
 ] 

Chris Nauroth commented on HADOOP-11934:


Thanks for addressing the feedback, Larry.  A few more notes:

# {{AbstractJavaKeyStoreProvider#bytesToChars}}: This is a minor nit.  The 
declaration and initialization of {{pass}} can be condensed to one line, i.e. 
{{String pass = ...}}.
# {{JavaKeyStoreProvider#initFileSystem}}: Please add the {{@Override}} 
annotation.
# {{LocalJavaKeyStoreProvider}}: The class JavaDoc mentions the "jceks" scheme. 
 Should that be changed to "localjceks"?
# {{LocalJavaKeyStoreProvider#flush}}: I'm sorry I didn't spot this earlier, 
but unfortunately, the JDK does not implement a mapping of POSIX permissions to 
NTFS ACLs for its {{Files#setPosixFilePermissions}} and 
{{Files#getPosixFilePermissions}} methods.  It just throws an 
{{UnsupportedOperationException}} if we try to run these methods on Windows.  
(See the test failure below.)  Fortunately, we do implement that mapping in 
Hadoop!  :-)  To make this Windows-compatible, I think we're going to need to 
explore using {{org.apache.hadoop.fs.FileUtil#setPermission}} for the set 
operation; see the sketch after the test output.  The get operation is 
unfortunately more awkward, involving a combination of 
{{org.apache.hadoop.fs.Stat}}, {{org.apache.hadoop.fs.FileUtil#execCommand}} 
and {{org.apache.hadoop.util.Shell#getGetPermissionCommand}}.  The high-level 
flow for this is in {{org.apache.hadoop.fs.RawLocalFileSystem}}.  
Alternatively, maybe you can think of a simpler way to do a special case for 
Windows.  Let me know.

{code}
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.031 sec <<< 
FAILURE! - in org.apache.hadoop.security.alias.TestCredentialProviderFactory
testLocalJksProvider(org.apache.hadoop.security.alias.TestCredentialProviderFactory)
  Time elapsed: 0.031 sec  <<< ERROR!
java.lang.UnsupportedOperationException: null
at java.nio.file.Files.setPosixFilePermissions(Files.java:1991)
at 
org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider.flush(LocalJavaKeyStoreProvider.java:149)
at 
org.apache.hadoop.security.alias.TestCredentialProviderFactory.checkSpecificProvider(TestCredentialProviderFactory.java:148)
at 
org.apache.hadoop.security.alias.TestCredentialProviderFactory.testLocalJksProvider(TestCredentialProviderFactory.java:220)
{code}
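
If it helps, here is a minimal sketch of what the set side could look like via 
{{FileUtil}} (the class name, method name, and the 0600 mode are illustrative, 
not taken from the patch):

{code}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.permission.FsPermission;

public class LocalKeyStorePermissionSketch {
  // Illustrative replacement for the Files#setPosixFilePermissions call in
  // LocalJavaKeyStoreProvider#flush. FileUtil.setPermission goes through
  // Hadoop's own permission code, which maps POSIX bits to NTFS ACLs on
  // Windows instead of throwing UnsupportedOperationException.
  static void setKeyStorePermissions(File file) throws IOException {
    FileUtil.setPermission(file, new FsPermission((short) 0600));
  }
}
{code}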


> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.sec

[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560447#comment-14560447
 ] 

Sean Busbey commented on HADOOP-12031:
--

{quote}
One concern is, this plugin depends on Python currently. I assume we can use 
Python in most build environment, but please advise if there is a more portable 
and not-so-hard way to validate XML.
{quote}

Relying on Python is problematic. If you stick with it, you'll need to detect 
and gracefully degrade when the version you need isn't present.

If we're just checking well-formedness, how about using xmllint?

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12011:
---
Attachment: HADOOP-12011-HDFS-7285-v4.patch

Corrected the issues reported by checkstyle.

> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12011-HDFS-7285-v1.patch, 
> HADOOP-12011-HDFS-7285-v3.patch, HADOOP-12011-HDFS-7285-v4.patch
>
>
> While working on native erasure coders, it was found useful to dump key 
> information like encode/decode matrix, erasures and etc. for the 
> encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560433#comment-14560433
 ] 

Hadoop QA commented on HADOOP-12011:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 47s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 46s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  6s | The applied patch generated  6 
new checkstyle issues (total was 0, now 6). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  3s | Tests passed in 
hadoop-common. |
| | |  60m 26s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735520/HADOOP-12011-HDFS-7285-v3.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 1299357 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/console |


This message was automatically generated.

> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12011-HDFS-7285-v1.patch, 
> HADOOP-12011-HDFS-7285-v3.patch
>
>
> While working on native erasure coders, it was found useful to dump key 
> information like encode/decode matrix, erasures and etc. for the 
> encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Patch Available  (was: Open)

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.se

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Attachment: HADOOP-11934-11.patch

Addressed review comments.

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   a

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560421#comment-14560421
 ] 

Larry McCay commented on HADOOP-11934:
--

Ignore those last results - an incorrectly run test-patch.sh messed up the 
source, so I regenerated the patch.


> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
> HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
> HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
> HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.sec

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560420#comment-14560420
 ] 

Hadoop QA commented on HADOOP-11934:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 12s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 48s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 15s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 59s | Tests passed in 
hadoop-common. |
| | |  64m  6s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735519/HADOOP-11934-11.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6837/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6837/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6837/console |


This message was automatically generated.

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
> HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
> HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
> HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260

[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Status: Patch Available  (was: Open)

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Timothy St. Clair
>  Labels: BB2015-05-TBR, maven
> Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.1.patch, 
> HADOOP-9613.2.patch, HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Attachment: HADOOP-9613.3.patch

Updating the patch to fix test failures.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Timothy St. Clair
>Assignee: Timothy St. Clair
>  Labels: BB2015-05-TBR, maven
> Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.1.patch, 
> HADOOP-9613.2.patch, HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Attachment: (was: HADOOP-11934-11.patch)

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
> HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
> HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
> HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.security.UserGrou

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Open  (was: Patch Available)

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at org.apache.hadoop.se

[jira] [Commented] (HADOOP-11952) Native compilation on Solaris fails on Yarn due to use of FTS

2015-05-26 Thread Malcolm Kavalsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560381#comment-14560381
 ] 

Malcolm Kavalsky commented on HADOOP-11952:
---

I have already ported it to the ftw library (it works on Hadoop 2.2, on both 
SPARC and Intel).

I'll send you the code.




> Native compilation on Solaris fails on Yarn due to use of FTS
> -
>
> Key: HADOOP-11952
> URL: https://issues.apache.org/jira/browse/HADOOP-11952
> Project: Hadoop Common
>  Issue Type: Sub-task
> Environment: Solaris 11.2
>Reporter: Malcolm Kavalsky
>Assignee: Alan Burlison
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Compiling the Yarn Node Manager results in "fts" not found. On Solaris we 
> have an alternative ftw with similar functionality.
> This is isolated to a single file container-executor.c
> Note that this will just fix the compilation error. A more serious issue is 
> that Solaris does not support cgroups as Linux does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12011:
---
Attachment: HADOOP-12011-HDFS-7285-v3.patch

Thanks Uma for the good comments! Updated the patch accordingly.
Also moved the utilities into the rawcoder package, because they're needed there 
to dump data during the actual encode/decode processing in the native coders.

> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12011-HDFS-7285-v1.patch, 
> HADOOP-12011-HDFS-7285-v3.patch
>
>
> While working on native erasure coders, it was found useful to dump key 
> information like encode/decode matrix, erasures and etc. for the 
> encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11894) Bump the version of HTrace to 3.2.0-incubating

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560378#comment-14560378
 ] 

Hadoop QA commented on HADOOP-11894:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  0s | Site still builds. |
| {color:red}-1{color} | checkstyle |   3m 33s | The applied patch generated  1 
new checkstyle issues (total was 118, now 118). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 20s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 163m 11s | Tests passed in hadoop-hdfs. 
|
| | | 235m 59s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735483/HADOOP-11894.003.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / cdbd66b |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/console |


This message was automatically generated.

> Bump the version of HTrace to 3.2.0-incubating
> --
>
> Key: HADOOP-11894
> URL: https://issues.apache.org/jira/browse/HADOOP-11894
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch, 
> HADOOP-11894.003.patch
>
>
> * update pom.xml
> * update documentation
> * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
> {{addKVAnnotation(String key, String value)}}
> * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
> {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Patch Available  (was: Open)

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.se

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Attachment: HADOOP-11934-11.patch

Addresses [~cnauroth]'s review comments.

I will file a separate JIRA for issue #5, as suggested.

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserG

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Open  (was: Patch Available)

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
> HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
> HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
> HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInforma

[jira] [Comment Edited] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560337#comment-14560337
 ] 

Allen Wittenauer edited comment on HADOOP-12027 at 5/27/15 3:35 AM:


This is way simpler than I thought:

CMakeLists.txt needs to be changed to:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
  # bzip2 detection fails on OS X for some reason here
ELSE()
  set_find_shared_library_version("1")
ENDIF()
find_package(BZip2 QUIET)
{code}

and then setting env vars, etc., appears to work as expected (e.g., 
BZIP2_PREFIX_DIR=/usr/local/opt/bzip2 should make CMake pick it up from 
Homebrew).
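
For example, a build could point CMake at a Homebrew bzip2 like this (a 
sketch; -Pdist,native is the usual native build profile):

{code}
# illustrative: build the native bits against the Homebrew bzip2
BZIP2_PREFIX_DIR=/usr/local/opt/bzip2 mvn package -Pdist,native -DskipTests
{code}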


was (Author: aw):
This is way simpler than what I thought:

The CMakeList.txt needs to get changed to be:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
  # No effect. bzip2 not built as a shared lib 
ELSE()
  set_find_shared_library_version("1")
ENDIF()
find_package(BZip2 QUIET)
{code}

and then it appears that setting env vars, etc, works as expected.  (e.g., 
BZIP2_PREFIX_DIR=/usr/local/opt/bzip2 should make cmake pick it up from 
homebrew)

> enable bzip2 on OS X
> 
>
> Key: HADOOP-12027
> URL: https://issues.apache.org/jira/browse/HADOOP-12027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Allen Wittenauer
>
> OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
> expose the bzip2 headers+lib location to CMake like we do for snappy, 
> OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
> so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560348#comment-14560348
 ] 

Allen Wittenauer commented on HADOOP-12027:
---

OK, the env var isn't needed.  It appears that our hack for forcing shared 
library versions doesn't work for bzip2 on OS X.

> enable bzip2 on OS X
> 
>
> Key: HADOOP-12027
> URL: https://issues.apache.org/jira/browse/HADOOP-12027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Allen Wittenauer
>
> OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
> expose the bzip2 headers+lib location to CMake like we do for snappy, 
> OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
> so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560337#comment-14560337
 ] 

Allen Wittenauer edited comment on HADOOP-12027 at 5/27/15 3:19 AM:


This is way simpler than I thought:

CMakeLists.txt needs to be changed to:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
  # No effect. bzip2 not built as a shared lib 
ELSE()
  set_find_shared_library_version("1")
ENDIF()
find_package(BZip2 QUIET)
{code}

and then setting env vars, etc., appears to work as expected (e.g., 
BZIP2_PREFIX_DIR=/usr/local/opt/bzip2 should make CMake pick it up from 
Homebrew).


was (Author: aw):
This is way simpler than what I thought:

The CMakeList.txt needs to get changed to be:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
  # No effect. bzip2 not built as a shared lib 
ELSE()
  set_find_shared_library_version("1")
ENDIF()
find_package(BZip2 QUIET)
{code}

and then it appears that setting env vars, etc, works as expected.

> enable bzip2 on OS X
> 
>
> Key: HADOOP-12027
> URL: https://issues.apache.org/jira/browse/HADOOP-12027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Allen Wittenauer
>
> OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
> expose the bzip2 headers+lib location to CMake like we do for snappy, 
> OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
> so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560337#comment-14560337
 ] 

Allen Wittenauer commented on HADOOP-12027:
---

This is way simpler than I thought:

CMakeLists.txt needs to be changed to:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
  # No effect. bzip2 not built as a shared lib 
ELSE()
  set_find_shared_library_version("1")
ENDIF()
find_package(BZip2 QUIET)
{code}

and then setting env vars, etc., appears to work as expected.

> enable bzip2 on OS X
> 
>
> Key: HADOOP-12027
> URL: https://issues.apache.org/jira/browse/HADOOP-12027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Allen Wittenauer
>
> OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
> expose the bzip2 headers+lib location to CMake like we do for snappy, 
> OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
> so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12027:
--
Summary: enable bzip2 on OS X  (was: need a maven property for bzip2 
headers)

> enable bzip2 on OS X
> 
>
> Key: HADOOP-12027
> URL: https://issues.apache.org/jira/browse/HADOOP-12027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Allen Wittenauer
>
> OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
> expose the bzip2 headers+lib location to CMake like we do for snappy, 
> OpenSSL, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12027:
--
Description: OS X Mavericks + homebrew could compile bzip2 bits if there 
was a way to expose the bzip2 headers+lib location to CMake like we do for 
snappy, OpenSSL, etc.  Additionally, bzip2 only comes as a static library on 
Darwin, so we need to escape out the forced shared library bit.  (was: OS X 
Mavericks + homebrew could compile bzip2 bits if there was a way to expose the 
bzip2 headers+lib location to CMake like we do for snappy, OpenSSL, etc.)

> enable bzip2 on OS X
> 
>
> Key: HADOOP-12027
> URL: https://issues.apache.org/jira/browse/HADOOP-12027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Allen Wittenauer
>
> OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
> expose the bzip2 headers+lib location to CMake like we do for snappy, 
> OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
> so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-26 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12036:
-

 Summary: Consolidate all of the cmake extensions in one directory
 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer


Rather than have a half-dozen redefinitions, custom extensions, etc., we should 
move them all to one location so that the CMake environment is consistent 
across the various native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560229#comment-14560229
 ] 

Kengo Seki commented on HADOOP-12031:
-

The whitespace plugin does not seem to detect the trailing whitespace at line 
33 of the first patch, but I don't know why yet.
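
For reference, a manual spot-check of the kind the plugin automates (a sketch; 
the patch file name is illustrative):

{code}
# list added lines that end in whitespace, with line numbers
grep -nE '^\+.*[[:space:]]$' HADOOP-12031.001.patch
{code}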

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560218#comment-14560218
 ] 

Larry McCay commented on HADOOP-11934:
--

Hi [~cnauroth] - thank you for the detailed review!
I will get right on it.

> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
> HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
> HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
> HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initia

[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560189#comment-14560189
 ] 

Hudson commented on HADOOP-11969:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #209 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/209/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java


> ThreadLocal initialization in several classes is not thread safe
> 
>
> Key: HADOOP-11969
> URL: https://issues.apache.org/jira/browse/HADOOP-11969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>  Labels: thread-safety
> Fix For: 2.8.0
>
> Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
> HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch
>
>
> Right now, the thread-local factories for the encoder/decoder in Text are 
> not marked final. This means they end up with a static initializer that is 
> not guaranteed to have finished running before the members are visible. 
> Under heavy contention, this means that during initialization some users 
> will get an NPE:
> {code}
> (2015-05-05 08:58:03.974 : solr_server_log.log) 
>  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
>   at org.apache.hadoop.io.Text.decode(Text.java:406)
>   at org.apache.hadoop.io.Text.decode(Text.java:389)
>   at org.apache.hadoop.io.Text.toString(Text.java:280)
>   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
>   at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
>
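
To illustrate the fix: marking the factory {{final}} ensures it is fully 
initialized before it becomes visible to other threads (a minimal sketch 
mirroring the pattern described above, not the exact Hadoop code):

{code}
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;

public class SafePublicationSketch {
  // 'final' guarantees the factory is safely published once the class
  // initializer completes, avoiding the NPE described above.
  private static final ThreadLocal<CharsetDecoder> DECODER_FACTORY =
      new ThreadLocal<CharsetDecoder>() {
        @Override
        protected CharsetDecoder initialValue() {
          return Charset.forName("UTF-8").newDecoder();
        }
      };

  public static CharsetDecoder getDecoder() {
    return DECODER_FACTORY.get();
  }
}
{code}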

[jira] [Updated] (HADOOP-11894) Bump the version of HTrace to 3.2.0-incubating

2015-05-26 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11894:
--
Attachment: HADOOP-11894.003.patch

I attached an updated patch. Thanks, [~cmccabe].

> Bump the version of HTrace to 3.2.0-incubating
> --
>
> Key: HADOOP-11894
> URL: https://issues.apache.org/jira/browse/HADOOP-11894
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch, 
> HADOOP-11894.003.patch
>
>
> * update pom.xml
> * update documentation
> * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
> {{addKVAnnotation(String key, String value)}}
> * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
> {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560173#comment-14560173
 ] 

Chris Nauroth commented on HADOOP-11934:


Hi [~lmccay].  This looks great overall!  Here are a few comments, mostly minor.

# Both {{AbstractJavaKeyStoreProvider}} and {{LocalJavaKeyStoreProvider}} have 
copied some class-level JavaDocs from {{JavaKeyStoreProvider}}.  This isn't 
completely accurate, because those comments talk about pointing to different 
{{FileSystem}} implementations.  Could you please revise this?
# {{AbstractJavaKeyStoreProvider}} constructor: The trunk version of the 
following code would trim the password.  Do we need to keep that?
{code}
  try (InputStream is = pwdFile.openStream()) {
password = IOUtils.toCharArray(is);
  }
{code}
{code}
  try (InputStream is = pwdFile.openStream()) {
password = IOUtils.toString(is).trim().toCharArray();
  }
{code}
# {{AbstractJavaKeyStoreProvider#bytesToChars}}: The existing trunk code used 
{{Charsets#UTF_8}} to avoid the need to handle 
{{UnsupportedEncodingException}}.  Shall we keep it the same, or was this an 
intentional change?
# {{AbstractJavaKeyStoreProvider#getPathAsString}}: This has the same 
implementation in both subclasses.  Would it make sense to refactor that up to 
the base class as a {{protected final}} method?
# {{JavaKeyStoreProvider#getOutputStreamForKeystore}}: This isn't a new thing 
with your patch, but I wanted to mention that this overload of the 
{{FileSystem.create}} method is not atomic.  First it creates the file with 
default permissions (usually 644), and then setting the requested permissions 
is done separately.  In the case of HDFS, this is 2 separate RPCs.  That means 
there is a brief window in which the file has default permissions.  If the 
process dies after the first RPC but before the second, then the permissions 
will never be changed.  To do this atomically, we'd need to switch to one of 
the other (much uglier) overloads of {{FileSystem#create}}.  If you think 
changing this would be a good improvement, then I recommend queuing up a 
separate jira for that change, since we already have a mid-sized patch going 
here.
# {{JavaKeyStoreProvider}} and {{LocalJavaKeyStoreProvider}}: Please add the 
{{@Override}} annotation on all applicable methods.
# {{TestCredentialProviderFactory}}: After this patch, the tests fail on 
Windows, due to invalid string concatenation of a test directory that contains 
'\' characters, which are not valid URI characters.  (See below.)  There have 
been similar patches in the past to fix these tests on Windows, so you could 
look back at those for inspiration on how to fix this.  It will probably 
involve some use of {{Path#toUri}}, which yields all '/' separators and 
therefore valid URI syntax; a rough sketch follows the stack trace below.

{code}
java.io.IOException: Bad configuration of 
hadoop.security.credential.provider.path at 
jceks://fileC:\hdc\hadoop-common-project\hadoop-common\target\test\data\creds/test.jks
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.parseAuthority(URI.java:3167)
at java.net.URI$Parser.parseHierarchical(URI.java:3078)
at java.net.URI$Parser.parse(URI.java:3034)
at java.net.URI.<init>(URI.java:595)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:55)
at 
org.apache.hadoop.security.alias.TestCredentialProviderFactory.testFactory(TestCredentialProviderFactory.java:58)
{code}
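
As a rough illustration of that {{Path#toUri}} approach (a sketch only; the 
directory and provider path are illustrative):

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

public class ProviderPathFixSketch {
  public static void main(String[] args) {
    // A Windows-style test directory; raw backslashes are not legal in a URI.
    String testDir = "C:\\hdc\\hadoop-common-project\\hadoop-common\\target\\test\\data\\creds";
    // On Windows, Path normalizes the separators to '/', giving valid URI syntax.
    URI uri = new Path(testDir).toUri();
    String providerPath = "jceks://file" + uri.getPath() + "/test.jks";
    System.out.println(providerPath);
  }
}
{code}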


> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
> Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
> HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
> HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
> HADOOP-11934.009.patch, HADOOP-11934.010.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 

[jira] [Commented] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560132#comment-14560132
 ] 

Hadoop QA commented on HADOOP-12035:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 16s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  5s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 24s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735455/HADOOP-12035.001.patch 
|
| Optional Tests | shellcheck |
| git revision | trunk / cdbd66b |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6834/console |


This message was automatically generated.

> shellcheck plugin displays a wrong version potentially
> --
>
> Key: HADOOP-12035
> URL: https://issues.apache.org/jira/browse/HADOOP-12035
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie, test-patch
> Attachments: HADOOP-12035.001.patch
>
>
> In dev-support/test-patch.d/shellcheck.sh:
> {code}
> SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print 
> $NF}')
> {code}
> it should be 
> {code}
> SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560131#comment-14560131
 ] 

Hadoop QA commented on HADOOP-12031:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  4s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | XML well-formedness |   0m  0s | The patch has no 
ill-formed XML file. |
| | |   0m 23s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735458/HADOOP-12031.002.patch 
|
| Optional Tests | shellcheck |
| git revision | trunk / cdbd66b |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6833/console |


This message was automatically generated.

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560128#comment-14560128
 ] 

Hadoop QA commented on HADOOP-12031:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6833/console in case of 
problems.

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560129#comment-14560129
 ] 

Hadoop QA commented on HADOOP-12035:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6834/console in case of 
problems.

> shellcheck plugin displays a wrong version potentially
> --
>
> Key: HADOOP-12035
> URL: https://issues.apache.org/jira/browse/HADOOP-12035
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie, test-patch
> Attachments: HADOOP-12035.001.patch
>
>
> In dev-support/test-patch.d/shellcheck.sh:
> {code}
> SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print 
> $NF}')
> {code}
> it should be 
> {code}
> SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12031:

Attachment: HADOOP-12031.002.patch

-02:

* remove trailing whitespace
* fix the wrong subsystem display (see above)

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-26 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12035:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

> shellcheck plugin displays a wrong version potentially
> --
>
> Key: HADOOP-12035
> URL: https://issues.apache.org/jira/browse/HADOOP-12035
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie, test-patch
> Attachments: HADOOP-12035.001.patch
>
>
> In dev-support/test-patch.d/shellcheck.sh:
> {code}
> SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print 
> $NF}')
> {code}
> it should be 
> {code}
> SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-26 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12035:

Attachment: HADOOP-12035.001.patch

Attaching a patch.
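
For reference, the corrected line would read as follows (a sketch assuming 
${SHELLCHECK} holds the resolved binary path, as the other variables do):

{code}
SHELLCHECK_VERSION=$(${SHELLCHECK} --version | ${GREP} version: | ${AWK} '{print $NF}')
{code}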

> shellcheck plugin displays a wrong version potentially
> --
>
> Key: HADOOP-12035
> URL: https://issues.apache.org/jira/browse/HADOOP-12035
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kengo Seki
>Priority: Trivial
>  Labels: newbie, test-patch
> Attachments: HADOOP-12035.001.patch
>
>
> In dev-support/test-patch.d/shellcheck.sh:
> {code}
> SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print 
> $NF}')
> {code}
> it should be 
> {code}
> SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560113#comment-14560113
 ] 

Allen Wittenauer commented on HADOOP-11937:
---

Fixing the bzip2 header locations to be definable (and consistent with the rest 
of the native code) is minor compared to the other patches standing in line.  
I say that as someone who has the functionality in this JIRA implemented as 
part of another patch.

> Guarantee a full build of all native code during pre-commit.
> 
>
> Key: HADOOP-11937
> URL: https://issues.apache.org/jira/browse/HADOOP-11937
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Chris Nauroth
>
> Some of the native components of the build are considered optional and either 
> will not build at all without passing special flags to Maven or will allow a 
> build to proceed if dependencies are missing from the build machine.  If 
> these components do not get built, then pre-commit isn't really providing 
> full coverage of the build.  This issue proposes to update test-patch.sh so 
> that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560094#comment-14560094
 ] 

Hadoop QA commented on HADOOP-12031:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6832/console in case of 
problems.

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560096#comment-14560096
 ] 

Hadoop QA commented on HADOOP-12031:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  4s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no ill-formed 
XML file. |
| | |   0m 23s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735441/HADOOP-12031.001.patch 
|
| Optional Tests | shellcheck |
| git revision | trunk / cdbd66b |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6832/console |


This message was automatically generated.

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12031:

Assignee: Kengo Seki
  Labels: newbie test-patch  (was: newbie)
  Status: Patch Available  (was: Open)

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12031:

Component/s: build

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12031:

Attachment: HADOOP-12031.001.patch

Attaching a patch. One concern is that this plugin currently depends on Python. 
I assume Python is available in most build environments, but please advise if 
there is a more portable and not-so-hard way to validate XML.
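
For reference, the validation itself is only a well-formedness parse. Here is 
the equivalent check sketched in Java, purely for illustration (the plugin 
itself invokes an interpreter from test-patch.sh):

{code}
import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class CheckXml {
  public static void main(String[] args) throws Exception {
    // A non-validating parse is enough: if the file is not well-formed
    // XML, parse() throws a SAXParseException and we exit non-zero.
    SAXParserFactory factory = SAXParserFactory.newInstance();
    try {
      factory.newSAXParser().parse(new File(args[0]), new DefaultHandler());
    } catch (Exception e) {
      System.err.println(args[0] + ": " + e.getMessage());
      System.exit(1);
    }
  }
}
{code}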

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>  Labels: newbie
> Attachments: HADOOP-12031.001.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, doesn't work on non-x86

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560059#comment-14560059
 ] 

Hadoop QA commented on HADOOP-11505:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 21s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 35s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   2m 11s | Tests passed in 
hadoop-mapreduce-client-nativetask. |
| | |  37m 37s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735425/HADOOP-11505.003.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7dba700 |
| hadoop-mapreduce-client-nativetask test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6831/artifact/patchprocess/testrun_hadoop-mapreduce-client-nativetask.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6831/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6831/console |


This message was automatically generated.

> hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, 
> doesn't work on non-x86
> --
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, doesn't work on non-x86

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559990#comment-14559990
 ] 

Hadoop QA commented on HADOOP-11505:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 49s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:red}-1{color} | javac |   6m 24s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735420/HADOOP-11505.002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7dba700 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6830/console |


This message was automatically generated.

> hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, 
> doesn't work on non-x86
> --
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12021) Augmenting Configuration to accomodate

2015-05-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559985#comment-14559985
 ] 

Andrew Wang commented on HADOOP-12021:
--

Gotcha, thanks Lewis. One more question: do you think loading a Configuration is 
easier than doing this yourself? I wrote up a little snippet which might be a 
nice starting point.

{code}
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.apache.hadoop.conf.Configuration;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class Hadooper {

  public static void main(String[] args) throws Exception {
    // Load core-default.xml off the classpath, the same way
    // Configuration itself finds it.
    ClassLoader cl = Configuration.class.getClassLoader();
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    DocumentBuilder builder = factory.newDocumentBuilder();
    Document document =
        builder.parse(cl.getResourceAsStream("core-default.xml"));

    // Walk the top-level <property> elements and print each property's
    // name and description.
    NodeList nodeList = document.getDocumentElement().getChildNodes();
    for (int i = 0; i < nodeList.getLength(); i++) {
      Node node = nodeList.item(i);
      if (node instanceof Element) {
        if (node.getNodeName().equals("property")) {
          Element property = (Element) node;
          System.out.println(
              property.getElementsByTagName("name").item(0).getTextContent());
          System.out.println(property.getElementsByTagName("description")
              .item(0).getTextContent());
        }
      }
    }
  }
}
{code}

> Augmenting Configuration to accomodate 
> 
>
> Key: HADOOP-12021
> URL: https://issues.apache.org/jira/browse/HADOOP-12021
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: conf
>Reporter: Lewis John McGibbney
>Priority: Minor
> Fix For: 1.3.0, 2.8.0
>
> Attachments: Screen Shot 2015-05-26 at 2.22.26 PM (2).png
>
>
> Over on the 
> [common-dev|http://www.mail-archive.com/common-dev%40hadoop.apache.org/msg16099.html]
>  ML I explained a use case which requires me to obtain the value of the 
> Configuration description tags.
> [~cnauroth] advised me to raise the issue to Jira for discussion.
> I am happy to provide a patch so that the description values are parsed out 
> of the various XML files and stored, and also that the Configuration class is 
> augmented to provide accessors to accommodate the use case.
> I wanted to find out what people think about this one and whether I should 
> check out Hadoop source and submit a patch. If you guys could provide some 
> advice it would be appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11505) hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, doesn't work on non-x86

2015-05-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11505:
--
Attachment: (was: HADOOP-11505.002.patch)

> hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, 
> doesn't work on non-x86
> --
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11505) hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, doesn't work on non-x86

2015-05-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11505:
--
Attachment: HADOOP-11505.003.patch

> hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, 
> doesn't work on non-x86
> --
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11505) hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, doesn't work on non-x86

2015-05-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11505:
--
Attachment: HADOOP-11505.002.patch

> hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, 
> doesn't work on non-x86
> --
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.002.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, doesn't work on non-x86

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559954#comment-14559954
 ] 

Colin Patrick McCabe commented on HADOOP-11505:
---

It seems that the code was written with "bswap" (byte swap) used where 
"network-to-host" conversion (ntohl, ntohs) was actually intended.  On a 
big-endian architecture, no byte swapping is needed in these cases.  There are 
already standard functions for converting between network and host byte order, 
such as be32toh and be64toh, so we should use those.
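
To illustrate the difference in Java terms (just a comparison sketch; the 
actual fix is in the native code):

{code}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianExample {
  public static void main(String[] args) {
    byte[] wire = {0x00, 0x00, 0x00, 0x2a};  // 42 in network (big-endian) order

    // Correct on any host: declare the byte order explicitly, which is
    // what a be32toh-style conversion does in native code.
    int v = ByteBuffer.wrap(wire).order(ByteOrder.BIG_ENDIAN).getInt();
    System.out.println(v);  // prints 42 on every architecture

    // Incorrect in general: an unconditional byte swap (the bswap
    // pattern) only yields the right answer on little-endian hosts.
    int raw = ByteBuffer.wrap(wire).order(ByteOrder.nativeOrder()).getInt();
    int swapped = Integer.reverseBytes(raw);
    System.out.println(swapped);  // 42 only if nativeOrder() is little-endian
  }
}
{code}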

> hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, 
> doesn't work on non-x86
> --
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11505.001.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11505) hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, doesn't work on non-x86

2015-05-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11505:
--
Summary: hadoop-mapreduce-client-nativetask uses bswap where be32toh is 
needed, doesn't work on non-x86  (was: hadoop-mapreduce-client-nativetask fails 
to use x86 optimizations in some cases)

> hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, 
> doesn't work on non-x86
> --
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11505.001.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11894) Bump the version of HTrace to 3.2.0-incubating

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559927#comment-14559927
 ] 

Colin Patrick McCabe commented on HADOOP-11894:
---

{code}
try (TraceScope ts = (TraceScope) Trace.startSpan("FsShell", Sampler.ALWAYS)) {
{code}
I don't think this cast is needed...

+1 after that's fixed
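
That is, assuming the HTrace 3.x API where {{TraceScope}} implements 
{{Closeable}}, it should presumably become something like:

{code}
import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class FsShellTraceExample {
  public static void main(String[] args) {
    // Trace.startSpan already returns a TraceScope, which implements
    // Closeable, so try-with-resources works without any cast.
    try (TraceScope ts = Trace.startSpan("FsShell", Sampler.ALWAYS)) {
      // ... traced work ...
    }
  }
}
{code}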

> Bump the version of HTrace to 3.2.0-incubating
> --
>
> Key: HADOOP-11894
> URL: https://issues.apache.org/jira/browse/HADOOP-11894
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch
>
>
> * update pom.xml
> * update documentation
> * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
> {{addKVAnnotation(String key, String value)}}
> * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
> {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559921#comment-14559921
 ] 

Colin Patrick McCabe commented on HADOOP-11887:
---

The run-test macro in the ant build is clever.  However, it seems that these 
unit tests will now fail if the erasure coding library is not present.  Can 
you add a check so that the native unit test binary is not executed if 
it doesn't exist?

+1 aside from that

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-v1.patch, HADOOP-11887-v2.patch, 
> HADOOP-11887-v3.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559914#comment-14559914
 ] 

Chris Nauroth commented on HADOOP-11984:


Ah-ha!  There is our root cause:

{code}
 [exec] Running mkdir test.build.data: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/target/test/data/{1..4}
 test.build.dir: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/target/test-dir/{1..4}
 hadoop.tmp.dir: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/target/test/{1..4}
{code}

It looks like Jenkins (and only Jenkins so far) is running a shell that isn't 
expanding {{\{1..4\}}} in the loop.  I don't have enough historical shell 
knowledge here to know if only certain shells promise support for this syntax.  
I suppose I could change this to call {{seq}}, which is part of coreutils, but 
then I don't really know the portability promises on that one either.

[~aw], can I get some advice from you?  What would be the most portable way to 
write the following kind of loop?

{code}
for i in {1..4}; do mkdir -p myDirectory/$i; done
{code}
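
(For the record, a plain counter loop, 
{{i=1; while [ "$i" -le 4 ]; do mkdir -p myDirectory/$i; i=$((i+1)); done}}, 
would sidestep this entirely, since brace expansion is a bash/ksh/zsh extension 
rather than POSIX sh, but I'd still appreciate advice on the preferred style.)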


> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch, HADOOP-11984.010.patch, HADOOP-11984.011.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12021) Augmenting Configuration to accomodate

2015-05-26 Thread Lewis John McGibbney (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lewis John McGibbney updated HADOOP-12021:
--
Attachment: Screen Shot 2015-05-26 at 2.22.26 PM (2).png

Hi [~andrew.wang], please see the attached screenshot, which illustrates exactly 
what I am trying to do.
You will see on the right-hand side that the property description is not 
available; the current values are duplicates of the property name.
This is because the code currently does not parse out the values for the 
property descriptions.
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2661-L2692

I wanted to see if I could @Override Configuration#loadResource(Properties 
properties, Resource wrapper, boolean quiet), but I can't.

> Augmenting Configuration to accomodate 
> 
>
> Key: HADOOP-12021
> URL: https://issues.apache.org/jira/browse/HADOOP-12021
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: conf
>Reporter: Lewis John McGibbney
>Priority: Minor
> Fix For: 1.3.0, 2.8.0
>
> Attachments: Screen Shot 2015-05-26 at 2.22.26 PM (2).png
>
>
> Over on the 
> [common-dev|http://www.mail-archive.com/common-dev%40hadoop.apache.org/msg16099.html]
>  ML I explained a use case which requires me to obtain the value of the 
> Configuration description tags.
> [~cnauroth] advised me to raise the issue to Jira for discussion.
> I am happy to provide a patch so that the description values are parsed out 
> of the various XML files and stored, and also that the Configuration class is 
> augmented to provide accessors to accommodate the use case.
> I wanted to find out what people think about this one and whether I should 
> check out Hadoop source and submit a patch. If you guys could provide some 
> advice it would be appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559801#comment-14559801
 ] 

Ivan Mitic commented on HADOOP-12033:
-

bq. If the problem turns out to be in MR, please move this to the MapReduce 
JIRA project
Sounds good, Vinod. I placed it under Hadoop Common based on my best guess. 

> Reducer task failure with java.lang.NoClassDefFoundError: 
> Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
> ---
>
> Key: HADOOP-12033
> URL: https://issues.apache.org/jira/browse/HADOOP-12033
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Mitic
>
> We have noticed intermittent reducer task failures with the below exception:
> {code}
> Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
> shuffle in fetcher#9 at 
> org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:415) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
> java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
>  Method) at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
>  at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
>  at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
>  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
> Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
> {code}
> Usually, the reduce task succeeds on retry. 
> Some of the symptoms are similar to HADOOP-8423, but this fix is already 
> included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559791#comment-14559791
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-12033:
--

If the problem turns out to be in MR, please move this to the MapReduce JIRA 
project.

> Reducer task failure with java.lang.NoClassDefFoundError: 
> Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
> ---
>
> Key: HADOOP-12033
> URL: https://issues.apache.org/jira/browse/HADOOP-12033
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Mitic
>
> We have noticed intermittent reducer task failures with the below exception:
> {code}
> Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
> shuffle in fetcher#9 at 
> org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:415) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
> java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
>  Method) at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
>  at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
>  at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
>  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
> Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
> {code}
> Usually, the reduce task succeeds on retry. 
> Some of the symptoms are similar to HADOOP-8423, but this fix is already 
> included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559680#comment-14559680
 ] 

Colin Patrick McCabe edited comment on HADOOP-11937 at 5/26/15 8:17 PM:


You're right, I am behind the times.  It's nice that \-Pnative works on more 
platforms now.

If there is stuff included in [~cnauroth]'s "full build" that doesn't yet work 
on Mac, test-patch.sh can simply detect that we are running on a Mac and not 
add those compilation flags.  That way, we are not blocked here, but Mac users 
still can run test-patch.sh.


was (Author: cmccabe):
You're right, I am behind the times.  It's nice that \-Pnative works on more 
platforms now.

If there is still included in [~cnauroth]'s "full build" that doesn't yet work 
on Mac, test-patch.sh can simply detect that we are running on a Mac and not 
add those compilation flags.  That way, we are not blocked here, but Mac users 
still can run test-patch.sh.

> Guarantee a full build of all native code during pre-commit.
> 
>
> Key: HADOOP-11937
> URL: https://issues.apache.org/jira/browse/HADOOP-11937
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Chris Nauroth
>
> Some of the native components of the build are considered optional and either 
> will not build at all without passing special flags to Maven or will allow a 
> build to proceed if dependencies are missing from the build machine.  If 
> these components do not get built, then pre-commit isn't really providing 
> full coverage of the build.  This issue proposes to update test-patch.sh so 
> that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559765#comment-14559765
 ] 

Hudson commented on HADOOP-11969:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7905 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7905/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java


> ThreadLocal initialization in several classes is not thread safe
> 
>
> Key: HADOOP-11969
> URL: https://issues.apache.org/jira/browse/HADOOP-11969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>  Labels: thread-safety
> Fix For: 2.8.0
>
> Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
> HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch
>
>
> Right now, the initialization of the thread local factories for encoder / 
> decoder in Text are not marked final. This means they end up with a static 
> initializer that is not guaranteed to be finished running before the members 
> are visible. 
> Under heavy contention, this means during initialization some users will get 
> an NPE:
> {code}
> (2015-05-05 08:58:03.974 : solr_server_log.log) 
>  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
>   at org.apache.hadoop.io.Text.decode(Text.java:406)
>   at org.apache.hadoop.io.Text.decode(Text.java:389)
>   at org.apache.hadoop.io.Text.toString(Text.java:280)
>   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
>   at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1305)
>   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
> ... SNIP...
> {code}

[jira] [Commented] (HADOOP-12021) Augmenting Configuration to accomodate

2015-05-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559745#comment-14559745
 ] 

Andrew Wang commented on HADOOP-12021:
--

Lewis, could you give a little more detail of your Nutch usecase?

It's also worth noting that we provide the description in core-default.xml / 
hdfs-default.xml / etc. for documentation, but it is probably not present in 
user-provided config files. The -default.xml files are already included in 
our JARs, so this shouldn't increase dependency size. Loading them in will, 
however, increase in-memory size, which is probably a concern for some user apps.

> Augmenting Configuration to accomodate 
> 
>
> Key: HADOOP-12021
> URL: https://issues.apache.org/jira/browse/HADOOP-12021
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: conf
>Reporter: Lewis John McGibbney
>Priority: Minor
> Fix For: 1.3.0, 2.8.0
>
>
> Over on the 
> [common-dev|http://www.mail-archive.com/common-dev%40hadoop.apache.org/msg16099.html]
>  ML I explained a use case which requires me to obtain the value of the 
> Configuration description tags.
> [~cnauroth] advised me to raise the issue to Jira for discussion.
> I am happy to provide a patch so that the description values are parsed out 
> of the various XML files and stored, and also that the Configuration class is 
> augmented to provide accessors to accommodate the use case.
> I wanted to find out what people think about this one and whether I should 
> check out Hadoop source and submit a patch. If you guys could provide some 
> advice it would be appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559741#comment-14559741
 ] 

Hadoop QA commented on HADOOP-11984:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |  15m 20s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  6s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  8s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 43s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 44s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   1m 16s | Tests passed in 
hadoop-common. |
| | |  40m 16s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735382/HADOOP-11984.011.patch 
|
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 10732d5 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/console |


This message was automatically generated.

> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch, HADOOP-11984.010.patch, HADOOP-11984.011.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11807) add a lint mode to releasedocmaker

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559725#comment-14559725
 ] 

Hadoop QA commented on HADOOP-11807:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |  22m 28s | Tests passed in 
hadoop-common. |
| | |  56m 46s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735376/HADOOP-11807.004.patch 
|
| Optional Tests | javadoc javac unit |
| git revision | trunk / 500a1d9 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6828/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6828/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6828/console |


This message was automatically generated.

> add a lint mode to releasedocmaker
> --
>
> Key: HADOOP-11807
> URL: https://issues.apache.org/jira/browse/HADOOP-11807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-11807.001.patch, HADOOP-11807.002.patch, 
> HADOOP-11807.003.patch, HADOOP-11807.004.patch
>
>
> * check for missing components (error)
> * check for missing assignee (error)
> * check for common version problems (warning)
> * add an error message for missing release notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12001) Limiting LDAP search conflicts with posixGroup addition

2015-05-26 Thread Patrick White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559724#comment-14559724
 ] 

Patrick White commented on HADOOP-12001:


Those two patches on their own don't break anything, but the combination will 
break the posixGroups implementation as it stands.
My patch corrects that.

I'm new to patches in Hadoop; what additional work is required on the patch?

For testing, I've run it on a cluster and verified that I see the right thing 
from the 'groups' command when configured in posixGroups mode. I'm not 100% sure 
how to modify the unit tests to fit this case, but I haven't looked very deeply yet.
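
Schematically, the conflict and the fix look like this (the attribute names 
below are illustrative, not necessarily the exact ones from the patch):

{code}
import javax.naming.directory.SearchControls;

public class LdapSearchSetup {
  // Limiting returned attributes speeds up the query, but in posixGroups
  // mode the extra posix attributes must be requested too, or the posix
  // branch has nothing to read back from the result.
  static SearchControls buildControls(boolean isPosix, String groupNameAttr) {
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    if (isPosix) {
      // illustrative posix attribute names
      controls.setReturningAttributes(
          new String[] {groupNameAttr, "gidNumber", "memberUid"});
    } else {
      controls.setReturningAttributes(new String[] {groupNameAttr});
    }
    return controls;
  }
}
{code}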

> Limiting LDAP search conflicts with posixGroup addition
> ---
>
> Key: HADOOP-12001
> URL: https://issues.apache.org/jira/browse/HADOOP-12001
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0, 2.8.0
>Reporter: Patrick White
>Assignee: Patrick White
>Priority: Blocker
> Attachments: HADOOP-12001.patch
>
>
> In HADOOP-9477, posixGroup support was added
> In HADOOP-10626, a limit on the returned attributes was added to speed up 
> queries.
> Limiting the attributes can break the SEARCH_CONTROLS object in the context 
> of the isPosix block, since it only asks LDAP for the groupNameAttr



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11969:
--
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

> ThreadLocal initialization in several classes is not thread safe
> 
>
> Key: HADOOP-11969
> URL: https://issues.apache.org/jira/browse/HADOOP-11969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>  Labels: thread-safety
> Fix For: 2.8.0
>
> Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
> HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch
>
>
> Right now, the initialization of the thread local factories for encoder / 
> decoder in Text are not marked final. This means they end up with a static 
> initializer that is not guaranteed to be finished running before the members 
> are visible. 
> Under heavy contention, this means during initialization some users will get 
> an NPE:
> {code}
> (2015-05-05 08:58:03.974 : solr_server_log.log) 
>  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
>   at org.apache.hadoop.io.Text.decode(Text.java:406)
>   at org.apache.hadoop.io.Text.decode(Text.java:389)
>   at org.apache.hadoop.io.Text.toString(Text.java:280)
>   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
>   at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1305)
>   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
> ... SNIP...
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10798) globStatus() does not return sorted list of files

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559711#comment-14559711
 ] 

Colin Patrick McCabe commented on HADOOP-10798:
---

I don't feel strongly about this either way.  If you want to implement the 
no-sort option, though, please add sorting to the relevant parts of the shell 
and post a patch modifying the javadoc.
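
i.e., something along these lines wherever the shell displays glob results 
(a sketch only, not the actual FsShell change):

{code}
import java.util.Arrays;
import java.util.Comparator;
import org.apache.hadoop.fs.FileStatus;

public class SortGlobs {
  // Sort glob results by path name before display, so the shell keeps
  // its sorted output even if globStatus() itself stops promising order.
  public static FileStatus[] sortByName(FileStatus[] statuses) {
    FileStatus[] copy = Arrays.copyOf(statuses, statuses.length);
    Arrays.sort(copy, new Comparator<FileStatus>() {
      @Override
      public int compare(FileStatus a, FileStatus b) {
        return a.getPath().toString().compareTo(b.getPath().toString());
      }
    });
    return copy;
  }
}
{code}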

> globStatus() does not return sorted list of files
> -
>
> Key: HADOOP-10798
> URL: https://issues.apache.org/jira/browse/HADOOP-10798
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Felix Borchers
>Assignee: Colin Patrick McCabe
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10798.001.patch
>
>
> (FileSystem) globStatus() does not return a sorted file list anymore.
> But the API says: " ... Results are sorted by their names."
> Seems to be lost, when the Globber Object was introduced. Can't find a sort 
> in actual code.
> code to check this behavior:
> {code}
> Configuration conf = new Configuration();
> FileSystem fs = FileSystem.get(conf);
> Path path = new Path("/tmp/" + System.currentTimeMillis());
> fs.mkdirs(path);
> fs.deleteOnExit(path);
> fs.createNewFile(new Path(path, "2"));
> fs.createNewFile(new Path(path, "3"));
> fs.createNewFile(new Path(path, "1"));
> FileStatus[] status = fs.globStatus(new Path(path, "*"));
> Collection<String> list = new ArrayList<String>();
> for (FileStatus f: status) {
> list.add(f.getPath().toString());
> //System.out.println(f.getPath().toString());
> }
> boolean sorted = Ordering.natural().isOrdered(list);
> Assert.assertTrue(sorted);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12001) Limiting LDAP search conflicts with posixGroup addition

2015-05-26 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12001:
-
Priority: Blocker  (was: Major)
Target Version/s: 2.7.1
Assignee: Patrick White

Bumping this to be a blocker for 2.7.1, given Elliot Clark's comment on the 
mailing lists:
bq. HADOOP-12001 should probably be added to the blocker list since it's a
regression that can keep ldap from working.

Assigning this to Patrick White who put up a patch.

Given HADOOP-9477 was only in 2.8, the only breaking change in 2.7 is 
HADOOP-10626?

> Limiting LDAP search conflicts with posixGroup addition
> ---
>
> Key: HADOOP-12001
> URL: https://issues.apache.org/jira/browse/HADOOP-12001
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0, 2.8.0
>Reporter: Patrick White
>Assignee: Patrick White
>Priority: Blocker
> Attachments: HADOOP-12001.patch
>
>
> In HADOOP-9477, posixGroup support was added
> In HADOOP-10626, a limit on the returned attributes was added to speed up 
> queries.
> Limiting the attributes can break the SEARCH_CONTROLS object in the context 
> of the isPosix block, since it only asks LDAP for the groupNameAttr



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559680#comment-14559680
 ] 

Colin Patrick McCabe commented on HADOOP-11937:
---

You're right, I am behind the times.  It's nice that \-Pnative works on more 
platforms now.

If there is stuff included in [~cnauroth]'s "full build" that doesn't yet work 
on Mac, test-patch.sh can simply detect that we are running on a Mac and not 
add those compilation flags.  That way, we are not blocked here, but Mac users 
still can run test-patch.sh.

> Guarantee a full build of all native code during pre-commit.
> 
>
> Key: HADOOP-11937
> URL: https://issues.apache.org/jira/browse/HADOOP-11937
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Chris Nauroth
>
> Some of the native components of the build are considered optional and either 
> will not build at all without passing special flags to Maven or will allow a 
> build to proceed if dependencies are missing from the build machine.  If 
> these components do not get built, then pre-commit isn't really providing 
> full coverage of the build.  This issue proposes to update test-patch.sh so 
> that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11980) Make DataChecksum APIs public

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559665#comment-14559665
 ] 

Colin Patrick McCabe commented on HADOOP-11980:
---

I would like to move {{DataChecksum#writeHeader}}, {{DataChecksum#getHeader}}, 
{{DataChecksum#newDataChecksum}}, etc. into a private class for HDFS.  Perhaps 
it could be a subclass.  These functions govern the format of HDFS meta files, 
which we would like to keep private from public users.
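
Roughly the shape I have in mind (an illustrative sketch only, not the actual 
classes):

{code}
import org.apache.hadoop.classification.InterfaceAudience;

// Illustrative sketch: keep the generic checksum operations public, and
// move the HDFS meta-file header format behind a limited-private subclass
// so external users cannot depend on it.
@InterfaceAudience.Public
abstract class PublicDataChecksum {
  public abstract void update(byte[] b, int off, int len);
  public abstract long getValue();
}

@InterfaceAudience.LimitedPrivate({"HDFS"})
abstract class HdfsDataChecksum extends PublicDataChecksum {
  // writeHeader(), getHeader(), newDataChecksum(), ... would live here
}
{code}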

> Make DataChecksum APIs public
> -
>
> Key: HADOOP-11980
> URL: https://issues.apache.org/jira/browse/HADOOP-11980
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Apekshit Sharma
>Priority: Trivial
> Attachments: HADOOP-11980.patch
>
>
> HBASE-11927 adds functionality in hbase to use native hadoop library if 
> available, by using DataChecksum library.
> Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559664#comment-14559664
 ] 

Hadoop QA commented on HADOOP-11984:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/console in case of 
problems.

> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch, HADOOP-11984.010.patch, HADOOP-11984.011.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559658#comment-14559658
 ] 

Colin Patrick McCabe commented on HADOOP-11969:
---

Thanks for rebasing, [~busbey].  +1 once more, will commit shortly
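
For anyone following along, the fix is essentially to mark the holders 
{{static final}} so they are safely published; in minimal form (a sketch of 
the pattern, not the actual Text.java change):

{code}
import java.nio.ByteBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;

public class SafePublication {
  // Before: a non-final static ThreadLocal, whose static initialization
  // is not guaranteed to be visible to racing readers (per the report,
  // some threads observed null and threw NPE under heavy contention).
  //
  // After: 'static final' ensures the factory is fully constructed and
  // safely published before any thread can use it.
  private static final ThreadLocal<CharsetDecoder> DECODER_FACTORY =
      new ThreadLocal<CharsetDecoder>() {
        @Override
        protected CharsetDecoder initialValue() {
          return Charset.forName("UTF-8").newDecoder();
        }
      };

  public static String decode(byte[] bytes) throws Exception {
    return DECODER_FACTORY.get().decode(ByteBuffer.wrap(bytes)).toString();
  }
}
{code}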

> ThreadLocal initialization in several classes is not thread safe
> 
>
> Key: HADOOP-11969
> URL: https://issues.apache.org/jira/browse/HADOOP-11969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>  Labels: thread-safety
> Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
> HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch
>
>
> Right now, the initialization of the thread local factories for encoder / 
> decoder in Text are not marked final. This means they end up with a static 
> initializer that is not guaranteed to be finished running before the members 
> are visible. 
> Under heavy contention, this means during initialization some users will get 
> an NPE:
> {code}
> (2015-05-05 08:58:03.974 : solr_server_log.log) 
>  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
>   at org.apache.hadoop.io.Text.decode(Text.java:406)
>   at org.apache.hadoop.io.Text.decode(Text.java:389)
>   at org.apache.hadoop.io.Text.toString(Text.java:280)
>   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
>   at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
>   at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1305)
>   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
> ... SNIP...
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11975) Native code needs to be built to match the 32/64 bitness of the JVM

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559654#comment-14559654
 ] 

Colin Patrick McCabe commented on HADOOP-11975:
---

I agree that it's a vile hack, but so far it's the best we've got.  If you have 
a patch for JNIFlags.cmake to handle your case, I will review it.

> Native code needs to be built to match the 32/64 bitness of the JVM
> ---
>
> Key: HADOOP-11975
> URL: https://issues.apache.org/jira/browse/HADOOP-11975
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.7.0
> Environment: Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>
> When building with a 64-bit JVM on Solaris the following error occurs at the 
> link stage of building the native code:
>  [exec] ld: fatal: file 
> /usr/jdk/instances/jdk1.8.0/jre/lib/amd64/server/libjvm.so: wrong ELF class: 
> ELFCLASS64
>  [exec] collect2: error: ld returned 1 exit status
>  [exec] make[2]: *** [target/usr/local/lib/libhadoop.so.1.0.0] Error 1
>  [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2
> The compilation flags in the makefiles need to explicitly state if 32 or 64 
> bit code is to be generated, to match the JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11987) JNI build should use default cmake FindJNI.cmake

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559640#comment-14559640
 ] 

Colin Patrick McCabe commented on HADOOP-11987:
---

I agree that some of the logic in {{JNIFlags.cmake}} is irrelevant because 
{{FindJNI.cmake}} is loaded afterwards.  However {{JNIFlags.cmake}} is still 
needed to:
* Add the \-m32 flag to CFLAGS, LDFLAGS, and CPPFLAGS if the JVM is 32-bit and 
the architecture is x86
* Set CMAKE_SYSTEM_PROCESSOR to i686 if the JVM is 32-bit and the architecture 
is x86
* Add \-mfloat-abi=softfp if the architecture is ARM and it is needed

I would suggest:
* removing the calls to FIND_PACKAGE(JNI REQUIRED) since they may override 
variables that we don't want to override
* removing the duplicate copy of {{JNIFlags.cmake}} in {{hadoop-mapreduce}}.  
It should source the one in hadoop-common, just as HDFS does, rather than 
duplicating it.

If we want to make more improvements later we always can, but this will clear 
things up.

> JNI build should use default cmake FindJNI.cmake
> 
>
> Key: HADOOP-11987
> URL: https://issues.apache.org/jira/browse/HADOOP-11987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 2.7.0
> Environment: All
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>Priority: Minor
>
> From 
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201505.mbox/%3C55568DAC.1040303%40oracle.com%3E
> --
> Why does  hadoop-common-project/hadoop-common/src/CMakeLists.txt use 
> JNIFlags.cmake in the same directory to set things up for JNI 
> compilation rather than FindJNI.cmake, which comes as a standard cmake 
> module? The checks in JNIFlags.cmake make several assumptions that I 
> believe are only correct on Linux whereas I'd expect FindJNI.cmake to be 
> more platform-independent.
> --
> Just checked the repo of cmake and it turns out that FindJNI.cmake is
> available even before cmake 2.4. I think it makes sense to file a bug
> to replace it to the standard cmake module. Can you please file a jira
> for this?
> --
> This also applies to 
> hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/JNIFlags.cmake



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11984:
---
Attachment: HADOOP-11984.011.patch

Even when {{TestCredentials}} runs in isolation, the parent directory isn't 
there, so that rules out another concurrent test interfering.  This is very 
strange.  Is something going wrong with the mkdir of the test directories 
inside pom.xml?  I wouldn't expect so, because we'd see error output and an 
earlier failure in the build.

The fix might be just to change {{TestCredentials}} to use a recursive 
{{mkdirs}}, which is what other tests do.  I'm really curious about this 
though, so patch v011 is one more troubleshooting patch that echoes the 
directories that pom.xml tries to create.  Let's see if these are any different 
from what I see on my local machine.
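
For reference, a minimal sketch of the recursive-mkdirs fix mentioned above (directory names are illustrative):

{code}
import java.io.File;

// Illustrative sketch: mkdirs() creates any missing parent directories,
// unlike mkdir(), so the test no longer depends on the parent pre-existing.
final class TestDirSetup {
  static File ensureTestDir() {
    File testDir = new File(
        System.getProperty("test.build.data", "target/test-dir"), "creds");
    if (!testDir.mkdirs() && !testDir.isDirectory()) {
      throw new IllegalStateException("Could not create " + testDir);
    }
    return testDir;
  }
}
{code}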

> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch, HADOOP-11984.010.patch, HADOOP-11984.011.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-26 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559632#comment-14559632
 ] 

Uma Maheswara Rao G commented on HADOOP-12011:
--

Hi Kai, thanks for the patch.

Here are my comments on the patch.

The coding format for BytesUtil is wrong in the patch.
{code}
protected boolean allowDump = true;
{code}
Do you want to take this as a system property or from the config? Otherwise 
this flag may be unnecessary.
checkDumpSetting --> dumpSettings? As this only prints, you could add javadoc 
saying that if allowDump is false, nothing is dumped because dumping is 
disabled.
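
For example, a hedged sketch of taking the switch from configuration (the key name is hypothetical, not an existing Hadoop property):

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: take the dump switch from configuration instead of
// hard-coding it, so it is off by default and can be enabled for debugging.
public final class CoderDumpConfig {
  // hypothetical key, not an existing Hadoop property
  public static final String DUMP_ENABLED_KEY = "io.erasurecode.dump.enabled";

  public static boolean isDumpAllowed(Configuration conf) {
    return conf.getBoolean(DUMP_ENABLED_KEY, false);
  }
}
{code}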



> Allow to dump verbose information to ease debugging in raw erasure coders
> -
>
> Key: HADOOP-12011
> URL: https://issues.apache.org/jira/browse/HADOOP-12011
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12011-HDFS-7285-v1.patch
>
>
> While working on native erasure coders, it was found useful to dump key 
> information like encode/decode matrix, erasures and etc. for the 
> encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-26 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12035:
---

 Summary: shellcheck plugin displays a wrong version potentially
 Key: HADOOP-12035
 URL: https://issues.apache.org/jira/browse/HADOOP-12035
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Kengo Seki
Priority: Trivial


In dev-support/test-patch.d/shellcheck.sh:

{code}
SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print $NF}')
{code}

it should be 

{code}
SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11997) CMake CMAKE_C_FLAGS are non-portable

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559619#comment-14559619
 ] 

Colin Patrick McCabe commented on HADOOP-11997:
---

I would rather set the flags explicitly than rely on {{CMAKE_BUILD_TYPE}}.  
It's clearer and less dependent on CMake version.

Are you going to post a patch to add Solaris compiler support, as Allen 
suggested?  Or add more \-W options and fix the resulting warnings?  Or should 
we close this JIRA and take up the discussion elsewhere?  It seems that if you 
are using gcc on Solaris, the flags don't need to be modified.

> CMake CMAKE_C_FLAGS are non-portable
> 
>
> Key: HADOOP-11997
> URL: https://issues.apache.org/jira/browse/HADOOP-11997
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.7.0
> Environment: All
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>Priority: Critical
>
> hadoop-common-project/hadoop-common/src/CMakeLists.txt 
> (https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L110)
>  contains the following unconditional assignments to CMAKE_C_FLAGS:
> set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -Wall -O2")
> set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_REENTRANT -D_GNU_SOURCE")
> set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_LARGEFILE_SOURCE 
> -D_FILE_OFFSET_BITS=64")
> There are several issues here:
> 1. "-D_GNU_SOURCE" globally enables the use of all Linux-only extensions in 
> hadoop-common native source. This is probably a major contributor to the poor 
> cross-platform portability of Hadoop native code to non-Linux platforms as it 
> makes it easy for developers to use non-portable Linux features without 
> realising. Use of Linux-specific features should be correctly bracketed with 
> conditional macro blocks that provide an alternative for non-Linux platforms.
> 2. "-g -Wall -O2" turns on debugging for all builds, I believe the correct 
> mechanism is to set the CMAKE_BUILD_TYPE CMake variable. If it is still 
> necessary to override CFLAGS it should probably be done conditionally 
> dependent on the value of CMAKE_BUILD_TYPE.
> 3. "-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" On Solaris these flags are 
> only needed for largefile support in ILP32 applications, LP64 applications 
> are largefile by default. I believe the same is true on Linux, so these flags 
> are harmless but redundant for 64-bit compilation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11807) add a lint mode to releasedocmaker

2015-05-26 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-11807:

Attachment: HADOOP-11807.004.patch

Thank you [~sekikn] for reviewing the patch. I attached a new patch addressing 
your comment.

> add a lint mode to releasedocmaker
> --
>
> Key: HADOOP-11807
> URL: https://issues.apache.org/jira/browse/HADOOP-11807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-11807.001.patch, HADOOP-11807.002.patch, 
> HADOOP-11807.003.patch, HADOOP-11807.004.patch
>
>
> * check for missing components (error)
> * check for missing assignee (error)
> * check for common version problems (warning)
> * add an error message for missing release notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559608#comment-14559608
 ] 

Ivan Mitic commented on HADOOP-12033:
-

If I had to guess (and I can only guess at this time :)) I'd say this is 
something similar to the root cause of HADOOP-8423, where, in the case of a 
transient error (e.g. a networking error), some state gets out of sync and 
results in a task failure.

> Reducer task failure with java.lang.NoClassDefFoundError: 
> Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
> ---
>
> Key: HADOOP-12033
> URL: https://issues.apache.org/jira/browse/HADOOP-12033
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Mitic
>
> We have noticed intermittent reducer task failures with the below exception:
> {code}
> Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
> shuffle in fetcher#9 at 
> org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:415) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
> java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
>  Method) at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
>  at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
>  at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
>  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
> Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
> {code}
> Usually, the reduce task succeeds on retry. 
> Some of the symptoms are similar to HADOOP-8423, but this fix is already 
> included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559596#comment-14559596
 ] 

zhihai xu commented on HADOOP-12033:


Is it possible that some early failure, such as a ClassNotFoundException, an 
ExceptionInInitializerError (indicating a failure in the static initialization 
block), or an incompatible version of the class found at runtime, caused this 
exception?

> Reducer task failure with java.lang.NoClassDefFoundError: 
> Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
> ---
>
> Key: HADOOP-12033
> URL: https://issues.apache.org/jira/browse/HADOOP-12033
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Mitic
>
> We have noticed intermittent reducer task failures with the below exception:
> {code}
> Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
> shuffle in fetcher#9 at 
> org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:415) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
> java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
>  Method) at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
>  at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
>  at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
>  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
> Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
> {code}
> Usually, the reduce task succeeds on retry. 
> Some of the symptoms are similar to HADOOP-8423, but this fix is already 
> included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11985) Improve Solaris support in Hadoop

2015-05-26 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559591#comment-14559591
 ] 

Alan Burlison commented on HADOOP-11985:


Solaris-related changes to YARN and HDFS are covered under the two top-level 
issues:

YARN-3719 Improve Solaris support in YARN
HDFS-8478 Improve Solaris support in HDFS

> Improve Solaris support in Hadoop
> -
>
> Key: HADOOP-11985
> URL: https://issues.apache.org/jira/browse/HADOOP-11985
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build, conf
>Affects Versions: 2.7.0
> Environment: Solaris x86, Solaris sparc
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>  Labels: solaris
>
> At present the Hadoop native components aren't fully supported on Solaris 
> primarily due to differences between Linux and Solaris. This top-level task 
> will be used to group together both existing and new issues related to this 
> work. A second goal is to improve Hadoop performance on Solaris wherever 
> possible.
> Steve Loughran suggested a top-level JIRA was the best way to manage the work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559583#comment-14559583
 ] 

Hadoop QA commented on HADOOP-11984:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |  14m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  9s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   1m 23s | Tests passed in 
hadoop-common. |
| | |  38m 38s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735365/HADOOP-11984.010.patch 
|
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 022f49d |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/console |


This message was automatically generated.

> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch, HADOOP-11984.010.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12034) Wrong comment for the filefilter function in test-patch checkstyle plugin

2015-05-26 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12034:
---

 Summary: Wrong comment for the filefilter function in test-patch 
checkstyle plugin
 Key: HADOOP-12034
 URL: https://issues.apache.org/jira/browse/HADOOP-12034
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Kengo Seki
Priority: Minor


This comment is attached to the checkstyle_filefilter function, but it is 
actually a comment for shellcheck_filefilter.

{code}
# if it ends in an explicit .sh, then this is shell code.
# if it doesn't have an extension, we assume it is shell code too
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11347) Inconsistent enforcement of umask between FileSystem and FileContext interacting with local file system.

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559566#comment-14559566
 ] 

Colin Patrick McCabe commented on HADOOP-11347:
---

Thanks for looking at this, Varun.  I don't think we need to change the 
FileSystem base class.  This JIRA is about the local file system: that's the 
FS that is having trouble with this, and that's the one that should change.
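
For comparison, a minimal sketch of the umask-aware path (local FS shown; the exact call sites are an assumption):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch: derive the effective permission from fs.permissions.umask-mode,
// mirroring what FileContext already does for mkdir.
final class UmaskedMkdir {
  static void mkdirWithUmask(Configuration conf, Path dir) throws IOException {
    FileSystem localFs = FileSystem.getLocal(conf);
    FsPermission umask = FsPermission.getUMask(conf);
    FsPermission effective = new FsPermission((short) 0777).applyUMask(umask);
    localFs.mkdirs(dir, effective);
  }
}
{code}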

> Inconsistent enforcement of umask between FileSystem and FileContext 
> interacting with local file system.
> 
>
> Key: HADOOP-11347
> URL: https://issues.apache.org/jira/browse/HADOOP-11347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Chris Nauroth
>Assignee: Varun Saxena
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-11347.001.patch, HADOOP-11347.002.patch, 
> HADOOP-11347.03.patch
>
>
> The {{FileSystem}} and {{FileContext}} APIs are inconsistent in enforcement 
> of umask for newly created directories.  {{FileContext}} utilizes 
> configuration property {{fs.permissions.umask-mode}} and runs a separate 
> {{chmod}} call to guarantee bypassing the process umask.  This is the 
> expected behavior for Hadoop as discussed in the documentation of 
> {{fs.permissions.umask-mode}}.  For the equivalent {{FileSystem}} APIs, it 
> does not use {{fs.permissions.umask-mode}}.  Instead, the permissions end up 
> getting controlled by the process umask.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11982) Inconsistency in handling URI without authority

2015-05-26 Thread Kannan Rajah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559562#comment-14559562
 ] 

Kannan Rajah commented on HADOOP-11982:
---

Does anyone have a comment on this issue? Is it OK to create a patch that 
defaults to empty authority?
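
To make the inconsistency concrete, a small sketch (class and method names are illustrative):

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the reported behavior: the same logical path renders two ways,
// depending on whether makeQualified() replaced the null authority.
final class AuthorityDemo {
  static void showInconsistency(FileSystem fs) {
    Path p = new Path("hdfs:/a/b/c");
    System.out.println(p);              // hdfs:/a/b/c   (null authority)
    Path q = fs.makeQualified(p);
    System.out.println(q);              // hdfs:///a/b/c (empty authority)
    // p.toString() and q.toString() differ, so hashmap lookups keyed on the
    // string form can miss.
  }
}
{code}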

> Inconsistency in handling URI without authority
> ---
>
> Key: HADOOP-11982
> URL: https://issues.apache.org/jira/browse/HADOOP-11982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Kannan Rajah
>Assignee: Kannan Rajah
>
> There are some inconsistencies coming from the Hadoop class Path.java. This 
> seems to have been the behavior for a very long time. I am not sure about 
> the implications of correcting it, so I want to get some opinions.
> When you use makeQualified, a NULL authority is converted into an empty 
> authority. When the authority is NULL, toString will not contain the // 
> before the actual absolute path; otherwise it will. There are ecosystem 
> components that may or may not use makeQualified consistently. We have hit 
> cases where Path.toString() is used as a key in a hashmap, so lookups start 
> failing when the entry's Path object was constructed using makeQualified and 
> the lookup key's was not.
> Proposal: Can we always default to an empty authority when it's NULL?
> -
> Examples
> ---
> Path p = new Path("hdfs:/a/b/c")
> p.toString() -> hdfs:/a/b/c  -> There is a single slash
> p.makeQualified(fs);
> p.toString() -> hdfs:///a/b/c -> There are 3 slashes
> -



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11924) Tolerate JDK-8047340-related exceptions in Shell#isSetSidAvailable preventing class init

2015-05-26 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559560#comment-14559560
 ] 

Gera Shegalov commented on HADOOP-11924:


[~ozawa], are you going to work on 002? I think at the very least we should 
change the log level when swallowing the exception. The exception itself should 
also be included in the LOG statement:
{code}
 LOG.info("Avoiding JDK-8047340 on BSD-based systems.", t);
{code}
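
Putting the pieces from the description together, a hedged sketch of the tolerant probe (the exact placement inside Shell's setsid check is an assumption):

{code}
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.util.Shell;
import org.apache.hadoop.util.Shell.ShellCommandExecutor;

// Hedged sketch: tolerate non-IOException failures (e.g. JDK-8047340) on
// BSD-like systems, log them with the throwable attached, and stay pedantic
// elsewhere.
final class SetsidProbe {
  private static final Log LOG = LogFactory.getLog(SetsidProbe.class);

  static boolean isSetsidAvailable() {
    try {
      ShellCommandExecutor shexec = new ShellCommandExecutor(
          new String[] { "setsid", "bash", "-c", "echo $$" });
      shexec.execute();
      return true;
    } catch (Throwable t) {
      if (!(t instanceof IOException) && !(Shell.FREEBSD || Shell.MAC)) {
        throw new RuntimeException(t);  // rethrow off BSD-like systems
      }
      LOG.info("Avoiding JDK-8047340 on BSD-based systems.", t);
      return false;
    }
  }
}
{code}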

> Tolerate JDK-8047340-related exceptions in Shell#isSetSidAvailable preventing 
> class init
> 
>
> Key: HADOOP-11924
> URL: https://issues.apache.org/jira/browse/HADOOP-11924
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Gera Shegalov
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11924.001.patch
>
>
> Address the root cause of HADOOP-11916 per 
> https://issues.apache.org/jira/browse/HADOOP-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528009#comment-14528009
> {quote}
> JDK-8047340 explicitly calls out BSD-like systems, should not we just exclude 
> those systems instead of enabling solely Linux?
> {code}
> Assume.assumeFalse("Avoiding JDK-8047340 on BSD-based systems", Shell.FREEBSD 
> || Shell.MAC);
> {code}
> However, I don't think this is the right fix. Shell on BSD-like systems is 
> broken with the TR locale. Shell class initialization happens only because 
> StringUtils references Shell.WINDOWS.
> We can simply catch Throwable in Shell#isSetsidSupported instead of 
> IOException. If we want to be pedantic we can rethrow
> {code}
> if (!(t instanceof IOException) && !(Shell.FREEBSD || Shell.MAC))
> {code}
> With such a change the test can run unchanged.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559543#comment-14559543
 ] 

Ivan Mitic commented on HADOOP-12033:
-

Thanks for responding [~zxu]. The reducer task would succeed on retry, so I 
assumed it's not an environment problem. Below is the task syslog:
{noformat}
2015-05-21 18:33:10,773 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from 
hadoop-metrics2.properties
2015-05-21 18:33:10,976 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 
60 second(s).
2015-05-21 18:33:10,976 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system 
started
2015-05-21 18:33:10,991 INFO [main] org.apache.hadoop.mapred.YarnChild: 
Executing with tokens:
2015-05-21 18:33:10,991 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: 
mapreduce.job, Service: job_1432143397187_0004, Ident: 
(org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@5df3ade7)
2015-05-21 18:33:11,132 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: 
RM_DELEGATION_TOKEN, Service: 100.76.156.98:9010, Ident: (owner=btbig2, 
renewer=mr token, realUser=hdp, issueDate=1432225097662, maxDate=1432829897662, 
sequenceNumber=2, masterKeyId=2)
2015-05-21 18:33:11,351 INFO [main] org.apache.hadoop.mapred.YarnChild: 
Sleeping for 0ms before retrying again. Got null now.
2015-05-21 18:33:12,335 INFO [main] org.apache.hadoop.mapred.YarnChild: 
Sleeping for 500ms before retrying again. Got null now.
2015-05-21 18:33:13,804 INFO [main] org.apache.hadoop.mapred.YarnChild: 
Sleeping for 1000ms before retrying again. Got null now.
2015-05-21 18:33:16,308 INFO [main] org.apache.hadoop.mapred.YarnChild: 
mapreduce.cluster.local.dir for child: 
c:/apps/temp/hdfs/nm-local-dir/usercache/btbig2/appcache/application_1432143397187_0004
2015-05-21 18:33:17,199 INFO [main] 
org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. 
Instead, use dfs.metrics.session-id
2015-05-21 18:33:17,402 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from 
hadoop-metrics2-azure-file-system.properties
2015-05-21 18:33:17,418 INFO [main] 
org.apache.hadoop.metrics2.sink.WindowsAzureETWSink: Init starting.
2015-05-21 18:33:17,418 INFO [main] 
org.apache.hadoop.metrics2.sink.WindowsAzureETWSink: Successfully loaded native 
library. LibraryName = EtwLogger
2015-05-21 18:33:17,418 INFO [main] 
org.apache.hadoop.metrics2.sink.WindowsAzureETWSink: Init completed. Native 
library loaded and ETW handle obtained.
2015-05-21 18:33:17,418 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink azurefs2 started
2015-05-21 18:33:17,433 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 
60 second(s).
2015-05-21 18:33:17,433 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: azure-file-system metrics 
system started
2015-05-21 18:33:17,699 INFO [main] 
org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: ProcfsBasedProcessTree 
currently is supported only on Linux.
2015-05-21 18:33:17,714 INFO [main] org.apache.hadoop.mapred.Task:  Using 
ResourceCalculatorProcessTree : 
org.apache.hadoop.yarn.util.WindowsBasedProcessTree@36c76ec3
2015-05-21 18:33:17,746 INFO [main] org.apache.hadoop.mapred.ReduceTask: Using 
ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@5c7b1796
2015-05-21 18:33:17,793 INFO [main] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: MergerManager: 
memoryLimit=741710208, maxSingleShuffleLimit=185427552, 
mergeThreshold=489528768, ioSortFactor=100, memToMemMergeOutputsThreshold=100
2015-05-21 18:33:17,793 INFO [EventFetcher for fetching Map Completion Events] 
org.apache.hadoop.mapreduce.task.reduce.EventFetcher: 
attempt_1432143397187_0004_r_001735_0 Thread started: EventFetcher for fetching 
Map Completion Events
2015-05-21 18:33:19,187 INFO [fetcher#30] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: Assigning 
workernode165.btbig2.c2.internal.cloudapp.net:13562 with 1 to fetcher#30
2015-05-21 18:33:19,187 INFO [fetcher#30] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: assigned 1 of 1 
to workernode165.btbig2.c2.internal.cloudapp.net:13562 to fetcher#30
2015-05-21 18:33:19,187 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: Assigning 
workernode279.btbig2.c2.internal.cloudapp.net:13562 with 1 to fetcher#1
2015-05-21 18:33:19,187 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: assigned 1 of 1 
to workernode279.btbig2.c2.internal.cloudapp.net:13562 to fetcher#1
(fetch logs removed)
2015-05-21 19:25:08,983 INFO [fetcher#9] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: Assigning 
workernode133.btbig2.c2.internal.cloudapp.net:13562 with 88 to fetcher#9
2015-05-21 19:25:08,983 INFO

[jira] [Commented] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559524#comment-14559524
 ] 

Allen Wittenauer commented on HADOOP-11937:
---

bq. test-patch already fails on OS X

You're behind the times. -Pnative has worked on OS X for almost a year now.  
(Also: it's probably worth pointing out that I rewrote test-patch.sh, including 
the Jenkins mode, on OS X.)

> Guarantee a full build of all native code during pre-commit.
> 
>
> Key: HADOOP-11937
> URL: https://issues.apache.org/jira/browse/HADOOP-11937
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Chris Nauroth
>
> Some of the native components of the build are considered optional and either 
> will not build at all without passing special flags to Maven or will allow a 
> build to proceed if dependencies are missing from the build machine.  If 
> these components do not get built, then pre-commit isn't really providing 
> full coverage of the build.  This issue proposes to update test-patch.sh so 
> that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559525#comment-14559525
 ] 

zhihai xu commented on HADOOP-12033:


This looks like the Hadoop native library was not loaded successfully.
Did you see this warning message?
  LOG.warn("Unable to load native-hadoop library for your platform... " +
      "using builtin-java classes where applicable");
You need to configure LD_LIBRARY_PATH correctly in your environment.
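
As a quick sanity check, one can probe the loader directly (a minimal sketch):

{code}
import org.apache.hadoop.util.NativeCodeLoader;

// Sketch: confirm the native hadoop library actually loaded before relying
// on native Snappy decompression.
if (!NativeCodeLoader.isNativeCodeLoaded()) {
  System.err.println("native-hadoop not loaded; check LD_LIBRARY_PATH "
      + "or -Djava.library.path");
}
{code}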


> Reducer task failure with java.lang.NoClassDefFoundError: 
> Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
> ---
>
> Key: HADOOP-12033
> URL: https://issues.apache.org/jira/browse/HADOOP-12033
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Mitic
>
> We have noticed intermittent reducer task failures with the below exception:
> {code}
> Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
> shuffle in fetcher#9 at 
> org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:415) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
> java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
>  Method) at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
>  at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
>  at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
>  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
> Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
> {code}
> Usually, the reduce task succeeds on retry. 
> Some of the symptoms are similar to HADOOP-8423, but this fix is already 
> included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559499#comment-14559499
 ] 

Colin Patrick McCabe commented on HADOOP-11937:
---

test-patch already fails on OS X.  That's why we added a workaround that allows 
you to disable the native parts of the build in order to get a test-patch build 
on that platform.

> Guarantee a full build of all native code during pre-commit.
> 
>
> Key: HADOOP-11937
> URL: https://issues.apache.org/jira/browse/HADOOP-11937
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Chris Nauroth
>
> Some of the native components of the build are considered optional and either 
> will not build at all without passing special flags to Maven or will allow a 
> build to proceed if dependencies are missing from the build machine.  If 
> these components do not get built, then pre-commit isn't really providing 
> full coverage of the build.  This issue proposes to update test-patch.sh so 
> that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559497#comment-14559497
 ] 

Hadoop QA commented on HADOOP-11984:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/console in case of 
problems.

> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch, HADOOP-11984.010.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11984:
---
Attachment: HADOOP-11984.010.patch

Patch v010 fixes a problem in the last experiment that I was trying.

> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch, HADOOP-11984.010.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12001) Limiting LDAP search conflicts with posixGroup addition

2015-05-26 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12001:
---
Affects Version/s: 2.7.0

> Limiting LDAP search conflicts with posixGroup addition
> ---
>
> Key: HADOOP-12001
> URL: https://issues.apache.org/jira/browse/HADOOP-12001
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0, 2.8.0
>Reporter: Patrick White
> Attachments: HADOOP-12001.patch
>
>
> In HADOOP-9477, posixGroup support was added
> In HADOOP-10626, a limit on the returned attributes was added to speed up 
> queries.
> Limiting the attributes can break the SEARCH_CONTROLS object in the context 
> of the isPosix block, since it only asks LDAP for the groupNameAttr



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559474#comment-14559474
 ] 

Hadoop QA commented on HADOOP-11984:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |  14m 35s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  6s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  8s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   8m  9s | Tests passed in 
hadoop-common. |
| | |  45m 22s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735351/HADOOP-11984.009.patch 
|
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 022f49d |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6826/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6826/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6826/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6826/console |


This message was automatically generated.

> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-12033:
---

 Summary: Reducer task failure with java.lang.NoClassDefFoundError: 
Ljava/lang/InternalError at 
org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
 Key: HADOOP-12033
 URL: https://issues.apache.org/jira/browse/HADOOP-12033
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic


We have noticed intermittent reducer task failures with the below exception:

{code}
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
shuffle in fetcher#9 at 
org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:415) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
 Method) at 
org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
 at 
org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
 at 
org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
 at 
org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534) 
at 
org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329) 
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) Caused 
by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
java.security.AccessController.doPrivileged(Native Method) at 
java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
{code}

Usually, the reduce task succeeds on retry. 

Some of the symptoms are similar to HADOOP-8423, but this fix is already 
included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559389#comment-14559389
 ] 

Hadoop QA commented on HADOOP-11984:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6826/console in case of 
problems.

> Enable parallel JUnit tests in pre-commit.
> --
>
> Key: HADOOP-11984
> URL: https://issues.apache.org/jira/browse/HADOOP-11984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
> HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
> HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
> HADOOP-11984.009.patch
>
>
> HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
> for running JUnit tests in multiple concurrent processes.  This issue 
> proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11985) Improve Solaris support in Hadoop

2015-05-26 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559382#comment-14559382
 ] 

Alan Burlison commented on HADOOP-11985:


Another issue is YARN's use of Linux Cgroups for resource management, which is 
non-portable. See:
YARN-3718 hadoop-yarn-server-nodemanager's use of Linux Cgroups is non-portable

> Improve Solaris support in Hadoop
> -
>
> Key: HADOOP-11985
> URL: https://issues.apache.org/jira/browse/HADOOP-11985
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build, conf
>Affects Versions: 2.7.0
> Environment: Solaris x86, Solaris sparc
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>  Labels: solaris
>
> At present the Hadoop native components aren't fully supported on Solaris 
> primarily due to differences between Linux and Solaris. This top-level task 
> will be used to group together both existing and new issues related to this 
> work. A second goal is to improve Hadoop performance on Solaris wherever 
> possible.
> Steve Loughran suggested a top-level JIRA was the best way to manage the work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

