[jira] [Commented] (HADOOP-12839) hadoop-minikdc Missing artifact org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2

2016-02-26 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170427#comment-15170427
 ] 

Xiao Chen commented on HADOOP-12839:


Thanks for reporting this [~liguirong98].
I've seen the same failure locally on OS X within IntelliJ when trying to run 
unit tests that depend on hadoop-minikdc. The workaround is to:
- go to {{hadoop-minikdc}}'s Dependencies tab
- edit the dependency {{Maven: 
org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2}} and add 
{{DIR_TO_M2/.m2/repository/org/apache/directory/jdbm/apacheds-jdbm1/2.0.0-M2/apacheds-jdbm1-2.0.0-M2.jar}}
 to 'Classes' (initially there is only an entry with the {{bundle}} suffix in there).

If I run those same tests from the command line, though, I don't see the 
problem. And considering Jenkins never complained about it, I don't think we 
need to change any pom.
Also see https://issues.apache.org/jira/browse/DIRSHARED-134 for a similar 
discussion.
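For context, this is roughly what the problematic dependency's coordinates 
({{group:artifact:type:version}}) translate to in a pom; a sketch for 
illustration only, not taken from any Hadoop pom. The {{bundle}} packaging type 
appears to be what IntelliJ fails to map back to the plain jar:
{code}
<dependency>
  <groupId>org.apache.directory.jdbm</groupId>
  <artifactId>apacheds-jdbm1</artifactId>
  <version>2.0.0-M2</version>
  <type>bundle</type>
</dependency>
{code}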

> hadoop-minikdc  Missing artifact 
> org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2
> -
>
> Key: HADOOP-12839
> URL: https://issues.apache.org/jira/browse/HADOOP-12839
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: liguirong
>






[jira] [Commented] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170392#comment-15170392
 ] 

Hudson commented on HADOOP-12849:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9381 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9381/])
HADOOP-12849. TestSymlinkLocalFSFileSystem fails intermittently. (cnauroth: rev 
798babf661311264fa895c89a33d0f2da927da33)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/SymlinkBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12849.000.patch
>
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Commented] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170391#comment-15170391
 ] 

Mingliang Liu commented on HADOOP-12849:


Thank you [~cnauroth] very much for diagnosing the root cause, and for 
reviewing and committing the patch.

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12849.000.patch
>
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Commented] (HADOOP-12813) Migrate TestRPC and related codes to rebase on ProtobufRpcEngine

2016-02-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170389#comment-15170389
 ] 

Haohui Mai commented on HADOOP-12813:
-

+1. It looks good to me. I'll commit it on Monday if there are no more comments.

> Migrate TestRPC and related codes to rebase on ProtobufRpcEngine
> 
>
> Key: HADOOP-12813
> URL: https://issues.apache.org/jira/browse/HADOOP-12813
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12813-v1.patch
>
>
> Sub-task of HADOOP-12579. To prepare for getting rid of the obsolete 
> WritableRpcEngine, this will change the TestRPC test and related code to use 
> ProtobufRpcEngine instead.
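For readers unfamiliar with the two engines, a minimal sketch of what the 
migration amounts to; the protocol class name is illustrative, not taken from 
the patch:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;

// Route RPC calls for the test protocol through ProtobufRpcEngine instead of
// the obsolete WritableRpcEngine; servers and client proxies built with this
// conf then use protobuf serialization.
Configuration conf = new Configuration();
RPC.setProtocolEngine(conf, TestRpcService.class, ProtobufRpcEngine.class);
{code}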





[jira] [Updated] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12849:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I have committed this to trunk, branch-2 and branch-2.8.  
[~liuml07], thank you for contributing this fix.

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12849.000.patch
>
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Resolved] (HADOOP-12848) Configuration.ParsedTimeDuration.unitFor(String) should do more careful parsing

2016-02-26 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton resolved HADOOP-12848.
---
Resolution: Invalid

I am greatly saddened to report that I was too entertained by the comedic 
possibilities of the issue to see that it's not actually an issue.  The 
implementation does work, just not in the most intuitive way.  Sorry, no 
cows or pigs. :(
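For context, a cut-down sketch of the suffix matching that makes this work, if 
unintuitively; the enum below is a stand-in for 
{{Configuration.ParsedTimeDuration}}, not the actual source:
{code}
import java.util.concurrent.TimeUnit;

// "2 pigs" ends with "s" and matches SECONDS; "3 hams" ends with "ms" and
// matches MILLISECONDS (MS is declared, and thus checked, before S).
enum ParsedTimeDuration {
  MS("ms", TimeUnit.MILLISECONDS),
  S("s", TimeUnit.SECONDS),
  M("m", TimeUnit.MINUTES);

  private final String suffix;
  private final TimeUnit unit;

  ParsedTimeDuration(String suffix, TimeUnit unit) {
    this.suffix = suffix;
    this.unit = unit;
  }

  static TimeUnit unitFor(String value) {
    for (ParsedTimeDuration ptd : values()) {
      if (value.endsWith(ptd.suffix)) {  // any trailing match wins
        return ptd.unit;
      }
    }
    return null;
  }
}
{code}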

> Configuration.ParsedTimeDuration.unitFor(String) should do more careful 
> parsing
> ---
>
> Key: HADOOP-12848
> URL: https://issues.apache.org/jira/browse/HADOOP-12848
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> The unit parsing code is very loosey-goosey.  For example, "2 pigs" will 
> parse as 2 seconds.  "3 hams" will parse as 3 milliseconds.  We might want to 
> tighten that up a bit.





[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-02-26 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170299#comment-15170299
 ] 

Wei-Chiu Chuang commented on HADOOP-12847:
--

And some test cases as well.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch
>
>
> {{hadoop daemonlog}} is a simple yet useful tool for debugging.
> However, it supports neither https nor a Kerberized Hadoop cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized NameNode web UI. It will also fall back to simple 
> authentication if the cluster is not Kerberized.
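For illustration, a minimal sketch of the {{AuthenticatedURL}} usage the 
description refers to; the host, port, and logger name are placeholders, not 
values from the patch:
{code}
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

// SPNEGO-capable GET against a daemon's /logLevel servlet; falls back to
// simple auth when the cluster is not Kerberized.
URL url = new URL("https://namenode.example.com:50470/logLevel?log=org.example.Foo&level=DEBUG");
AuthenticatedURL.Token token = new AuthenticatedURL.Token();
HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
System.out.println(conn.getResponseCode());
{code}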





[jira] [Commented] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170262#comment-15170262
 ] 

Hadoop QA commented on HADOOP-12849:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 57s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 26s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790255/HADOOP-12849.000.patch
 |
| JIRA Issue | HADOOP-12849 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d4202f80d505 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d1d4e16 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12848) Configuration.ParsedTimeDuration.unitFor(String) should do more careful parsing

2016-02-26 Thread Hari Shreedharan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170256#comment-15170256
 ] 

Hari Shreedharan commented on HADOOP-12848:
---

Hence proved, cows == pigs

> Configuration.ParsedTimeDuration.unitFor(String) should do more careful 
> parsing
> ---
>
> Key: HADOOP-12848
> URL: https://issues.apache.org/jira/browse/HADOOP-12848
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> The unit parsing code is very loosey-goosey.  For example, "2 pigs" will 
> parse as 2 seconds.  "3 hams" will parse as 3 milliseconds.  We might want to 
> tighten that up a bit.





[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170210#comment-15170210
 ] 

Chris Nauroth commented on HADOOP-12666:


If people would find it easier for line-by-line commentary, we could import the 
patch into ReviewBoard (reviews.apache.org) or a GitHub pull request.  
Reviewers, please comment if you'd like to go that way.

[~vishwajeet.dusane], thank you for updating the patch in response to feedback. 
 Here are a few more comments on patch v006.
# pom.xml should use hadoop-project as its parent.  See 
hadoop-tools/hadoop-azure/pom.xml for an example.
# artifactId should be hadoop-azure-datalake.  The resulting built jar will 
then be hadoop-azure-datalake-<version>.jar.  This naming will be consistent 
with the other Hadoop build artifacts.
# Please indent pom.xml by 2 spaces.
# Thank you for the documentation!
# Would you also please update hadoop-project/src/site/site.xml, so that we get 
a hyperlink in the left nav?  Look for the "Hadoop Compatible File Systems" 
section, and you can add a link similar to the one for WASB.
# Typo: "File relication factor"
# Typo: Azure Data Lake Storage is access path syntax is
# Typo: expermental
# There are several links to this URL: 
https://azure.microsoft.com/en-in/services/data-lake-store/ .  Perhaps it's 
just my opinion, but the content there strays a little too much towards 
[marketecture|https://en.wikipedia.org/wiki/Marchitecture].  The readers of our 
documentation tend to be a technical audience that is already using Hadoop, so 
they want to find usage information and technical details as quickly as 
possible.  What do you think about linking to this other page instead, which 
seems to be the "developer hub"? 
https://azure.microsoft.com/en-in/documentation/services/data-lake-store/
# I see widespread mishandling of {{InterruptedException}} throughout the 
patch.  The classic article on how to handle {{InterruptedException}} is here: 
http://www.ibm.com/developerworks/library/j-jtp05236/ .  The bottom line is 
that it shouldn't be swallowed or rethrown as a different exception type unless 
you first restore the thread's interrupted status by calling 
{{Thread.currentThread().interrupt()}}.  Failure to restore interrupted status 
can cause unexpected behavior at other layers of the code that are expecting to 
observe the interrupted status later, such as during shutdown handling.  There 
is a lot of existing Hadoop code that doesn't do this correctly, so reading 
Hadoop code isn't always a good example.  Let's not introduce more of the same 
problems in new code, though; see the sketch after this list.
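A minimal sketch of the pattern from point 10; the latch and the wrapped 
exception type are illustrative, not from the patch:
{code}
import java.io.InterruptedIOException;
import java.util.concurrent.CountDownLatch;

CountDownLatch latch = new CountDownLatch(1);
try {
  latch.await();
} catch (InterruptedException e) {
  // Restore the thread's interrupted status before translating the exception,
  // so shutdown handling and other layers can still observe it.
  Thread.currentThread().interrupt();
  throw new InterruptedIOException("Interrupted while waiting for ADL response");
}
{code}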

Now for some of the broader discussion points:

bq. We do have extended Contract test cases which integrate with the back-end 
service; however, those tests are not pushed as part of this check-in. I will 
create a separate JIRA for the Live mode test cases.

I would strongly prefer to have the contract tests in place from day one as 
part of this patch.  Is that possible?

We have seen multiple times that the contract test suites have been valuable 
for identifying subtle bugs in file system semantics.  It's a much more 
effective way to find and fix these kinds of bugs compared to getting multiple 
hours into a complex system integration test run with higher-level 
applications, only to have it fail because of some file system edge case.

Additionally, it's really vital to have some kind of integration test suite 
available that really connects to the back-end service instead of using mocks.  
We have seen several times that mock-based testing has failed to expose bugs 
that surface when integrating with the real service.  Let's try to run the 
contract tests against the real service, not the mock.  The recent addition of 
the contract tests in WASB in HADOOP-12535 is a good example of how to achieve 
this.

The tests you already have in the patch are still useful, so please keep them.  
The good thing about the mock-based tests is that they'll continue to run on 
Jenkins pre-commit when people post new patches.

bq. The reason behind hiding logging was to switch quickly between Log and 
System.out.println during debugging. The quickest way is to change the code 
rather than a configuration file. We will migrate to SLF4J, but not as part of 
this patch release. Is that fine?

Maybe I'm missing something, but I'm having a hard time justifying this.  Log4J 
makes it easy to reroute its logging to stdout via configuration using the 
ConsoleAppender.  Managing it through configuration instead of code removes the 
latency of the compile/deploy cycle when someone decides to change logging 
settings.  Hadoop developers are familiar and comfortable with Log4J 
configuration, so if they want to debug, their first instinct is going to be to 
change Log4J configuration, not change code.  Can we please switch to SLF4J and 
drop the {{ADLLogger}} class now instead of deferring it to later?
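For example, rerouting one package's logging to stdout takes a few lines of 
log4j.properties, with no recompile; the package name below is a placeholder, 
not the patch's actual package:
{code}
log4j.logger.com.example.adl=DEBUG,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{2}: %m%n
{code}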

bq. 

[jira] [Commented] (HADOOP-12846) Credential Provider Recursive Dependencies

2016-02-26 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170164#comment-15170164
 ] 

Larry McCay commented on HADOOP-12846:
--

Actually that won't need any regex - even though I already wrote it! :)

> Credential Provider Recursive Dependencies
> --
>
> Key: HADOOP-12846
> URL: https://issues.apache.org/jira/browse/HADOOP-12846
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> There are a few credential provider integration points in which the use of a 
> certain type of provider in a certain filesystem causes a recursive infinite 
> loop. 
> For instance, a component such as sqoop can be protecting a db password in a 
> credential provider within the wasb/azure filesystem. Now that HADOOP-12555 
> has introduced the ability to protect the access keys for wasb we suddenly 
> need access to wasb to get the database keys which initiates the attempt to 
> get the access keys from wasb - since there is a provider path configured for 
> sqoop.
> For such integrations, those in which it doesn't make sense to protect the 
> access keys inside the thing that we need the keys to access, we need a 
> solution to avoid this recursion - other than dictating what filesystems can 
> be used by other components.
> This patch proposes the ability to scrub the configured provider path of any 
> provider types that would be incompatible with the integration point. In 
> other words, before calling Configuration.getPassword for the access keys to 
> wasb, we can remove any configured providers that require access to wasb.
> This will require some regex expressions that can be used to identify the 
> configuration of such provider uri's within the provider path parameter.
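A minimal sketch of the proposed scrub, assuming provider URIs of the form 
{{jceks://wasb@container.account/path/creds.jceks}}; the regex and the in-place 
rewrite are illustrative, not the actual patch:
{code}
// Drop credential providers that themselves live inside wasb before calling
// Configuration.getPassword for the wasb access keys.
String path = conf.get("hadoop.security.credential.provider.path", "");
StringBuilder kept = new StringBuilder();
for (String uri : path.split(",")) {
  if (!uri.trim().matches("^jceks://wasb.*")) {
    if (kept.length() > 0) {
      kept.append(',');
    }
    kept.append(uri.trim());
  }
}
conf.set("hadoop.security.credential.provider.path", kept.toString());
{code}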





[jira] [Updated] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12849:
---
Status: Patch Available  (was: Open)

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-12849.000.patch
>
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Updated] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12849:
---
Attachment: HADOOP-12849.000.patch

The v0 patch relaxes the access time assertion, as [HADOOP-12603] did (see 
[~cnauroth]'s analysis above). I went through all the similar tests and found 
two cases.

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-12849.000.patch
>
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Commented] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2016-02-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170136#comment-15170136
 ] 

Kai Zheng commented on HADOOP-11996:


Hi Colin, would it work for you to simply combine gf_util.h into 
erasure_code.h, so that overall it corresponds to erasure_code.c, the wrapper 
for the isal.so lib?

> Native erasure coder facilities based on ISA-L
> --
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch, HADOOP-11996-v7.patch
>
>
> While working on HADOOP-11540 and related issues, it was found useful to write 
> the basic facilities based on the Intel ISA-L library separately from the JNI 
> stuff. It's also easier to debug and troubleshoot, as no JNI or Java code is 
> involved.





[jira] [Commented] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2016-02-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170123#comment-15170123
 ] 

Kai Zheng commented on HADOOP-11996:


Right, that can work, but then it means {{gf_util.c}} still couples with, or 
depends on, {{erasure_code.c}}, because some setup work needs to be done first 
by calling the initialization-related functions in the latter. I thought that 
by splitting you might want {{gf_util.h/c}} to be usable separately.

> Native erasure coder facilities based on ISA-L
> --
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch, HADOOP-11996-v7.patch
>
>
> While working on HADOOP-11540 and related issues, it was found useful to write 
> the basic facilities based on the Intel ISA-L library separately from the JNI 
> stuff. It's also easier to debug and troubleshoot, as no JNI or Java code is 
> involved.





[jira] [Assigned] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-12849:
--

Assignee: Mingliang Liu

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Commented] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170109#comment-15170109
 ] 

Mingliang Liu commented on HADOOP-12849:


Thanks [~cnauroth] very much for pointing out the related issue, which helped 
me a lot in finding the root cause. I think we may relax the 
{{getAccessTime()}} assertion here as well?

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Commented] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170074#comment-15170074
 ] 

Chris Nauroth commented on HADOOP-12849:


This looks similar to HADOOP-12603.  This is an assertion on equality of atime. 
 Unfortunately, the act of checking the inode might inadvertently trigger 
another update of atime.  Assuming the underlying local file system tracks 
atime at 1-second granularity, this can manifest as a 1-second difference in 
expected vs. actual atime value seen by the assertion.  In HADOOP-12603, we 
relaxed some of the assertions in {{SymlinkBaseTest}} to allow for this.  It 
looks like we didn't catch them all though.
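A minimal sketch of what such a relaxation looks like; the names and the 
1-second tolerance are illustrative, not the committed patch:
{code}
// Instead of assertEquals on the exact atime, tolerate a 1-second bump from
// a local file system that tracks atime at 1-second granularity.
long expected = 1456523612000L;  // atime passed to setTimes()
long actual = wrapper.getFileStatus(link).getAccessTime();
assertTrue(Math.abs(actual - expected) <= 1000L);
{code}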

[~liuml07], thank you for the bug report.

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Commented] (HADOOP-12846) Credential Provider Recursive Dependencies

2016-02-26 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170072#comment-15170072
 ] 

Larry McCay commented on HADOOP-12846:
--

Understood.

I'll see what I can do.
This will require a list of regexes rather than just one, as well.
It might be possible to do it with one complicated, dynamically built 
expression, but I'm not sure it is worth it.


> Credential Provider Recursive Dependencies
> --
>
> Key: HADOOP-12846
> URL: https://issues.apache.org/jira/browse/HADOOP-12846
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> There are a few credential provider integration points in which the use of a 
> certain type of provider in a certain filesystem causes a recursive infinite 
> loop. 
> For instance, a component such as sqoop can be protecting a db password in a 
> credential provider within the wasb/azure filesystem. Now that HADOOP-12555 
> has introduced the ability to protect the access keys for wasb we suddenly 
> need access to wasb to get the database keys which initiates the attempt to 
> get the access keys from wasb - since there is a provider path configured for 
> sqoop.
> For such integrations, those in which it doesn't make sense to protect the 
> access keys inside the thing that we need the keys to access, we need a 
> solution to avoid this recursion - other than dictating what filesystems can 
> be used by other components.
> This patch proposes the ability to scrub the configured provider path of any 
> provider types that would be incompatible with the integration point. In 
> other words, before calling Configuration.getPassword for the access keys to 
> wasb, we can remove any configured providers that require access to wasb.
> This will require some regex expressions that can be used to identify the 
> configuration of such provider uri's within the provider path parameter.





[jira] [Commented] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-26 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170031#comment-15170031
 ] 

Li Lu commented on HADOOP-12831:


Latest patch LGTM. I will wait for about a day and then commit. 
[~ste...@apache.org], I believe you were talking about a fix like this, so 
would you please double-check it? Thanks!

> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12831.000.patch, HADOOP-12831.001.patch, 
> HADOOP-12831.002.patch
>
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace
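For completeness, a sketch of the full repro implied by the description; the 
path is a placeholder, and the NPE surfaces from {{FSOutputSummer}}'s 
constructor per the issue title:
{code}
Configuration conf = new Configuration();
conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0);
FileSystem fs = FileSystem.newInstance(URI.create("file:///"), conf);
fs.create(new Path("/tmp/npe-demo"));  // NPE: bytes-per-checksum of 0
{code}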





[jira] [Commented] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170020#comment-15170020
 ] 

Mingliang Liu commented on HADOOP-12849:


I haven't investigated in detail, but it seems that several tests create a file 
named {{link}} which links to a non-existent file. Is it possible that the 
{{link}} file already exists across different tests? If so, we may need unique 
file names along with unique directory names.

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Updated] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12849:
---
Affects Version/s: 3.0.0

> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
>   at 
> org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {quote}
> It happens in recent builds:
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt





[jira] [Updated] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12849:
---
Description: 
*Error Message*

expected:<1456523612000> but was:<1456523613000>

*Stacktrace*

{quote}
java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
at 
org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{quote}

It happens in recent builds:
* 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
* 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
* 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt

  was:
*Error Message*

expected:<1456523612000> but was:<1456523613000>

*Stacktrace*

java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
at 
org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

It happens in recent builds:
* 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
* 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
* 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt


> TestSymlinkLocalFSFileSystem fails intermittently 
> --
>
> Key: HADOOP-12849
> URL: https://issues.apache.org/jira/browse/HADOOP-12849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>
> *Error Message*
> expected:<1456523612000> but was:<1456523613000>
> *Stacktrace*
> {quote}
> java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> 

[jira] [Commented] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-26 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170008#comment-15170008
 ] 

Mingliang Liu commented on HADOOP-12831:


The test failure is not related. Specifically, I filed [HADOOP-12849] to track 
the failing test.

> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12831.000.patch, HADOOP-12831.001.patch, 
> HADOOP-12831.002.patch
>
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12849) TestSymlinkLocalFSFileSystem fails intermittently

2016-02-26 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-12849:
--

 Summary: TestSymlinkLocalFSFileSystem fails intermittently 
 Key: HADOOP-12849
 URL: https://issues.apache.org/jira/browse/HADOOP-12849
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Mingliang Liu


*Error Message*

expected:<1456523612000> but was:<1456523613000>

*Stacktrace*

java.lang.AssertionError: expected:<1456523612000> but was:<1456523613000>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesDanglingLink(SymlinkBaseTest.java:1410)
    at org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesDanglingLink(TestSymlinkLocalFS.java:239)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

It happens in recent builds:
* https://builds.apache.org/job/PreCommit-HADOOP-Build/8732/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesDanglingLink/
* https://builds.apache.org/job/PreCommit-HADOOP-Build/8721/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testSetTimesSymlinkToFile/
* https://builds.apache.org/job/PreCommit-HADOOP-Build/8590/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt

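The one-second skew suggests the local filesystem rounds mtimes to whole-second 
granularity. A minimal sketch of a tolerance-based check that would mask such 
granularity (the helper name and tolerance are illustrative assumptions, not 
the committed fix):

{code}
// Sketch: assert mtimes within a granularity window instead of exact equality.
// GRANULARITY_MS is an assumed local-FS rounding unit, not a measured value.
private static final long GRANULARITY_MS = 1000L;

private static void assertTimesClose(long expected, long actual) {
  long delta = Math.abs(expected - actual);
  org.junit.Assert.assertTrue(
      "mtime differs by " + delta + " ms", delta <= GRANULARITY_MS);
}
{code}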


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2016-02-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169969#comment-15169969
 ] 

Colin Patrick McCabe commented on HADOOP-11996:
---

If {{gf_util.c}} needs to invoke function pointers that are dynamically loaded 
in {{erasure_code.c}} (or any other file), the function pointers can just be 
declared {{extern}} in the header file, right?

> Native erasure coder facilities based on ISA-L
> --
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch, HADOOP-11996-v7.patch
>
>
> While working on HADOOP-11540 and etc., it was found useful to write the 
> basic facilities based on Intel ISA-L library separately from JNI stuff. It's 
> also easy to debug and troubleshooting, as no JNI or Java stuffs are involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12848) Configuration.ParsedTimeDuration.unitFor(String) should do more careful parsing

2016-02-26 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-12848:
--
Hadoop Flags: Incompatible change

> Configuration.ParsedTimeDuration.unitFor(String) should do more careful 
> parsing
> ---
>
> Key: HADOOP-12848
> URL: https://issues.apache.org/jira/browse/HADOOP-12848
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> The unit parsing code is very loosey-goosey.  For example, "2 pigs" will 
> parse as 2 seconds.  "3 hams" will parse as 3 milliseconds.  We might want to 
> tighten that up a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12846) Credential Provider Recursive Dependencies

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169925#comment-15169925
 ] 

Chris Nauroth commented on HADOOP-12846:


[~lmccay], the default scheme (the one returned by the {{FileSystem}}'s 
override of the {{getScheme}} method) continues to work even in the presence of 
additional custom mappings.  Focusing on the default scheme names might be an 
acceptable compromise near-term.

However, I'm wondering if we might be able to cover everything right now 
without too much difficulty by using something like the following pseudo-code.

{code}
credentialProviderPath = conf.get("hadoop.security.credential.provider.path")
fsScheme = parseFileSystemScheme(credentialProviderPath)
fsClass = FileSystem.getFileSystemClass(fsScheme, conf)
if (NativeAzureFileSystem.class.isAssignableFrom(fsClass))
  exclude
{code}

Let me know your thoughts on this.  Thanks!

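A rough, runnable sketch of that pseudo-code, assuming providers are listed 
comma-separated as {{jceks://<fs-scheme>@authority/path}} URIs (the method name 
and the parsing convention are assumptions, not the eventual patch):

{code}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch: true if any configured provider is backed by a FileSystem class
// assignable from excludedFsClass (e.g. NativeAzureFileSystem.class).
static boolean providerPathUsesFs(Configuration conf, Class<?> excludedFsClass)
    throws IOException {
  String path = conf.get("hadoop.security.credential.provider.path", "");
  for (String provider : path.split(",")) {
    if (provider.trim().isEmpty()) {
      continue;
    }
    String authority = URI.create(provider.trim()).getAuthority();
    if (authority == null) {
      continue;  // no nested filesystem, e.g. user:/// providers
    }
    // jceks://hdfs@nn:9000/... -> "hdfs"; jceks://wasb/... -> "wasb"
    int at = authority.indexOf('@');
    String fsScheme = (at >= 0) ? authority.substring(0, at) : authority;
    Class<?> fsClass = FileSystem.getFileSystemClass(fsScheme, conf);
    if (excludedFsClass.isAssignableFrom(fsClass)) {
      return true;
    }
  }
  return false;
}
{code}
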
> Credential Provider Recursive Dependencies
> --
>
> Key: HADOOP-12846
> URL: https://issues.apache.org/jira/browse/HADOOP-12846
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> There are a few credential provider integration points in which the use of a 
> certain type of provider in a certain filesystem causes a recursive infinite 
> loop. 
> For instance, a component such as sqoop can be protecting a db password in a 
> credential provider within the wasb/azure filesystem. Now that HADOOP-12555 
> has introduced the ability to protect the access keys for wasb we suddenly 
> need access to wasb to get the database keys which initiates the attempt to 
> get the access keys from wasb - since there is a provider path configured for 
> sqoop.
> For such integrations, those in which it doesn't make sense to protect the 
> access keys inside the thing that we need the keys to access, we need a 
> solution to avoid this recursion - other than dictating what filesystems can 
> be used by other components.
> This patch proposes the ability to scrub the configured provider path of any 
> provider types that would be incompatible with the integration point. In 
> other words, before calling Configuration.getPassword for the access keys to 
> wasb, we can remove any configured providers that require access to wasb.
> This will require some regex expressions that can be used to identify the 
> configuration of such provider uri's within the provider path parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169921#comment-15169921
 ] 

Hadoop QA commented on HADOOP-12831:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 6s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 10s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 8s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790205/HADOOP-12831.002.patch
 |
| JIRA Issue | HADOOP-12831 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dd5e668108a2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-02-26 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169907#comment-15169907
 ] 

Wei-Chiu Chuang commented on HADOOP-12847:
--

Thanks [~cnauroth] and [~xyao] for comments.
What do you think about supporting both http and https? Should the tool use an 
option to specify the protocol, or should it attempt to connect over https and 
then retry over http?
For authentication, {{KerberosAuthenticator}} automatically falls back to 
simple, so it should not require additional configuration. 

One thing to consider is that an https connection may need additional 
parameters to supply the path to the trust store file.

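For reference, the plain JSSE way to point the default {{HttpsURLConnection}} 
at a custom trust store looks like the sketch below; whether the tool should 
rely on these system properties or on Hadoop's ssl-client.xml is part of the 
open question (path and password are placeholders):

{code}
// Sketch: JSSE system properties read by the default SSLContext.
// Both values are placeholders.
System.setProperty("javax.net.ssl.trustStore", "/etc/hadoop/ssl/truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
{code}
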
> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12848) Configuration.ParsedTimeDuration.unitFor(String) should do more careful parsing

2016-02-26 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169841#comment-15169841
 ] 

Ray Chiang commented on HADOOP-12848:
-

Might want to reserve "bacons" for ns if we're keeping with the swine theme.

> Configuration.ParsedTimeDuration.unitFor(String) should do more careful 
> parsing
> ---
>
> Key: HADOOP-12848
> URL: https://issues.apache.org/jira/browse/HADOOP-12848
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> The unit parsing code is very loosey-goosey.  For example, "2 pigs" will 
> parse as 2 seconds.  "3 hams" will parse as 3 milliseconds.  We might want to 
> tighten that up a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12846) Credential Provider Recursive Dependencies

2016-02-26 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169821#comment-15169821
 ] 

Larry McCay commented on HADOOP-12846:
--

[~cnauroth] - if someone does map a custom scheme name, does the default one 
stop working entirely?
I'm wondering whether we could - at least in the near term - just say that 
provider URIs need to use the default scheme names.

> Credential Provider Recursive Dependencies
> --
>
> Key: HADOOP-12846
> URL: https://issues.apache.org/jira/browse/HADOOP-12846
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> There are a few credential provider integration points in which the use of a 
> certain type of provider in a certain filesystem causes a recursive infinite 
> loop. 
> For instance, a component such as sqoop can be protecting a db password in a 
> credential provider within the wasb/azure filesystem. Now that HADOOP-12555 
> has introduced the ability to protect the access keys for wasb we suddenly 
> need access to wasb to get the database keys which initiates the attempt to 
> get the access keys from wasb - since there is a provider path configured for 
> sqoop.
> For such integrations, those in which it doesn't make sense to protect the 
> access keys inside the thing that we need the keys to access, we need a 
> solution to avoid this recursion - other than dictating what filesystems can 
> be used by other components.
> This patch proposes the ability to scrub the configured provider path of any 
> provider types that would be incompatible with the integration point. In 
> other words, before calling Configuration.getPassword for the access keys to 
> wasb, we can remove any configured providers that require access to wasb.
> This will require some regex expressions that can be used to identify the 
> configuration of such provider uri's within the provider path parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12848) Configuration.ParsedTimeDuration.unitFor(String) should do more careful parsing

2016-02-26 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169776#comment-15169776
 ] 

Robert Kanter commented on HADOOP-12848:


I was discussing with [~templedf] offline and we decided that having a unit of 
pulses or beats would be good for any of the heartbeat configs.  I'm sure there 
are many other thematic units we can use for many of the configs.

> Configuration.ParsedTimeDuration.unitFor(String) should do more careful 
> parsing
> ---
>
> Key: HADOOP-12848
> URL: https://issues.apache.org/jira/browse/HADOOP-12848
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> The unit parsing code is very loosey-goosey.  For example, "2 pigs" will 
> parse as 2 seconds.  "3 hams" will parse as 3 milliseconds.  We might want to 
> tighten that up a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12848) Configuration.ParsedTimeDuration.unitFor(String) should do more careful parsing

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169766#comment-15169766
 ] 

Chris Nauroth commented on HADOOP-12848:


I like the idea of "pigs" and "hams" as standardized units of measurement for 
time.  It actually makes sense: a whole pig is multiple hams.

> Configuration.ParsedTimeDuration.unitFor(String) should do more careful 
> parsing
> ---
>
> Key: HADOOP-12848
> URL: https://issues.apache.org/jira/browse/HADOOP-12848
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> The unit parsing code is very loosey-goosey.  For example, "2 pigs" will 
> parse as 2 seconds.  "3 hams" will parse as 3 milliseconds.  We might want to 
> tighten that up a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12846) Credential Provider Recursive Dependencies

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169750#comment-15169750
 ] 

Chris Nauroth commented on HADOOP-12846:


[~lmccay], thank you for picking this up.

bq. This will require some regex expressions that can be used to identify the 
configuration of such provider uri's within the provider path parameter.

Unfortunately, I don't think regex matching alone would solve the problem 
completely.  The challenge is that our configuration magic allows different 
deployments to use different scheme names in file system URIs to refer to the 
same {{FileSystem}} implementation.

In the example you gave, the "wasb" scheme maps to 
{{org.apache.hadoop.fs.azure.NativeAzureFileSystem}}.  This is the default 
scheme as defined within the code of {{NativeAzureFileSystem}}.  However, it's 
also possible that someone has used custom configuration to map a different 
scheme name to the same class.  For example:

{code}
<property>
  <name>fs.customfs.impl</name>
  <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
</property>
{code}

With that configuration in place, a credential provider URI with 
"jceks://customfs/..." would hit the same problem, but a regex match against 
"wasb" wouldn't exclude it.

I think a complete solution would somehow have to figure out the real mapping 
of scheme to file system class from configuration, and then do the exclusion 
based on class instead of scheme.  The method 
{{FileSystem#getFileSystemClass(String, Configuration)}} might be helpful here.

> Credential Provider Recursive Dependencies
> --
>
> Key: HADOOP-12846
> URL: https://issues.apache.org/jira/browse/HADOOP-12846
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> There are a few credential provider integration points in which the use of a 
> certain type of provider in a certain filesystem causes a recursive infinite 
> loop. 
> For instance, a component such as sqoop can be protecting a db password in a 
> credential provider within the wasb/azure filesystem. Now that HADOOP-12555 
> has introduced the ability to protect the access keys for wasb we suddenly 
> need access to wasb to get the database keys which initiates the attempt to 
> get the access keys from wasb - since there is a provider path configured for 
> sqoop.
> For such integrations, those in which it doesn't make sense to protect the 
> access keys inside the thing that we need the keys to access, we need a 
> solution to avoid this recursion - other than dictating what filesystems can 
> be used by other components.
> This patch proposes the ability to scrub the configured provider path of any 
> provider types that would be incompatible with the integration point. In 
> other words, before calling Configuration.getPassword for the access keys to 
> wasb, we can remove any configured providers that require access to wasb.
> This will require some regex expressions that can be used to identify the 
> configuration of such provider uri's within the provider path parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12556) KafkaSink jar files are created but not copied to target dist

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169727#comment-15169727
 ] 

Chris Nauroth commented on HADOOP-12556:


I'm slightly in favor of option 2: keep it as one big dir.  That gives an easy 
out if someone decides they really do want the whole world by putting 
share/hadoop/tools/lib/* on the classpath.  OTOH, I suppose we could come up 
with a "whole world" shell profile that walks a more granular directory 
structure and gathers everything.

In general, I really like the idea of using shell profiles to solve this 
problem.  We still have a gap in that we don't have equivalent functionality on 
Windows.  I have a hunch that it won't be feasible to offer all of the rich 
features of the full shell rewrite in cmd, but maybe we can do just enough to 
support classpath customization through profiles.

> KafkaSink jar files are created but not copied to target dist
> -
>
> Key: HADOOP-12556
> URL: https://issues.apache.org/jira/browse/HADOOP-12556
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Babak Behzad
>Assignee: Babak Behzad
> Attachments: HADOOP-12556.patch
>
>
> There is a hadoop-kafka artifact missing from hadoop-tools-dist's pom.xml 
> which was causing the compiled Kafka jar files not to be copied to the target 
> dist directory. The new patch adds this in order to complete this fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-02-26 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169725#comment-15169725
 ] 

Xiaoyu Yao commented on HADOOP-12847:
-

Thanks [~cnauroth] and [~xyao] for the heads up. Yes, we discussed this issue 
offline last year. Somehow I forgot to open a ticket for it. It's good to see 
that [~jojochuang] reported it and proposed a fix recently. Let's fix it here.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12846) Credential Provider Recursive Dependencies

2016-02-26 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169716#comment-15169716
 ] 

Larry McCay commented on HADOOP-12846:
--

Given a configuration (command line) such as -D 
hadoop.security.credential.provider.path=jceks://wasb/user/hrt_qa/sqoopdbpasswd.jceks,
 this would result in an infinite loop. With a call to 
ProviderUtils.excludeCredentialProviderTypes(",?jceks://wasb.*,?"), this would 
result in a new configuration object that has no providers in it, and therefore 
no incompatibility at the integration point that looks up access keys. It would 
therefore avoid the infinite loop.

I will have a patch available at some point later today or this evening.

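A minimal sketch of what such a scrub could look like, assuming a 
comma-separated provider path and treating the exclusion as a simple regex 
filter (the method name and semantics here are assumptions ahead of the actual 
patch):

{code}
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;

// Sketch: copy conf, dropping every provider whose URI matches excludeRegex.
static Configuration excludeProviders(Configuration conf, String excludeRegex) {
  Configuration scrubbed = new Configuration(conf);
  String path = conf.get("hadoop.security.credential.provider.path");
  if (path == null) {
    return scrubbed;
  }
  Pattern exclude = Pattern.compile(excludeRegex);
  StringBuilder kept = new StringBuilder();
  for (String provider : path.split(",")) {
    if (!exclude.matcher(provider.trim()).matches()) {
      if (kept.length() > 0) {
        kept.append(',');
      }
      kept.append(provider.trim());
    }
  }
  if (kept.length() == 0) {
    scrubbed.unset("hadoop.security.credential.provider.path");
  } else {
    scrubbed.set("hadoop.security.credential.provider.path", kept.toString());
  }
  return scrubbed;
}
{code}

A caller at the wasb integration point would then invoke something like 
excludeProviders(conf, "jceks://wasb.*") before calling 
Configuration.getPassword for the access keys.
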
> Credential Provider Recursive Dependencies
> --
>
> Key: HADOOP-12846
> URL: https://issues.apache.org/jira/browse/HADOOP-12846
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> There are a few credential provider integration points in which the use of a 
> certain type of provider in a certain filesystem causes a recursive infinite 
> loop. 
> For instance, a component such as sqoop can be protecting a db password in a 
> credential provider within the wasb/azure filesystem. Now that HADOOP-12555 
> has introduced the ability to protect the access keys for wasb we suddenly 
> need access to wasb to get the database keys which initiates the attempt to 
> get the access keys from wasb - since there is a provider path configured for 
> sqoop.
> For such integrations, those in which it doesn't make sense to protect the 
> access keys inside the thing that we need the keys to access, we need a 
> solution to avoid this recursion - other than dictating what filesystems can 
> be used by other components.
> This patch proposes the ability to scrub the configured provider path of any 
> provider types that would be incompatible with the integration point. In 
> other words, before calling Configuration.getPassword for the access keys to 
> wasb, we can remove any configured providers that require access to wasb.
> This will require some regex expressions that can be used to identify the 
> configuration of such provider uri's within the provider path parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12848) Configuration.ParsedTimeDuration.unitFor(String) should do more careful parsing

2016-02-26 Thread Daniel Templeton (JIRA)
Daniel Templeton created HADOOP-12848:
-

 Summary: Configuration.ParsedTimeDuration.unitFor(String) should 
do more careful parsing
 Key: HADOOP-12848
 URL: https://issues.apache.org/jira/browse/HADOOP-12848
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.9.0
Reporter: Daniel Templeton
Assignee: Daniel Templeton


The unit parsing code is very loosey-goosey.  For example, "2 pigs" will parse 
as 2 seconds.  "3 hams" will parse as 3 milliseconds.  We might want to tighten 
that up a bit.

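For illustration, the tightening could look something like the sketch below: 
accept only an exact, anchored unit suffix instead of any word ending in a 
known letter (the suffix set and method are assumptions, not the actual 
{{ParsedTimeDuration}} code):

{code}
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: "2s" parses; "2 pigs" is rejected instead of parsing as seconds.
private static final Pattern DURATION =
    Pattern.compile("(\\d+)\\s*(ns|us|ms|s|m|h|d)");

static long parseStrictToMillis(String value) {
  Matcher m = DURATION.matcher(value.trim());
  if (!m.matches()) {
    throw new IllegalArgumentException("Unparseable duration: " + value);
  }
  long n = Long.parseLong(m.group(1));
  switch (m.group(2)) {
    case "ns": return TimeUnit.NANOSECONDS.toMillis(n);
    case "us": return TimeUnit.MICROSECONDS.toMillis(n);
    case "ms": return n;
    case "s":  return TimeUnit.SECONDS.toMillis(n);
    case "m":  return TimeUnit.MINUTES.toMillis(n);
    case "h":  return TimeUnit.HOURS.toMillis(n);
    default:   return TimeUnit.DAYS.toMillis(n);
  }
}
{code}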


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12709) Deprecate s3:// in branch-2,; cut from trunk

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169698#comment-15169698
 ] 

Chris Nauroth commented on HADOOP-12709:


I'm in favor of the proposal to remove s3 from trunk.  I haven't had time to 
review the patch yet, but I'll put it in my queue.

> Deprecate s3:// in branch-2,; cut from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. while invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> # in Hadoop trunk we really cut it. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12831:
---
Attachment: HADOOP-12831.002.patch

The v2 patch checks the config key in {{ChecksumFileSystem}} to narrow down the 
change, as [~gtCarrera9] suggested.

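Without looking at the patch itself, the guard described would be roughly of 
this shape (the exact key constant, default, and exception type are 
assumptions):

{code}
// Sketch: reject a non-positive bytes-per-checksum up front, instead of
// letting FSOutputSummer fail later with an opaque NPE.
int bytesPerChecksum =
    conf.getInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 512);
if (bytesPerChecksum <= 0) {
  throw new HadoopIllegalArgumentException(
      "bytes per checksum must be positive: " + bytesPerChecksum);
}
{code}
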
> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12831.000.patch, HADOOP-12831.001.patch, 
> HADOOP-12831.002.patch
>
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169626#comment-15169626
 ] 

Hudson commented on HADOOP-12841:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9375 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9375/])
HADOOP-12841. Update s3-related properties in core-default.xml. (lei: rev 
2093acf6b659d5a271b7e97f9b64652d7cf01eef)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Update s3-related properties in core-default.xml
> 
>
> Key: HADOOP-12841
> URL: https://issues.apache.org/jira/browse/HADOOP-12841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 3.0.0, 2.9.0
>
> Attachments: HADOOP-12841.001.patch
>
>
> HADOOP-11670 deprecated 
> {{fs.s3a.awsAccessKeyId}}/{{fs.s3a.awsSecretAccessKey}} in favor of 
> {{fs.s3a.access.key}}/{{fs.s3a.secret.key}} in the code, but did not update 
> core-default.xml. Also, a few S3 related properties are missing.

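For anyone updating code against the renamed keys, the switch is just the 
following (values are placeholders):

{code}
Configuration conf = new Configuration();
// Deprecated since HADOOP-11670: fs.s3a.awsAccessKeyId / fs.s3a.awsSecretAccessKey
conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY");  // placeholder
conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY");  // placeholder
{code}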


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169612#comment-15169612
 ] 

Chris Nauroth commented on HADOOP-12847:


[~jojochuang], thank you for posting the patch.

[~xyao], did you already have a patch in progress on an earlier JIRA for this, 
or am I mistaken?  I just want to figure out if there is a duplicate issue.  If 
not, then we can help code review this one.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169594#comment-15169594
 ] 

Hadoop QA commented on HADOOP-12827:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 19s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
43s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} 

[jira] [Commented] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-26 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169540#comment-15169540
 ] 

Wei-Chiu Chuang commented on HADOOP-12841:
--

Thanks for the review and commit!

> Update s3-related properties in core-default.xml
> 
>
> Key: HADOOP-12841
> URL: https://issues.apache.org/jira/browse/HADOOP-12841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 3.0.0, 2.9.0
>
> Attachments: HADOOP-12841.001.patch
>
>
> HADOOP-11670 deprecated 
> {{fs.s3a.awsAccessKeyId}}/{{fs.s3a.awsSecretAccessKey}} in favor of 
> {{fs.s3a.access.key}}/{{fs.s3a.secret.key}} in the code, but did not update 
> core-default.xml. Also, a few S3 related properties are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-26 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-12841:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.9.0
  3.0.0
Target Version/s: 3.0.0
Tags: s3
  Status: Resolved  (was: Patch Available)

+1. LGTM. Committed to trunk and branch-2.

The test failures are not related; this is a documentation-only change.

Thanks a lot for the work, [~jojochuang].

> Update s3-related properties in core-default.xml
> 
>
> Key: HADOOP-12841
> URL: https://issues.apache.org/jira/browse/HADOOP-12841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 3.0.0, 2.9.0
>
> Attachments: HADOOP-12841.001.patch
>
>
> HADOOP-11670 deprecated 
> {{fs.s3a.awsAccessKeyId}}/{{fs.s3a.awsSecretAccessKey}} in favor of 
> {{fs.s3a.access.key}}/{{fs.s3a.secret.key}} in the code, but did not update 
> core-default.xml. Also, a few S3 related properties are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169523#comment-15169523
 ] 

Chris Nauroth commented on HADOOP-12837:


Yeah, I'm afraid there really isn't a feasible workaround at the file system 
layer right now.  I think you're on the right track in trying to explore 
different filtering techniques on your data set so that it doesn't rely on 
mtime.

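In the meantime, code that must run against both HDFS and S3 can at least avoid 
treating the zero as a real timestamp; a small defensive sketch (the sentinel 
choice is just an illustration):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: report mtime 0 from the store as "unknown" (-1) so callers
// don't filter as if the file were written at the epoch.
static long bestEffortMtime(FileSystem fs, Path path) throws IOException {
  long mtime = fs.getFileStatus(path).getModificationTime();
  return mtime == 0L ? -1L : mtime;
}
{code}
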
> FileStatus.getModificationTime not working on S3
> 
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>
> Hi Team,
> We have observed an issue with the FileStatus.getModificationTime() API on S3 
> filesystem. The method always returns 0.
> I googled for this but couldn't find a solution that fits my use case.
> S3FileStatus seems to be an option; however, I will be using this API on both
> HDFS and S3, so I can't go with it.
> I tried to run the job on:
> * Release label:emr-4.2.0
> * Hadoop distribution:Amazon 2.6.0
> * Hadoop Common jar: hadoop-common-2.6.0.jar
> Please advise if any patch or fix available for this.
> Thanks,
> Jagdish



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-26 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169506#comment-15169506
 ] 

Xiaoyu Yao commented on HADOOP-12827:
-

There is a typo in my suggested revision of hdfs-default.xml; the correct one 
should be:

1. In hdfs-default.xml, we can rephrase the following into something like "The 
value is recommended to be followed by a unit specifier. If no unit specifier 
is given, the default will be milliseconds."

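For context, a small example of the unit-suffix behavior being documented, 
using {{Configuration.getTimeDuration}} (the key name follows this patch's 
description and is an assumption here):

{code}
Configuration conf = new Configuration();
conf.set("dfs.webhdfs.socket.connect-timeout", "120s");  // assumed key name
// A unit suffix on the value wins; a bare number like "120" would fall
// back to the unit given here, i.e. milliseconds.
long connectTimeoutMs = conf.getTimeDuration(
    "dfs.webhdfs.socket.connect-timeout", 60000L, TimeUnit.MILLISECONDS);
{code}
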
> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-26 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169458#comment-15169458
 ] 

Wei-Chiu Chuang commented on HADOOP-12767:
--

Hi, thanks again for the work.
If the 003 patch is for branch-2, please rename it to 
HADOOP-12767-branch-2.004.patch so that Yetus applies it against branch-2.

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Artem Aliev
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch, 
> HADOOP-12767.003.patch, HADOOP-12767.004.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169416#comment-15169416
 ] 

Hadoop QA commented on HADOOP-12767:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 9s {color} 
| {color:red} root-jdk1.8.0_72 with JDK v1.8.0_72 generated 26 new + 739 
unchanged - 0 fixed = 765 total (was 739) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 49s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 26 new + 
735 unchanged - 0 fixed = 761 total (was 735) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 4s 
{color} | {color:red} root: patch generated 4 new + 20 unchanged - 0 fixed = 24 
total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | 

[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-02-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12847:
-
Attachment: HADOOP-12847.001.patch

Rev01: this is an initial patch that uses {{AuthenticatedURL}} to perform 
SPNEGO negotiation with a Kerberized cluster.

I still need to work out a good way to be backward-compatible (i.e. support 
http) and also need better exception handling.

But I have tested it locally: it successfully connects to a Kerberized cluster 
and sets the log level correctly.

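For reviewers unfamiliar with the class, the core {{AuthenticatedURL}} flow 
presumably looks something like this sketch (host, port, and query are 
placeholders):

{code}
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

// Sketch: SPNEGO-negotiated GET against a daemon's /logLevel servlet.
URL url = new URL(
    "https://nn.example.com:50470/logLevel?log=org.example.Foo&level=DEBUG");
AuthenticatedURL.Token token = new AuthenticatedURL.Token();
HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
System.out.println("HTTP " + conn.getResponseCode());
{code}
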
> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-02-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12847 started by Wei-Chiu Chuang.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-02-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12847:
-
Summary: hadoop daemonlog should support https and SPNEGO for Kerberized 
cluster  (was: hadoop daemonlog should support https for Kerberized cluster)

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12847) hadoop daemonlog should support https for Kerberized cluster

2016-02-26 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12847:


 Summary: hadoop daemonlog should support https for Kerberized 
cluster
 Key: HADOOP-12847
 URL: https://issues.apache.org/jira/browse/HADOOP-12847
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


{{hadoop daemonlog}} is a simple, yet useful tool for debugging.
However, it does not support https, nor does it support a Kerberized Hadoop 
cluster.

Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation with 
a Kerberized name node web ui. It will also fall back to simple authentication 
if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12846) Credential Provider Recursive Dependencies

2016-02-26 Thread Larry McCay (JIRA)
Larry McCay created HADOOP-12846:


 Summary: Credential Provider Recursive Dependencies
 Key: HADOOP-12846
 URL: https://issues.apache.org/jira/browse/HADOOP-12846
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Larry McCay
Assignee: Larry McCay


There are a few credential provider integration points in which the use of a 
certain type of provider in a certain filesystem causes a recursive infinite 
loop. 

For instance, a component such as sqoop can be protecting a db password in a 
credential provider within the wasb/azure filesystem. Now that HADOOP-12555 has 
introduced the ability to protect the access keys for wasb we suddenly need 
access to wasb to get the database keys which initiates the attempt to get the 
access keys from wasb - since there is a provider path configured for sqoop.

For such integrations, those in which it doesn't make sense to protect the 
access keys inside the thing that we need the keys to access, we need a 
solution to avoid this recursion - other than dictating what filesystems can be 
used by other components.

This patch proposes the ability to scrub the configured provider path of any 
provider types that would be incompatible with the integration point. In other 
words, before calling Configuration.getPassword for the access keys to wasb, we 
can remove any configured providers that require access to wasb.

This will require some regex expressions that can be used to identify the 
configuration of such provider uri's within the provider path parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12556) KafkaSink jar files are created but not copied to target dist

2016-02-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169320#comment-15169320
 ] 

Allen Wittenauer commented on HADOOP-12556:
---

cc: [~cnauroth] since I think he'll be interested as well.

The more I think about this situation and others (e.g., HADOOP-12721), the more 
I'm inclined to think hadoop-tools-dist needs to be revisited now that it's 
getting more and more case-specific content.  With the existence of shell 
profiles in trunk, this gets significantly easier...

Two potential options:
* break apart hadoop-tools-dist into multiple directories, and create a shell 
profile that pulls in that functionality's entire dir.
* keep hadoop-tools-dist as one big dir (thus keeping it bw compat, but still 
potentially messy), and build a tool that creates shell profiles based upon the 
maven dependency trees to list the specific jars needed by each functionality.

In both cases, to activate, simply copy shellprofile.d/foo.sh.example to 
HADOOP_CONF_DIR/shellprofile.d/foo.sh.  Or uncomment.  Or whatever.  In the 
end, the shell profile will add only the necessary components to the classpath.  
And since it is being done via shell profiles: a) those jars will always be 
present, b) it doesn't interfere with any of the other env var settings that 
users typically modify, and c) it should be bw compat with the entire universe 
so long as users honor the output of the various classpath commands.



> KafkaSink jar files are created but not copied to target dist
> -
>
> Key: HADOOP-12556
> URL: https://issues.apache.org/jira/browse/HADOOP-12556
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Babak Behzad
>Assignee: Babak Behzad
> Attachments: HADOOP-12556.patch
>
>
> There is a hadoop-kafka artifact missing from hadoop-tools-dist's pom.xml 
> which was causing the compiled Kafka jar files not to be copied to the target 
> dist directory. The new patch adds this in order to complete this fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-26 Thread Artem Aliev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169298#comment-15169298
 ] 

Artem Aliev commented on HADOOP-12767:
--

[~jojochuang], thank you. The 003 patch was for branch-2.7, since I'm running 
Hadoop 2.7.1. The 004 patch is for trunk.


> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Artem Aliev
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch, 
> HADOOP-12767.003.patch, HADOOP-12767.004.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8818) Should use equals() rather than == to compare String or Text in MD5MD5CRC32FileChecksum and TFileDumper

2016-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169297#comment-15169297
 ] 

Hudson commented on HADOOP-8818:


FAILURE: Integrated in Hadoop-trunk-Commit #9373 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9373/])
Moved HADOOP-8818 from 3.0.0 to 2.7.3 in CHANGES.txt. (aajisaka: rev 
f0de733ca04332aed7b455b34e8c954588600a24)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Should use equals() rather than == to compare String or Text in 
> MD5MD5CRC32FileChecksum and TFileDumper
> ---
>
> Key: HADOOP-8818
> URL: https://issues.apache.org/jira/browse/HADOOP-8818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, io
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-8818.patch
>
>
> Should use equals() rather than == to compare String or Text in 
> MD5MD5CRC32FileChecksum and TFileDumper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-26 Thread Artem Aliev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Aliev reassigned HADOOP-12767:


Assignee: Artem Aliev  (was: Wei-Chiu Chuang)

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Artem Aliev
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch, 
> HADOOP-12767.003.patch, HADOOP-12767.004.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-26 Thread Artem Aliev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Aliev updated HADOOP-12767:
-
Attachment: HADOOP-12767.004.patch

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch, 
> HADOOP-12767.003.patch, HADOOP-12767.004.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12844) S3A fails on IOException

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169289#comment-15169289
 ] 

Hadoop QA commented on HADOOP-12844:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790151/HADOOP-12844.001.patch
 |
| JIRA Issue | HADOOP-12844 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 14b3643a6bdf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-02-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169285#comment-15169285
 ] 

Akira AJISAKA commented on HADOOP-12843:


Also, 1 warning was fixed in trunk by HADOOP-9121. I'll pull this as well.

> Fix findbugs warnings in hadoop-common (branch-2)
> -
>
> Key: HADOOP-12843
> URL: https://issues.apache.org/jira/browse/HADOOP-12843
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: findbugsHtml.html
>
>
> There are 5 findbugs warnings in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-02-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-12843:
--

Assignee: Akira AJISAKA

> Fix findbugs warnings in hadoop-common (branch-2)
> -
>
> Key: HADOOP-12843
> URL: https://issues.apache.org/jira/browse/HADOOP-12843
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: findbugsHtml.html
>
>
> There are 5 findbugs warnings in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-02-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169273#comment-15169273
 ] 

Akira AJISAKA commented on HADOOP-12843:


Backported HADOOP-8818 to branch-2, 2.8, and 2.7. Now there are 4 findbugs 
warnings.

> Fix findbugs warnings in hadoop-common (branch-2)
> -
>
> Key: HADOOP-12843
> URL: https://issues.apache.org/jira/browse/HADOOP-12843
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>  Labels: newbie
> Attachments: findbugsHtml.html
>
>
> There are 5 findbugs warnings in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12845) Improve Openssl library finding on RedHat system

2016-02-26 Thread Sebastien Barrier (JIRA)
Sebastien Barrier created HADOOP-12845:
--

 Summary: Improve Openssl library finding on RedHat system
 Key: HADOOP-12845
 URL: https://issues.apache.org/jira/browse/HADOOP-12845
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.2
Reporter: Sebastien Barrier
Priority: Minor


The issue is related to [https://issues.apache.org/jira/browse/HADOOP-11216].

BUILDING.txt specifies: "Use -Drequire.openssl to fail the build if 
libcrypto.so is not found".

On RedHat systems (Fedora/CentOS/...), /usr/lib64/libcrypto.so is a symlink 
provided by the openssl-devel RPM package. That is fine on a build/development 
host, but devel packages are not supposed to be installed on production 
servers (the Hadoop cluster), and the openssl RPM package doesn't include that 
symlink, which is a problem.

{noformat}
# hadoop checknative -a
...
openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared 
object file: No such file or directory)!
{noformat}

There is only /usr/lib64/libcrypto.so.10, but no /usr/lib64/libcrypto.so.

Also, trying to compile with "-Drequire.openssl 
-Dopenssl.lib=/usr/lib64/libcrypto.so.10" fails.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8818) Should use equals() rather than == to compare String or Text in MD5MD5CRC32FileChecksum and TFileDumper

2016-02-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8818:
--
Fix Version/s: (was: 3.0.0)
   2.7.3
   2.8.0

Backported this to branch-2, branch-2.8, and branch-2.7.

> Should use equals() rather than == to compare String or Text in 
> MD5MD5CRC32FileChecksum and TFileDumper
> ---
>
> Key: HADOOP-8818
> URL: https://issues.apache.org/jira/browse/HADOOP-8818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, io
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-8818.patch
>
>
> Should use equals() rather than == to compare String or Text in 
> MD5MD5CRC32FileChecksum and TFileDumper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8818) Should use equals() rather than == to compare String or Text in MD5MD5CRC32FileChecksum and TFileDumper

2016-02-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169258#comment-15169258
 ] 

Akira AJISAKA commented on HADOOP-8818:
---

Backporting this issue to branch-2, 2.8, 2.7 to fix HADOOP-12843.
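
For context, the bug class being fixed here is the classic Java 
identity-vs-equality pitfall; a minimal illustration:

{code}
public class EqualsDemo {
  public static void main(String[] args) {
    String a = new String("md5");
    String b = "md5";
    // == compares object identity, so two distinct String objects
    // holding the same characters compare as "different".
    System.out.println(a == b);      // false
    // equals() compares contents, which is what the checksum and
    // TFile code actually needs.
    System.out.println(a.equals(b)); // true
  }
}
{code}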

> Should use equals() rather than == to compare String or Text in 
> MD5MD5CRC32FileChecksum and TFileDumper
> ---
>
> Key: HADOOP-8818
> URL: https://issues.apache.org/jira/browse/HADOOP-8818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, io
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-8818.patch
>
>
> Should use equals() rather than == to compare String or Text in 
> MD5MD5CRC32FileChecksum and TFileDumper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12844) S3A fails on IOException

2016-02-26 Thread Pieter Reuse (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pieter Reuse updated HADOOP-12844:
--
Status: Patch Available  (was: Open)

> S3A fails on IOException
> 
>
> Key: HADOOP-12844
> URL: https://issues.apache.org/jira/browse/HADOOP-12844
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2, 2.7.1
>Reporter: Pieter Reuse
>Assignee: Pieter Reuse
> Attachments: HADOOP-12844.001.patch
>
>
> This simple patch catches IOExceptions in S3AInputStream.read(byte[] buf, int 
> off, int len) and reopens the connection on the same location as it was 
> before the exception.
> This is similar to the functionality introduced in S3N in 
> [HADOOP-6254|https://issues.apache.org/jira/browse/HADOOP-6254], for exactly 
> the same reason.
> Patch developed in cooperation with [~emres].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-02-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169251#comment-15169251
 ] 

Akira AJISAKA commented on HADOOP-12843:


1 warning was fixed in trunk by HADOOP-8818. I'll pull this to branch-2, 2.8, 
and 2.7.

> Fix findbugs warnings in hadoop-common (branch-2)
> -
>
> Key: HADOOP-12843
> URL: https://issues.apache.org/jira/browse/HADOOP-12843
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>  Labels: newbie
> Attachments: findbugsHtml.html
>
>
> There are 5 findbugs warnings in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-26 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Attachment: HADOOP-12827.002.patch

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.
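
As an illustrative sketch of the mechanism (the property names below are 
hypothetical placeholders, not necessarily the ones used in the patch), the 
client-side wiring could look roughly like this:

{code}
import java.net.URL;
import java.net.URLConnection;
import org.apache.hadoop.conf.Configuration;

public class TimeoutDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical keys; when absent, we fall back to the old fixed 60s
    // values, so behaviour is unchanged unless hdfs-site.xml overrides them.
    int connectMs = conf.getInt("dfs.webhdfs.socket.connect-timeout.ms", 60000);
    int readMs = conf.getInt("dfs.webhdfs.socket.read-timeout.ms", 60000);
    URLConnection conn =
        new URL("http://namenode.example.com:50070/webhdfs/v1/").openConnection();
    conn.setConnectTimeout(connectMs);
    conn.setReadTimeout(readMs);
  }
}
{code}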



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-26 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Status: Patch Available  (was: Open)

The test failures are unexpected, as they are not in tests I changed.  Is 
this perhaps due to noisy tests?
The failures were caused by bind() failing with a port-in-use error: is this a 
failure to isolate tests that are expected to be independent?
Resubmitting the same patch to see whether the same tests fail deterministically.

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12844) S3A fails on IOException

2016-02-26 Thread Pieter Reuse (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pieter Reuse updated HADOOP-12844:
--
Attachment: HADOOP-12844.001.patch

> S3A fails on IOException
> 
>
> Key: HADOOP-12844
> URL: https://issues.apache.org/jira/browse/HADOOP-12844
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.1, 2.7.2
>Reporter: Pieter Reuse
>Assignee: Pieter Reuse
> Attachments: HADOOP-12844.001.patch
>
>
> This simple patch catches IOExceptions in S3AInputStream.read(byte[] buf, int 
> off, int len) and reopens the connection on the same location as it was 
> before the exception.
> This is similar to the functionality introduced in S3N in 
> [HADOOP-6254|https://issues.apache.org/jira/browse/HADOOP-6254], for exactly 
> the same reason.
> Patch developed in cooperation with [~emres].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-26 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Status: Open  (was: Patch Available)

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12844) S3A fails on IOException

2016-02-26 Thread Pieter Reuse (JIRA)
Pieter Reuse created HADOOP-12844:
-

 Summary: S3A fails on IOException
 Key: HADOOP-12844
 URL: https://issues.apache.org/jira/browse/HADOOP-12844
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.2, 2.7.1
Reporter: Pieter Reuse
Assignee: Pieter Reuse


This simple patch catches IOExceptions in S3AInputStream.read(byte[] buf, int 
off, int len) and reopens the connection at the same location as it was before 
the exception.
This is similar to the functionality introduced in S3N in 
[HADOOP-6254|https://issues.apache.org/jira/browse/HADOOP-6254], for exactly 
the same reason.

Patch developed in cooperation with [~emres].
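
A condensed sketch of the retry idea (not the actual patch; {{openAt}} stands 
in for S3AInputStream's real seek-and-reopen logic):

{code}
import java.io.IOException;
import java.io.InputStream;

abstract class RetryingInputStream extends InputStream {
  private InputStream in;  // underlying S3 object stream
  private long pos;        // current offset into the object

  RetryingInputStream(InputStream initial) {
    this.in = initial;
  }

  /** Assumed helper: open the object stream starting at the given offset. */
  protected abstract InputStream openAt(long offset) throws IOException;

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    try {
      return track(in.read(buf, off, len));
    } catch (IOException e) {
      // Connection dropped mid-read: reopen at the last good position
      // and retry the read once.
      in = openAt(pos);
      return track(in.read(buf, off, len));
    }
  }

  @Override
  public int read() throws IOException {
    byte[] one = new byte[1];
    return read(one, 0, 1) == -1 ? -1 : (one[0] & 0xff);
  }

  private int track(int n) {
    if (n > 0) {
      pos += n;  // remember how far we got, for the reopen
    }
    return n;
  }
}
{code}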



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-02-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15169221#comment-15169221
 ] 

Jian He commented on HADOOP-12622:
--

lgtm, thanks Junping !

> RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on 
> retry failed reason or the log from RMProxy's retry could be very misleading.
> --
>
> Key: HADOOP-12622
> URL: https://issues.apache.org/jira/browse/HADOOP-12622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: HADOOP-12622-v2.patch, HADOOP-12622-v3.1.patch, 
> HADOOP-12622-v3.patch, HADOOP-12622-v4.patch, HADOOP-12622-v5.patch, 
> HADOOP-12622-v6.patch, HADOOP-12622.patch
>
>
> In debugging a NM retry connection to RM (non-HA), the NM log during RM down 
> time is very misleading:
> {noformat}
> 2015-12-07 11:37:14,098 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:15,099 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:17,103 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:18,105 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:19,107 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:20,109 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:21,112 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:22,113 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:23,115 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:54,120 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:55,121 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:56,123 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:57,125 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:58,126 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:59,128 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> 

[jira] [Commented] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-26 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168990#comment-15168990
 ] 

Wei-Chiu Chuang commented on HADOOP-12711:
--

Thanks again for the joint effort! [~iwasakims]

> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12711-branch-2.002.patch, 
> HADOOP-12711-branch-2.003.patch, HADOOP-12711.001.patch, 
> HADOOP-12711.002.patch, HADOOP-12711.branch-2.004.patch
>
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}
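
For reference, one common replacement pattern for {{URIUtil.encodePath}} using 
only the JDK (not necessarily what this patch does):

{code}
import java.net.URI;
import java.net.URISyntaxException;

public class PathEncoder {
  // The multi-argument URI constructor percent-encodes characters that
  // are illegal in a path component, much like URIUtil.encodePath did.
  public static String encodePath(String path) throws URISyntaxException {
    return new URI(null, null, path, null).toASCIIString();
  }

  public static void main(String[] args) throws URISyntaxException {
    // prints /user/web%20hdfs/file%231
    System.out.println(encodePath("/user/web hdfs/file#1"));
  }
}
{code}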



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2016-02-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168859#comment-15168859
 ] 

Kai Zheng commented on HADOOP-11996:


Hi [~cmccabe],

While working on the change, I realized the following suggestion may complicate 
the {{dynamic function loading}} logic, so I'm not sure about it now. It's not 
a rare case that functions declared in multiple *.h header files are 
implemented in a single *.c source file. Maybe {{erasure_code.c}} could get a 
better name so that it can contain those GF-related functions as well? But I 
have no idea what would be better.
bq. Put the functions described in gf_util.h in a source code file named 
gf_util.c, not in erasure_code.c

Wdyt about this? Thanks.

> Native erasure coder facilities based on ISA-L
> --
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch, HADOOP-11996-v7.patch
>
>
> While working on HADOOP-11540 and etc., it was found useful to write the 
> basic facilities based on Intel ISA-L library separately from JNI stuff. It's 
> also easy to debug and troubleshooting, as no JNI or Java stuffs are involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168856#comment-15168856
 ] 

Hadoop QA commented on HADOOP-12622:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 44 unchanged - 2 fixed = 44 total (was 46) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 51s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790107/HADOOP-12622-v6.patch 
|
| JIRA Issue | HADOOP-12622 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 789339129c8d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Commented] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168806#comment-15168806
 ] 

Steve Loughran commented on HADOOP-12837:
-

Afraid this isn't going to work for an object store; object stores aren't real 
filesystems. HADOOP-9545 proposes exposing that fact and adding some 
object-store-specific stuff, but it's not in yet.

I would avoid doing a rename() against object store files too; that is often 
implemented client-side...

> FileStatus.getModificationTime not working on S3
> 
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>
> Hi Team,
> We have observed an issue with the FileStatus.getModificationTime() API on 
> the S3 filesystem: the method always returns 0.
> I googled for this, however I couldn't find any solution that would fit in 
> my scheme of things. S3FileStatus seems to be an option, however I would be 
> using this API on both HDFS and S3, so I can't go for it.
> I tried to run the job on:
> * Release label:emr-4.2.0
> * Hadoop distribution:Amazon 2.6.0
> * Hadoop Common jar: hadoop-common-2.6.0.jar
> Please advise if any patch or fix is available for this.
> Thanks,
> Jagdish



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12556) KafkaSink jar files are created but not copied to target dist

2016-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168800#comment-15168800
 ] 

Steve Loughran commented on HADOOP-12556:
-

I'd like it so that if you explicitly ask for hadoop-kafka then you get 
everything you need. My own concern is to make sure that if you depend only on 
core stuff (hadoop-client), you don't get another set of classpath problems. 
And we have so much cruft (why does junit get into mapreduce/lib? Because it 
did in the past, and if you pull it now, things break ... )

You're talking about hadoop-tools-dist, aren't you? If so, where do the JARs 
end up on a release build?

> KafkaSink jar files are created but not copied to target dist
> -
>
> Key: HADOOP-12556
> URL: https://issues.apache.org/jira/browse/HADOOP-12556
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Babak Behzad
>Assignee: Babak Behzad
> Attachments: HADOOP-12556.patch
>
>
> There is a hadoop-kafka artifact missing from hadoop-tools-dist's pom.xml 
> which was causing the compiled Kafka jar files not to be copied to the target 
> dist directory. The new patch adds this in order to complete this fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-02-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12622:

Attachment: HADOOP-12622-v6.patch

> RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on 
> retry failed reason or the log from RMProxy's retry could be very misleading.
> --
>
> Key: HADOOP-12622
> URL: https://issues.apache.org/jira/browse/HADOOP-12622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: HADOOP-12622-v2.patch, HADOOP-12622-v3.1.patch, 
> HADOOP-12622-v3.patch, HADOOP-12622-v4.patch, HADOOP-12622-v5.patch, 
> HADOOP-12622-v6.patch, HADOOP-12622.patch
>
>
> In debugging a NM retry connection to RM (non-HA), the NM log during RM down 
> time is very misleading:
> {noformat}
> 2015-12-07 11:37:14,098 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:15,099 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:17,103 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:18,105 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:19,107 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:20,109 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:21,112 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:22,113 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:23,115 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:54,120 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:55,121 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:56,123 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:57,125 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:58,126 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:59,128 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> 

[jira] [Commented] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-02-26 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168799#comment-15168799
 ] 

Junping Du commented on HADOOP-12622:
-

Thanks for the review and comments, Jian. Uploaded v6 patch to incorporate this 
comment.

> RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on 
> retry failed reason or the log from RMProxy's retry could be very misleading.
> --
>
> Key: HADOOP-12622
> URL: https://issues.apache.org/jira/browse/HADOOP-12622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: HADOOP-12622-v2.patch, HADOOP-12622-v3.1.patch, 
> HADOOP-12622-v3.patch, HADOOP-12622-v4.patch, HADOOP-12622-v5.patch, 
> HADOOP-12622-v6.patch, HADOOP-12622.patch
>
>
> In debugging a NM retry connection to RM (non-HA), the NM log during RM down 
> time is very misleading:
> {noformat}
> 2015-12-07 11:37:14,098 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:15,099 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:17,103 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:18,105 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:19,107 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:20,109 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:21,112 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:22,113 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:23,115 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:54,120 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:55,121 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:56,123 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:57,125 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:58,126 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:59,128 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 

[jira] [Commented] (HADOOP-10315) Log the original exception when getGroups() fail in UGI.

2016-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168784#comment-15168784
 ] 

Steve Loughran commented on HADOOP-10315:
-

I think we need to differentiate "not in the -ve cache" from "some other 
problem". One merits a stack trace, the other does not. For that, maybe a 
different IOE subclass for the cache miss?
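
A minimal sketch of that proposal (the exception name is made up for 
illustration):

{code}
import java.io.IOException;
import org.apache.hadoop.security.Groups;

/** Hypothetical subtype marking the expected "user has no groups" case. */
class NoGroupsFoundException extends IOException {
  NoGroupsFoundException(String user) {
    super("No groups found for user " + user);
  }
}

class GroupLookup {
  /** Quiet log for the expected miss, a full stack trace for anything else. */
  static String[] groupNames(Groups groups, String user) {
    try {
      return groups.getGroups(user).toArray(new String[0]);
    } catch (NoGroupsFoundException e) {
      System.out.println("No groups available for user " + user);
      return new String[0];
    } catch (IOException e) {
      e.printStackTrace();  // unexpected failure merits the trace
      return new String[0];
    }
  }
}
{code}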

> Log the original exception when getGroups() fail in UGI.
> 
>
> Key: HADOOP-10315
> URL: https://issues.apache.org/jira/browse/HADOOP-10315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.10, 2.2.0
>Reporter: Kihwal Lee
>Assignee: Ted Yu
> Attachments: HADOOP-10315.v1.patch
>
>
> In UserGroupInformation, getGroupNames() swallows the original exception. 
> There have been many occasions that more information on the original 
> exception could have helped.
> {code}
>   public synchronized String[] getGroupNames() {
>     ensureInitialized();
>     try {
>       List<String> result = groups.getGroups(getShortUserName());
>       return result.toArray(new String[result.size()]);
>     } catch (IOException ie) {
>       LOG.warn("No groups available for user " + getShortUserName());
>       return new String[0];
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12840) UGI to log@ debug stack traces when failing to find groups for a user

2016-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12840.
-
Resolution: Duplicate

duplicate of HADOOP-10315

> UGI to log@ debug stack traces when failing to find groups for a user
> -
>
> Key: HADOOP-12840
> URL: https://issues.apache.org/jira/browse/HADOOP-12840
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> If {{UGI.getGroupNames()}} catches an IOE raised by 
> {{groups.getGroups(getShortUserName())}} then it simply logs @ debug "No 
> groups available for user". The text from the caught exception and stack 
> trace are not printed.
> One IOException raised is the explicit "user not in groups" exception, but 
> there could be other causes too —if that happens the entire problem will be 
> missed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-26 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12842:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> LocalFileSystem checksum file creation fails when source filename contains a 
> colon
> --
>
> Key: HADOOP-12842
> URL: https://issues.apache.org/jira/browse/HADOOP-12842
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Minor
> Attachments: HADOOP-12842_trunk.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In most FileSystems you can create a file with a colon character in it, 
> including HDFS. If you try to use the LocalFileSystem implementation (which 
> extends ChecksumFileSystem) to create a file with a colon character in it you 
> get a URISyntaxException during the creation of the checksum file because of 
> the use of {code}new Path(path, checksumFile){code} where checksumFile will 
> be considered as a relative path during URI parsing due to starting with a 
> "." and containing a ":" in the path.  
> Running the following test inside TestLocalFileSystem causes the failure:
> {code}
> @Test
> public void testColonFilePath() throws Exception {
>   FileSystem fs = fileSys;
>   Path file = new Path(TEST_ROOT_DIR + Path.SEPARATOR + "fileWith:InIt");
>   fs.delete(file, true);
>   FSDataOutputStream out = fs.create(file);
>   try {
>     out.write("text1".getBytes());
>   } finally {
>     out.close();
>   }
> }
> {code}
> With the following stack trace:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: .fileWith:InIt.crc
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.<init>(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:201)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:170)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:92)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:88)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:397)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:921)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:902)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.testColonFilePath(TestLocalFileSystem.java:625)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-26 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168637#comment-15168637
 ] 

Masatake Iwasaki commented on HADOOP-12842:
---

I'm going to close this as a duplicate of HADOOP-3257. There is HADOOP-7945 for 
the enforcement.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12552) Fix undeclared/unused dependency to httpclient

2016-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168634#comment-15168634
 ] 

Hadoop QA commented on HADOOP-12552:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 45s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 37s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s {color} | {color:green} hadoop-openstack in the patch passed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 54s {color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s {color} | {color:green} hadoop-openstack in the patch passed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 32s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12790080/HADOOP-12552.002.patch |
| JIRA Issue | HADOOP-12552 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit