[jira] [Updated] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-12544:

Attachment: HADOOP-12544.1.patch

Hi [~zhz], please help review the patch. Thanks!

> Erasure Coding: create dummy raw coder
> --
>
> Key: HADOOP-12544
> URL: https://issues.apache.org/jira/browse/HADOOP-12544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12544.1.patch
>
>
> Create a dummy raw coder which does no computation and simply returns zero 
> bytes.
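The idea can be sketched as follows. This is an illustrative stand-in only: real Hadoop raw coders implement the erasure-coder interfaces in hadoop-common, and the class and method names below are simplified assumptions, not the API of the attached patch.

```java
import java.nio.ByteBuffer;

// Minimal sketch of a "dummy" raw encoder in the spirit of this issue:
// it performs no coding computation and simply emits zero bytes as parity.
// Useful for isolating erasure-coding overhead in tests and benchmarks.
public class DummyRawEncoder {
    // Ignores the data inputs entirely; fills every output buffer with zeros.
    public void encode(ByteBuffer[] inputs, ByteBuffer[] outputs) {
        for (ByteBuffer out : outputs) {
            while (out.hasRemaining()) {
                out.put((byte) 0);
            }
        }
    }
}
```

Because the coder does no real work, decoded data is meaningless; such a coder is only suitable for tests that exercise the surrounding I/O paths.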



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12542) TestDNS fails on Windows after HADOOP-12437.

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986947#comment-14986947
 ] 

Hudson commented on HADOOP-12542:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1355 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1355/])
HADOOP-12542. TestDNS fails on Windows after HADOOP-12437. Contributed 
(cnauroth: rev 957f0311a160afb40dbb0619f455445b4f5d1e32)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestDNS.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestDNS fails on Windows after HADOOP-12437.
> 
>
> Key: HADOOP-12542
> URL: https://issues.apache.org/jira/browse/HADOOP-12542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HADOOP-12542.001.patch
>
>
> HADOOP-12437 added several new tests covering functionality of resolving host 
> names based on an alternate network interface.  These tests are failing on 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12473) distcp's ignoring failures should be mutually exclusive with the atomic option

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986966#comment-14986966
 ] 

Hadoop QA commented on HADOOP-12473:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 49s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 0s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770269/HADOOP-12473.002.patch
 |
| JIRA Issue | HADOOP-12473 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux e780ee62a63b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
 |
| git revision | trunk / 957f031 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 

[jira] [Updated] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-12544:

Status: Patch Available  (was: Open)

> Erasure Coding: create dummy raw coder
> --
>
> Key: HADOOP-12544
> URL: https://issues.apache.org/jira/browse/HADOOP-12544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12544.1.patch
>
>
> Create a dummy raw coder which does no computation and simply returns zero 
> bytes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986982#comment-14986982
 ] 

Hadoop QA commented on HADOOP-12544:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 0, now 1). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 10s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 31s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.ipc.TestDecayRpcScheduler |
|   | hadoop.metrics2.sink.TestFileSink |
| JDK v1.7.0_79 Failed junit tests | hadoop.metrics2.sink.TestFileSink |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770274/HADOOP-12544.1.patch |
| JIRA Issue | HADOOP-12544 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 1941f4367338 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
 

[jira] [Commented] (HADOOP-12437) Allow SecurityUtil to lookup alternate hostnames

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986946#comment-14986946
 ] 

Hudson commented on HADOOP-12437:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1355 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1355/])
HADOOP-12542. TestDNS fails on Windows after HADOOP-12437. Contributed 
(cnauroth: rev 957f0311a160afb40dbb0619f455445b4f5d1e32)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestDNS.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Allow SecurityUtil to lookup alternate hostnames 
> -
>
> Key: HADOOP-12437
> URL: https://issues.apache.org/jira/browse/HADOOP-12437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net, security
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HADOOP-12437.04.patch, HADOOP-12437.05.patch, 
> HDFS-9109.01.patch, HDFS-9109.02.patch, HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.
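To make the limitation concrete: a PTR query resolves the reversed-octet in-addr.arpa form of the address and goes straight to the configured DNS servers, so {{/etc/hosts}} entries never enter the picture. The helper below is a hypothetical illustration (not Hadoop's {{DNS}} class) that just builds the PTR name such a query would use:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseName {
    // Builds the in-addr.arpa name used by a reverse (PTR) DNS lookup.
    // Octets are reversed, e.g. 10.1.2.3 -> 3.2.1.10.in-addr.arpa.
    static String ptrName(InetAddress addr) {
        byte[] b = addr.getAddress();
        return (b[3] & 0xff) + "." + (b[2] & 0xff) + "." + (b[1] & 0xff) + "."
                + (b[0] & 0xff) + ".in-addr.arpa";
    }

    public static void main(String[] args) throws UnknownHostException {
        // getByName with an IP literal does no lookup; we only format the name.
        System.out.println(ptrName(InetAddress.getByName("10.1.2.3")));
    }
}
```

A resolver that answers this query consults DNS zones only, which is why hosts-file aliases are invisible to it while {{InetAddress}}-based forward lookups see them.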



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12487) DomainSocket.close() assumes incorrect Linux behaviour

2015-11-03 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986989#comment-14986989
 ] 

Alan Burlison commented on HADOOP-12487:


From the long conversation on the Linux mailing list it seems that BSD has a 
kernel race in this area and is switching to the Linux semantics. There was 
talk of using revoke(2) on BSD as an alternative to shutdown(2) on Linux, but 
it's not clear whether that works on listening sockets. I don't know what 
happens on AIX or OS X.

There may be a cross-platform way of shutting down DomainSocket sockets: set 
the CloseableReferenceCount 'closed' flag, then issue a series of dummy 
connect(2)/close(2) calls to the socket to knock any other threads out of 
accept(2), at which point they check the closed flag and terminate. However, 
that's a fairly large change, and I think it would need investigation to make 
sure it works as expected on all the platforms of interest before making it.
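The flag-then-dummy-connect pattern can be illustrated with an ordinary TCP ServerSocket in Java. DomainSocket itself is native code, so this is only an analogy under assumed names, not the proposed change:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicBoolean;

public class WakeupAccept {
    // Returns true if the accepting thread terminated after the dummy connect.
    static boolean shutdownViaDummyConnect() throws Exception {
        AtomicBoolean closed = new AtomicBoolean(false);
        ServerSocket server =
                new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
        Thread acceptor = new Thread(() -> {
            try {
                while (!closed.get()) {
                    // Blocks here until a (possibly dummy) connection arrives.
                    server.accept().close();
                }
            } catch (IOException e) {
                // Socket closed underneath us; treat that as shutdown too.
            }
        });
        acceptor.start();
        closed.set(true); // Set the flag first...
        // ...then poke the blocked acceptor with a dummy connect/close pair.
        new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort()).close();
        acceptor.join(5000);
        boolean terminated = !acceptor.isAlive();
        server.close();
        return terminated;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(shutdownViaDummyConnect() ? "terminated" : "still blocked");
    }
}
```

The point of setting the flag before connecting is that the woken thread always observes the flag on its next loop check, so no thread re-enters accept after shutdown has begun.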

> DomainSocket.close() assumes incorrect Linux behaviour
> --
>
> Key: HADOOP-12487
> URL: https://issues.apache.org/jira/browse/HADOOP-12487
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.7.1
> Environment: Linux Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: shutdown.c
>
>
> I'm getting a test failure in TestDomainSocket.java, in the 
> testSocketAcceptAndClose test. That test creates a socket which one thread 
> waits on in DomainSocket.accept() whilst a second thread sleeps for a short 
> time before closing the same socket with DomainSocket.close().
> DomainSocket.close() first calls shutdown0() on the socket before calling 
> close0() - both are thin wrappers around the corresponding libc socket 
> calls. DomainSocket.close() contains the following comment, explaining the 
> logic involved:
> {code}
>   // Calling shutdown on the socket will interrupt blocking system
>   // calls like accept, write, and read that are going on in a
>   // different thread.
> {code}
> Unfortunately that relies on non-standards-compliant Linux behaviour. I've 
> written a simple C test case that replicates the scenario above:
> # ThreadA opens, binds, listens and accepts on a socket, waiting for 
> connections.
> # Some time later ThreadB calls shutdown on the socket ThreadA is waiting in 
> accept on.
> Here is what happens:
> On Linux, the shutdown call in ThreadB succeeds and the accept call in 
> ThreadA returns with EINVAL.
> On Solaris, the shutdown call in ThreadB fails and returns ENOTCONN. ThreadA 
> continues to wait in accept.
> Relevant POSIX manpages:
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/accept.html
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/shutdown.html
> The POSIX shutdown manpage says:
> "The shutdown() function shall cause all or part of a full-duplex connection 
> on the socket associated with the file descriptor socket to be shut down."
> ...
> "\[ENOTCONN] The socket is not connected."
> Page 229 & 303 of "UNIX System V Network Programming" say:
> "shutdown can only be called on sockets that have been previously connected"
> "The socket \[passed to accept that] fd refers to does not participate in the 
> connection. It remains available to receive further connect indications"
> That is pretty clear: sockets being waited on with accept are not connected 
> by definition. Nor is the accepting socket connected when a client connects 
> to it; it is the socket returned by accept that is connected to the client. 
> Therefore the Solaris behaviour of failing the shutdown call is correct.
> In order to get the required behaviour of ThreadB causing ThreadA to exit the 
> accept call with an error, the correct way is for ThreadB to call close on 
> the socket that ThreadA is waiting on in accept.
> On Solaris, calling close in ThreadB succeeds, and the accept call in ThreadA 
> fails and returns EBADF.
> On Linux, calling close in ThreadB succeeds but ThreadA continues to wait in 
> accept until there is an incoming connection. That accept returns 
> successfully. However subsequent accept calls on the same socket return EBADF.
> The Linux behaviour is fundamentally broken in three places:
> # Allowing shutdown to succeed on an unconnected socket is incorrect.  
> # Returning a successful accept on a closed file descriptor is incorrect, 
> especially as future accept calls on the same socket fail.
> # Once shutdown has been called on the socket, calling close on the socket 
> fails with EBADF. That is incorrect, shutdown should just prevent further IO 
> on the socket, it should not close it.
> The real issue though is that there's no single way of doing this that works 
> on both 

[jira] [Commented] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987172#comment-14987172
 ] 

Kai Zheng commented on HADOOP-12544:


I think the checkstyle issue should be fixed. The failed unit tests can be 
looked at elsewhere, since they're not related.

> Erasure Coding: create dummy raw coder
> --
>
> Key: HADOOP-12544
> URL: https://issues.apache.org/jira/browse/HADOOP-12544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12544.1.patch
>
>
> Create a dummy raw coder which does no computation and simply returns zero 
> bytes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11996) Native erasure coder basic facilities with an illustration sample

2015-11-03 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11996:
---
Attachment: HADOOP-11996-v3.patch

Updated the patch based on the latest HADOOP-11887 patch.

> Native erasure coder basic facilities with an illustration sample
> -
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch
>
>
> While working on HADOOP-11540 and related tasks, it was found useful to write 
> the basic facilities based on the Intel ISA-L library separately from the JNI 
> code, so they can be used to compose a useful sample coder. Such a sample 
> coder can serve as a good illustration of how to use the ISA-L library; 
> meanwhile it's easy to debug and troubleshoot, as no JNI or Java code is 
> involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12542) TestDNS fails on Windows after HADOOP-12437.

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987029#comment-14987029
 ] 

Hudson commented on HADOOP-12542:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #566 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/566/])
HADOOP-12542. TestDNS fails on Windows after HADOOP-12437. Contributed 
(cnauroth: rev 957f0311a160afb40dbb0619f455445b4f5d1e32)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestDNS.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestDNS fails on Windows after HADOOP-12437.
> 
>
> Key: HADOOP-12542
> URL: https://issues.apache.org/jira/browse/HADOOP-12542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HADOOP-12542.001.patch
>
>
> HADOOP-12437 added several new tests covering functionality of resolving host 
> names based on an alternate network interface.  These tests are failing on 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12541) make re2j dependency consistent

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987028#comment-14987028
 ] 

Hudson commented on HADOOP-12541:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #566 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/566/])
HADOOP-12541. make re2j dependency consistent (Matthew Paduano via aw) (aw: rev 
6e0d35323505cc68dbd963b8628b89ee04af2f2b)
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> make re2j dependency consistent
> ---
>
> Key: HADOOP-12541
> URL: https://issues.apache.org/jira/browse/HADOOP-12541
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12541.01.patch
>
>
> Make the re2j dependency consistent with other parts of Hadoop.  Seeing some 
> weird/rare failures with older versions of maven that appear to be related to 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12437) Allow SecurityUtil to lookup alternate hostnames

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987026#comment-14987026
 ] 

Hudson commented on HADOOP-12437:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #566 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/566/])
HADOOP-12542. TestDNS fails on Windows after HADOOP-12437. Contributed 
(cnauroth: rev 957f0311a160afb40dbb0619f455445b4f5d1e32)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestDNS.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Allow SecurityUtil to lookup alternate hostnames 
> -
>
> Key: HADOOP-12437
> URL: https://issues.apache.org/jira/browse/HADOOP-12437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net, security
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HADOOP-12437.04.patch, HADOOP-12437.05.patch, 
> HDFS-9109.01.patch, HDFS-9109.02.patch, HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12541) make re2j dependency consistent

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987038#comment-14987038
 ] 

Hudson commented on HADOOP-12541:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2562 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2562/])
HADOOP-12541. make re2j dependency consistent (Matthew Paduano via aw) (aw: rev 
6e0d35323505cc68dbd963b8628b89ee04af2f2b)
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/pom.xml


> make re2j dependency consistent
> ---
>
> Key: HADOOP-12541
> URL: https://issues.apache.org/jira/browse/HADOOP-12541
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12541.01.patch
>
>
> Make the re2j dependency consistent with other parts of Hadoop.  Seeing some 
> weird/rare failures with older versions of maven that appear to be related to 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12437) Allow SecurityUtil to lookup alternate hostnames

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987037#comment-14987037
 ] 

Hudson commented on HADOOP-12437:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2562 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2562/])
HADOOP-12542. TestDNS fails on Windows after HADOOP-12437. Contributed 
(cnauroth: rev 957f0311a160afb40dbb0619f455445b4f5d1e32)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestDNS.java


> Allow SecurityUtil to lookup alternate hostnames 
> -
>
> Key: HADOOP-12437
> URL: https://issues.apache.org/jira/browse/HADOOP-12437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net, security
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HADOOP-12437.04.patch, HADOOP-12437.05.patch, 
> HDFS-9109.01.patch, HDFS-9109.02.patch, HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12542) TestDNS fails on Windows after HADOOP-12437.

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987039#comment-14987039
 ] 

Hudson commented on HADOOP-12542:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2562 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2562/])
HADOOP-12542. TestDNS fails on Windows after HADOOP-12437. Contributed 
(cnauroth: rev 957f0311a160afb40dbb0619f455445b4f5d1e32)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestDNS.java


> TestDNS fails on Windows after HADOOP-12437.
> 
>
> Key: HADOOP-12542
> URL: https://issues.apache.org/jira/browse/HADOOP-12542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HADOOP-12542.001.patch
>
>
> HADOOP-12437 added several new tests covering functionality of resolving host 
> names based on an alternate network interface.  These tests are failing on 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-12544:

Attachment: HADOOP-12544.2.patch

Updated the patch to fix checkstyle.

> Erasure Coding: create dummy raw coder
> --
>
> Key: HADOOP-12544
> URL: https://issues.apache.org/jira/browse/HADOOP-12544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12544.1.patch, HADOOP-12544.2.patch
>
>
> Create a dummy raw coder which does no computation and simply returns zero 
> bytes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987249#comment-14987249
 ] 

Hadoop QA commented on HADOOP-12544:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 37s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 44s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 39s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 5s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.metrics2.impl.TestMetricsSystemImpl 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770313/HADOOP-12544.2.patch |
| JIRA Issue | HADOOP-12544 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux a044bcf12f76 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
 |
| git revision | trunk / 957f031 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |

[jira] [Commented] (HADOOP-12542) TestDNS fails on Windows after HADOOP-12437.

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987034#comment-14987034
 ] 

Hudson commented on HADOOP-12542:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #632 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/632/])
HADOOP-12542. TestDNS fails on Windows after HADOOP-12437. Contributed 
(cnauroth: rev 957f0311a160afb40dbb0619f455445b4f5d1e32)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestDNS.java


> TestDNS fails on Windows after HADOOP-12437.
> 
>
> Key: HADOOP-12542
> URL: https://issues.apache.org/jira/browse/HADOOP-12542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HADOOP-12542.001.patch
>
>
> HADOOP-12437 added several new tests covering functionality of resolving host 
> names based on an alternate network interface.  These tests are failing on 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12437) Allow SecurityUtil to lookup alternate hostnames

2015-11-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987033#comment-14987033
 ] 

Hudson commented on HADOOP-12437:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #632 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/632/])
HADOOP-12542. TestDNS fails on Windows after HADOOP-12437. Contributed 
(cnauroth: rev 957f0311a160afb40dbb0619f455445b4f5d1e32)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestDNS.java


> Allow SecurityUtil to lookup alternate hostnames 
> -
>
> Key: HADOOP-12437
> URL: https://issues.apache.org/jira/browse/HADOOP-12437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net, security
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HADOOP-12437.04.patch, HADOOP-12437.05.patch, 
> HDFS-9109.01.patch, HDFS-9109.02.patch, HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.
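To see why {{/etc/hosts}} entries are invisible to {{DNS#reverseDns}}: a reverse lookup over DNS builds a PTR query name and sends it to the configured DNS servers, bypassing the hosts file entirely. A hypothetical sketch of that name construction (illustrative only, not Hadoop's actual code):

```java
public class ReverseDnsSketch {
    public static void main(String[] args) {
        // The IP's octets are reversed into an in-addr.arpa PTR name, which
        // is then sent to the configured DNS servers. A query like this never
        // consults /etc/hosts, which is the failure mode described above.
        String ip = "10.0.0.5";
        String[] o = ip.split("\\.");
        String ptrName = o[3] + "." + o[2] + "." + o[1] + "." + o[0]
                + ".in-addr.arpa";
        System.out.println(ptrName); // 5.0.0.10.in-addr.arpa
    }
}
```

By contrast, {{InetAddress}}-based forward lookups go through the system resolver, which does read the hosts file; that asymmetry is what the fix has to bridge.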





[jira] [Commented] (HADOOP-12545) Hadoop Javadoc has broken link for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider and DistCp

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987461#comment-14987461
 ] 

Hadoop QA commented on HADOOP-12545:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 33s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 48s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.crypto.key.TestValueQueue |
| JDK v1.7.0_79 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770331/HADOOP-12545-01.patch 
|
| JIRA Issue | HADOOP-12545 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 53648da7a59b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12540:
---
Attachment: HADOOP-12540.01.patch

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.





[jira] [Commented] (HADOOP-12546) Improve TestKMS

2015-11-03 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987880#comment-14987880
 ] 

Daniel Templeton commented on HADOOP-12546:
---

This JIRA will make HADOOP-12509 easier to track down next time it shows up.

> Improve TestKMS
> ---
>
> Key: HADOOP-12546
> URL: https://issues.apache.org/jira/browse/HADOOP-12546
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12546.001.patch
>
>
> The TestKMS class has some issues:
> * It swallows some exceptions' stack traces
> * It swallows some exceptions altogether
> * Some of the tests aren't as tight as they could be
> * Asserts lack messages
> * Code style is a bit hodgepodge
> This JIRA is to clean all that up.





[jira] [Updated] (HADOOP-12546) Improve TestKMS

2015-11-03 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-12546:
--
Attachment: HADOOP-12546.001.patch

> Improve TestKMS
> ---
>
> Key: HADOOP-12546
> URL: https://issues.apache.org/jira/browse/HADOOP-12546
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12546.001.patch
>
>
> The TestKMS class has some issues:
> * It swallows some exceptions' stack traces
> * It swallows some exceptions altogether
> * Some of the tests aren't as tight as they could be
> * Asserts lack messages
> * Code style is a bit hodgepodge
> This JIRA is to clean all that up.





[jira] [Updated] (HADOOP-12546) Improve TestKMS

2015-11-03 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-12546:
--
Status: Patch Available  (was: Open)

> Improve TestKMS
> ---
>
> Key: HADOOP-12546
> URL: https://issues.apache.org/jira/browse/HADOOP-12546
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12546.001.patch
>
>
> The TestKMS class has some issues:
> * It swallows some exceptions' stack traces
> * It swallows some exceptions altogether
> * Some of the tests aren't as tight as they could be
> * Asserts lack messages
> * Code style is a bit hodgepodge
> This JIRA is to clean all that up.





[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12540:
---
Status: Patch Available  (was: Open)

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.





[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-11-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987803#comment-14987803
 ] 

Chris Nauroth commented on HADOOP-12053:


[~brahmareddy], thank you for the additional details.

If the conclusion is that HADOOP-12304 already fixed the root cause, then I 
think we'd defer this out of 2.7.2.  I don't think I'd immediately resolve it 
as a duplicate, because there still might be value in committing the patch as a 
general code improvement.  (One of Gera's earlier comments describes how the 
current code is confusing for subclass implementors.)

If however there is a test that can show a problem still exists, then perhaps 
there is another fix we can do for 2.7.2.

> Harfs defaulturiport should be Zero ( should not -1)
> 
>
> Key: HADOOP-12053
> URL: https://issues.apache.org/jira/browse/HADOOP-12053
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Gera Shegalov
>Priority: Critical
> Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, 
> HADOOP-12053.003.patch
>
>
> The harfs overrides the "getUriDefaultPort" method of AbstractFileSystem and 
> returns "-1". But "-1" can't pass the "checkPath" method when 
> {{fs.defaultfs}} is set without a port (like hdfs://hacluster).
> *Test Code:*
> {code}
> for (FileStatus file : files) {
>   String[] edges = file.getPath().getName().split("-");
>   if (applicationId.toString().compareTo(edges[0]) >= 0
>       && applicationId.toString().compareTo(edges[1]) <= 0) {
>     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
>     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
>     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
>         harPath, applicationId, appOwner,
>         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
>     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
>         .exists(remoteAppDir)) {
>       remoteDirSet.add(remoteAppDir);
>     }
>   }
> }
> {code}
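The port mismatch behind this report is visible with plain {{java.net.URI}} parsing. A minimal sketch of the symptom (not code from the patch; the host names are illustrative):

```java
import java.net.URI;

public class HarPortSketch {
    public static void main(String[] args) {
        // When fs.defaultFS carries no port (hdfs://hacluster), URI reports
        // -1 for getPort(), so a filesystem whose getUriDefaultPort() also
        // returns -1 cannot distinguish "no port given" from a real default.
        URI noPort = URI.create("hdfs://hacluster");
        URI withPort = URI.create("hdfs://hacluster:8020");
        System.out.println(noPort.getPort());   // -1
        System.out.println(withPort.getPort()); // 8020
    }
}
```

Returning 0 instead of -1 as the default port, as the summary suggests, sidesteps that ambiguity during path checking.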





[jira] [Created] (HADOOP-12546) Improve TestKMS

2015-11-03 Thread Daniel Templeton (JIRA)
Daniel Templeton created HADOOP-12546:
-

 Summary: Improve TestKMS
 Key: HADOOP-12546
 URL: https://issues.apache.org/jira/browse/HADOOP-12546
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 2.7.1
Reporter: Daniel Templeton
Assignee: Daniel Templeton


The TestKMS class has some issues:

* It swallows some exceptions' stack traces
* It swallows some exceptions altogether
* Some of the tests aren't as tight as they could be
* Asserts lack messages
* Code style is a bit hodgepodge

This JIRA is to clean all that up.





[jira] [Created] (HADOOP-12547) Remove hadoop-pipes

2015-11-03 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12547:
-

 Summary: Remove hadoop-pipes
 Key: HADOOP-12547
 URL: https://issues.apache.org/jira/browse/HADOOP-12547
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Development appears to have stopped on hadoop-pipes upstream for the last few 
years, aside from very basic maintenance.  Hadoop streaming seems to be a 
better alternative, since it supports more programming languages and is better 
implemented.

There were no responses to a message on the mailing list asking for users of 
Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
should remove it to reduce our maintenance burden and build times.





[jira] [Commented] (HADOOP-12534) User document for SFTP File System

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987692#comment-14987692
 ] 

Hadoop QA commented on HADOOP-12534:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 22s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s 
{color} | {color:red} branch/hadoop-project no findbugs output file 
(hadoop-project/target/findbugsXml.xml) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 10s 
{color} | {color:red} patch/hadoop-project no findbugs output file 
(hadoop-project/target/findbugsXml.xml) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 33s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 39s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987883#comment-14987883
 ] 

Gaurav Kanade commented on HADOOP-12540:


The patch provides a new way to induce a client error, ensuring the test 
functions as expected. Instead of relying on the delete mechanism, we now 
simulate the error by creating a file, acquiring a short lease on it, and 
then trying to write to it without holding the lease.


> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.





[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988170#comment-14988170
 ] 

Allen Wittenauer commented on HADOOP-12548:
---

Makes sense. 

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.





[jira] [Updated] (HADOOP-12525) Support Identity API v3 authentication for OpenStack Swift

2015-11-03 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-12525:

Attachment: HADOOP-12525.002.patch

Fix testEmptyUsername UT failure.

> Support Identity API v3 authentication for OpenStack Swift
> --
>
> Key: HADOOP-12525
> URL: https://issues.apache.org/jira/browse/HADOOP-12525
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift
>Reporter: ramtin
>Assignee: ramtin
>Priority: Critical
> Attachments: HADOOP-12525.001.patch, HADOOP-12525.002.patch
>
>
> Support all request types of [Identity (Keystone) API 
> v3|http://developer.openstack.org/api-ref-identity-v3.html#authenticate] 
> authentication for vendor-neutral OpenStack





[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12540:
---
Attachment: HADOOP-12540.01.patch

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.





[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12540:
---
Attachment: (was: HADOOP-12540.01.patch)

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.





[jira] [Commented] (HADOOP-12547) Remove hadoop-pipes

2015-11-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988229#comment-14988229
 ] 

Colin Patrick McCabe commented on HADOOP-12547:
---

Thank you for the perspective, [~aw].  It's true that you have been around for 
longer than me.  However, it's also true that in about 4 years of supporting 
customer Hadoop deployments I have never, once, seen anyone use or ask about 
Hadoop Pipes.  We've gotten requests for some pretty obscure things-- like 
adding a feature or fixing a bug in fuse_dfs, supporting the old obsolete MR1 
framework, or even preparing native code patches for decades-old versions of 
AIX, or running Hadoop on JVMs that I'm convinced most people have never 
heard of.  But __never__ for pipes.

That stack overflow post looks like a newbie stumbling into Hadoop for the 
first time and trying to follow a tutorial from more than 5 years ago... and 
failing, because this stuff hasn't been maintained-- and won't be maintained in 
the future.  That's hardly a ringing endorsement of keeping this around.  
Anyway, nobody is proposing removing this from 2.6 or any branch-2 release... 
only from trunk.

bq. Pipes was written primarily for Yahoo!'s search team. It was provided as a 
way for C code to interface with MapReduce without requiring significant 
rewrites. It was definitely in use before I left Yahoo! but I haven't kept 
track of whether it is still being used. My guess is no, given most of that 
team has left/was shipped over to Microsoft.

[~daryn], [~kihwal], do you have any perspective on this?  Is there any reason 
to keep this around in trunk / branch-3.0?  If we are going to keep this, I 
would like to see some unit tests, documentation, and actual maintenance.

> Remove hadoop-pipes
> ---
>
> Key: HADOOP-12547
> URL: https://issues.apache.org/jira/browse/HADOOP-12547
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Development appears to have stopped on hadoop-pipes upstream for the last few 
> years, aside from very basic maintenance.  Hadoop streaming seems to be a 
> better alternative, since it supports more programming languages and is 
> better implemented.
> There were no responses to a message on the mailing list asking for users of 
> Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
> should remove it to reduce our maintenance burden and build times.





[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988263#comment-14988263
 ] 

Larry McCay commented on HADOOP-12548:
--

Understood.

The problem comes down to putting it in a file in clear text.
Even when it is protected with file permissions, it is often flagged as clear 
text and therefore an issue.
A keystore isn't clear text; real security still requires file permissions, 
but a keystore does usually pass the test.

A credential server that authenticated users with Kerberos would be secure, 
though.
The CredentialProvider API is a path to get there.

I can lend a hand there if you'd like to go in that direction.
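For comparison, here is a minimal sketch of the clear-text-file approach under discussion (file handling and names are hypothetical, not from any patch; as noted above, a keystore-backed CredentialProvider is the preferred direction):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: read an S3 secret from a file, falling back to a
// value supplied via a Java property. Even with restrictive permissions,
// a clear-text file like this is usually flagged in security reviews,
// which is the concern raised above.
public class S3Creds {
    public static String secretKey(Path credFile, String propertyValue)
            throws IOException {
        if (credFile != null && Files.isReadable(credFile)) {
            return Files.readString(credFile).trim();
        }
        return propertyValue; // existing behavior: value from a -D property
    }
}
```

The keystore alternative is the existing {{hadoop credential create}} CLI together with {{Configuration.getPassword}}, which resolves an alias from a credential provider before falling back to the clear-text config value.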

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.





[jira] [Commented] (HADOOP-12525) Support Identity API v3 authentication for OpenStack Swift

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988344#comment-14988344
 ] 

Hadoop QA commented on HADOOP-12525:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 8s 
{color} | {color:red} Patch generated 22 new checkstyle issues in 
hadoop-tools/hadoop-openstack (total was 250, now 264). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 41s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770411/HADOOP-12525.002.patch
 |
| JIRA Issue | HADOOP-12525 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  site  mvnsite  |
| uname | Linux 97036f01417a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 

[jira] [Updated] (HADOOP-10351) Unit test TestSwiftFileSystemLsOperations#testListEmptyRoot and testListNonEmptyRoot failure.

2015-11-03 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-10351:

Status: Patch Available  (was: Open)

> Unit test TestSwiftFileSystemLsOperations#testListEmptyRoot and 
> testListNonEmptyRoot failure.
> -
>
> Key: HADOOP-10351
> URL: https://issues.apache.org/jira/browse/HADOOP-10351
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/swift, test
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
> Attachments: HADOOP-10351.patch
>
>
> TestSwiftFileSystemLsOperations#testListEmptyRoot and testListNonEmptyRoot 
> fail because the unit test TestFSMainOperationsSwift creates the testing 
> directory test.build.dir through its parent class. But during the parent 
> class's tearDown, only the test.build.dir/test directory is deleted, leaving 
> the test.build.dir in the container. However, 
> TestSwiftFileSystemLsOperations#testListEmptyRoot and testListNonEmptyRoot do 
> not expect the directory to exist in the container, thus causing the failure.
>   
> TestSwiftFileSystemLsOperations.testListEmptyRoot:126->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
>  Non-empty root/[00] SwiftFileStatus{ path=swift://container1.service/home; 
> isDirectory=true; length=0; blocksize=33554432; 
> modification_time=1392850893440}
>  expected:<0> but was:<1>
>   
> TestSwiftFileSystemLsOperations.testListNonEmptyRoot:137->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
>  Wrong #of root children/[00] SwiftFileStatus{ 
> path=swift://container1.service/home; isDirectory=true; length=0; 
> blocksize=33554432; modification_time=1392850893440}
> [01] SwiftFileStatus{ path=swift://patchtest.softlayer/test; 
> isDirectory=true; length=0; blocksize=33554432; 
> modification_time=1392851462990}
>  expected:<1> but was:<2>





[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12540:
---
Status: Open  (was: Patch Available)

Hi [~gouravk].  Yes, I think this will work fine overall.  A few nitpicks:

# The attached patch file appears to be UTF-16.  Please make sure to save as 
ASCII in the next revision.  I tested the current patch revision by running 
{{iconv -f UTF-16 -t ASCII HADOOP-12540.01.patch}}.
# Please remove the {{System.out}} call, which I assume is left over from 
debugging.
# There is a small chance of a resource leak if {{fs.create}} succeeds, but 
then {{testAccount.acquireShortLease}} throws an exception.  I recommend moving 
both the {{fs.create}} and the {{testAccount.acquireShortLease}} within the 
first {{try}} block.

Thank you!
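The resource-leak pattern in nitpick 3 can be illustrated with a self-contained sketch (names are illustrative, not the actual test code): by acquiring both resources inside the try block, the finally clause releases whatever was actually acquired, even if the second acquisition throws.

```java
// Sketch of the review comment above: acquire both resources inside the
// try block so the finally clause can release whatever was acquired, even
// when the second acquisition (the lease, in the real test) throws.
// create() and acquireLease() stand in for fs.create(...) and
// testAccount.acquireShortLease(...).
public class LeaseExample {
    static int openStreams = 0;

    static AutoCloseable create() {
        openStreams++;
        return () -> openStreams--;
    }

    static void acquireLease(boolean fail) {
        if (fail) {
            throw new RuntimeException("simulated lease failure");
        }
    }

    public static void run(boolean leaseFails) {
        AutoCloseable stream = null;
        try {
            stream = create();           // moved inside the try block
            acquireLease(leaseFails);    // may throw; stream still gets closed
        } catch (RuntimeException expected) {
            // the real test asserts on the error metrics here
        } finally {
            if (stream != null) {
                try {
                    stream.close();
                } catch (Exception ignored) {
                }
            }
        }
    }
}
```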

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.





[jira] [Updated] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2015-11-03 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-12537:
---
Attachment: HADOOP-12537.diff

Attaching a patch with tests that I've run against all S3 and STS endpoints, 
including combinations in different regions. I also tested this from the CLI 
and verified other authentication methods still work. 
TestAutomaticProxyPortSelection fails, but it was failing prior to my patch and 
all other tests in hadoop-aws pass. I moved the test into its own class, am 
doing clean up with @After, and skip the test if test.sts.enabled isn't set. 

The STS jar is not required during the build - only when running tests. The 
feature assumes the user is going to get the temporary credentials somewhere 
else. The test has to get them itself.
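The constraint behind the feature (a temporary key pair is only usable together with its session token) can be sketched as a small validation helper; class and method names here are illustrative, not from the patch:

```java
// Illustrative sketch: STS issues an access key / secret key / session
// token triple, and a temporary key pair is unusable without its token.
// A config validator can therefore require all three or none.
public class StsCreds {
    public static boolean isValidTriple(String accessKey, String secretKey,
                                        String sessionToken) {
        boolean anySet = accessKey != null || secretKey != null
                || sessionToken != null;
        boolean allSet = accessKey != null && secretKey != null
                && sessionToken != null;
        return !anySet || allSet; // either no session credentials, or all three
    }
}
```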

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537.diff, HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> a user / role. However, using these credentials also requires specifying 
> a session ID. There is currently no such configuration property or the 
> required code to pass it through to the API (at least not that I can find) in 
> any of the S3 connectors.





[jira] [Updated] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2015-11-03 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-12537:
---
Status: Patch Available  (was: Open)

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537.diff, HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> a user / role. However, using these credentials also requires specifying 
> a session ID. There is currently no such configuration property or the 
> required code to pass it through to the API (at least not that I can find) in 
> any of the S3 connectors.





[jira] [Updated] (HADOOP-9657) NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 ports

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-9657:

Target Version/s: 2.8.0  (was: 2.7.2)

Moving this improvement out of 2.7.2 and from future maintenance lines.

> NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 
> ports
> --
>
> Key: HADOOP-9657
> URL: https://issues.apache.org/jira/browse/HADOOP-9657
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: HADOOP-9657.01.patch, HADOOP-9657.02.patch
>
>
> when an exception is wrapped, it may look like {{0.0.0.0:0 failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused}}
> We should recognise all zero IP addresses and 0 ports and flag them as "your 
> configuration of the endpoint is wrong", as that is clearly the case.
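The proposed check can be sketched as follows (class and method names are illustrative; the real change would live in NetUtils.wrapException):

```java
// Illustrative sketch of the proposal: treat an all-zero address or a
// zero port as a sign that the endpoint was never configured, and say so
// in the wrapped exception message.
public class EndpointCheck {
    public static boolean isUnconfigured(String host, int port) {
        boolean zeroAddress = "0.0.0.0".equals(host) || "::".equals(host);
        return zeroAddress || port == 0;
    }

    public static String wrapMessage(String host, int port, String cause) {
        String base = host + ":" + port + " failed on " + cause;
        if (isUnconfigured(host, port)) {
            return base + "; this usually means the endpoint was never "
                    + "configured (all-zero address or port 0)";
        }
        return base;
    }
}
```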





[jira] [Updated] (HADOOP-8602) Passive mode support for FTPFileSystem

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-8602:

Target Version/s: 2.8.0  (was: 2.0.0-alpha, 2.7.2)

Moving this improvement out of 2.7.2 and from future maintenance lines.

[~steve_l], bump on behalf of the contributor if you are still looking at this.

> Passive mode support for FTPFileSystem
> --
>
> Key: HADOOP-8602
> URL: https://issues.apache.org/jira/browse/HADOOP-8602
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Nemon Lou
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-8602.004.patch, HADOOP-8602.005.patch, 
> HADOOP-8602.006.patch, HADOOP-8602.007.patch, HADOOP-8602.008.patch, 
> HADOOP-8602.009.patch, HADOOP-8602.patch, HADOOP-8602.patch, HADOOP-8602.patch
>
>
>  FTPFileSystem uses active mode as the default data connection mode. We 
> should be able to choose passive mode when active mode doesn't work (behind 
> a firewall, for example).
>  My thought is to add an option "fs.ftp.data.connection.mode" in 
> core-site.xml. Since FTPClient (in the org.apache.commons.net.ftp package) 
> already supports passive mode, we just need to add a little code in the 
> FTPFileSystem.connect() method.
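Since commons-net already exposes the mode switch, the proposed option reduces to a small branch at connect time. A standalone sketch of the decision logic (property name from the description above; the rest is illustrative):

```java
import java.util.Properties;

// Standalone sketch of the proposed "fs.ftp.data.connection.mode" option.
// With a real org.apache.commons.net.ftp.FTPClient, the "passive" branch
// would call client.enterLocalPassiveMode() and the "active" branch
// client.enterLocalActiveMode(); here only the decision logic is shown.
public class FtpMode {
    public static String resolveMode(Properties conf) {
        // Active stays the default, matching current FTPFileSystem behavior.
        String mode = conf.getProperty("fs.ftp.data.connection.mode", "active");
        switch (mode.toLowerCase()) {
            case "passive":
                return "passive";
            case "active":
                return "active";
            default:
                throw new IllegalArgumentException(
                        "Unknown fs.ftp.data.connection.mode: " + mode);
        }
    }
}
```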





[jira] [Commented] (HADOOP-10351) Unit test TestSwiftFileSystemLsOperations#testListEmptyRoot and testListNonEmptyRoot failure.

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988365#comment-14988365
 ] 

Hadoop QA commented on HADOOP-10351:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 37s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12648841/HADOOP-10351.patch |
| JIRA Issue | HADOOP-10351 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 3d5f8e84581f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
 |
| git revision | trunk / dac0463 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| JDK v1.7.0_79  Test Results | 

[jira] [Updated] (HADOOP-12545) Hadoop Javadoc has broken link for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider and DistCp

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12545:
-
Fix Version/s: (was: 2.7.2)

[~arshad.mohammad],  FYI, you should use target-version for your intention. 
Fix-version is set at commit time. I'm fixing this for now.

> Hadoop Javadoc has broken link for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider and DistCp
> 
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Attachments: HADOOP-12545-01.patch
>
>
> 1) open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList", The page shows "This page can’t be displayed"
> Same error for DistCp, ImpersonationProvider and DefaultImpersonationProvider 
> also.
> Javadoc generated from Trunk has the same problem





[jira] [Commented] (HADOOP-12547) Remove hadoop-pipes

2015-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988032#comment-14988032
 ] 

Allen Wittenauer commented on HADOOP-12547:
---

As someone who has been around a lot longer than Colin, let me fill in some 
blanks.

Pipes was written primarily for Yahoo!'s search team.  It was provided as a way 
for C code to interface with MapReduce without requiring significant rewrites.  
It was definitely in use before I left Yahoo! but I haven't kept track of 
whether it is still being used.  My guess is no, given most of that team has 
left/was shipped over to Microsoft.

Even so, there are definitely references out on the Internet in the last year 
to people using Pipes if one actually bothers to look for them. e.g., 
http://stackoverflow.com/questions/28573127/hadoop-pipes-wordcount-example-nullpointerexception-in-localjobrunner
 , which features a comment made about Hadoop 2.6 about 5 days ago.

> Remove hadoop-pipes
> ---
>
> Key: HADOOP-12547
> URL: https://issues.apache.org/jira/browse/HADOOP-12547
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Development appears to have stopped on hadoop-pipes upstream for the last few 
> years, aside from very basic maintenance.  Hadoop streaming seems to be a 
> better alternative, since it supports more programming languages and is 
> better implemented.
> There were no responses to a message on the mailing list asking for users of 
> Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
> should remove it to reduce our maintenance burden and build times.





[jira] [Commented] (HADOOP-12547) Remove hadoop-pipes

2015-11-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14987959#comment-14987959
 ] 

Andrew Wang commented on HADOOP-12547:
--

I don't think we can remove in branch-2, but let's do this for trunk.

> Remove hadoop-pipes
> ---
>
> Key: HADOOP-12547
> URL: https://issues.apache.org/jira/browse/HADOOP-12547
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Development appears to have stopped on hadoop-pipes upstream for the last few 
> years, aside from very basic maintenance.  Hadoop streaming seems to be a 
> better alternative, since it supports more programming languages and is 
> better implemented.
> There were no responses to a message on the mailing list asking for users of 
> Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
> should remove it to reduce our maintenance burden and build times.





[jira] [Commented] (HADOOP-12546) Improve TestKMS

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988022#comment-14988022
 ] 

Hadoop QA commented on HADOOP-12546:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-03 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770383/HADOOP-12546.001.patch
 |
| JIRA Issue | HADOOP-12546 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 522d670cc4bf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-1a9afee/precommit/personality/hadoop.sh
 |
| git revision | trunk / 0783184 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| JDK v1.7.0_79  Test Results | 

[jira] [Updated] (HADOOP-11822) mark org.apache.hadoop.security.ssl.SSLFactory as @Public

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11822:
-
Target Version/s: 2.8.0  (was: 2.7.2)

Moving improvements out of 2.7 maintenance releases.

> mark org.apache.hadoop.security.ssl.SSLFactory as @Public
> -
>
> Key: HADOOP-11822
> URL: https://issues.apache.org/jira/browse/HADOOP-11822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> the {{org.apache.hadoop.security.ssl.SSLFactory}} is tagged as Private, yet 
> it is needed to talk SPNEGO to the RM once it is in secure mode, as well as 
> to other services in a YARN cluster.
> I propose changing it to @Public, @Evolving



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12548:
-

 Summary: read s3 creds from a file
 Key: HADOOP-12548
 URL: https://issues.apache.org/jira/browse/HADOOP-12548
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/s3
Reporter: Allen Wittenauer


It would be good if we could read s3 creds from a file rather than via a java 
property.
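As a rough illustration of the request, a credentials file could be a plain Java 
properties file loaded at startup instead of a system property. This is only a 
hypothetical sketch; the key names ({{access.key}}, {{secret.key}}) are 
illustrative and are not Hadoop's actual configuration keys:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Hypothetical sketch: read S3-style credentials from a properties file
// rather than from -D java properties visible in the command line.
public class CredsFileSketch {
    static Properties loadCreds(Path file) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // Write a throwaway creds file just for the demo.
        Path file = Files.createTempFile("s3creds", ".properties");
        Files.write(file, "access.key=AKIAEXAMPLE\nsecret.key=example\n".getBytes());
        Properties creds = loadCreds(file);
        System.out.println("access.key=" + creds.getProperty("access.key"));
        Files.delete(file);
    }
}
```

The file itself can then be protected with ordinary filesystem permissions, 
which is the advantage over both java properties and environment variables.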



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12547) Remove hadoop-pipes

2015-11-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988009#comment-14988009
 ] 

Chris Nauroth commented on HADOOP-12547:


HADOOP-12518 is a recent patch for the hadoop-pipes build, targeted to 3.0.0.  
That implies that someone might be interested in keeping it.  [~aw], would you 
please comment, since you filed HADOOP-12518?

I agree that if we proceed with removing it, it would have to be done in 
trunk/3.0.0 only on grounds of backward compatibility.

> Remove hadoop-pipes
> ---
>
> Key: HADOOP-12547
> URL: https://issues.apache.org/jira/browse/HADOOP-12547
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Development appears to have stopped on hadoop-pipes upstream for the last few 
> years, aside from very basic maintenance.  Hadoop streaming seems to be a 
> better alternative, since it supports more programming languages and is 
> better implemented.
> There were no responses to a message on the mailing list asking for users of 
> Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
> should remove it to reduce our maintenance burden and build times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988140#comment-14988140
 ] 

Steve Loughran commented on HADOOP-12548:
-

Not very generic. What about making it possible/easy for tools to read in an 
XML property file from a URL? file:// and hdfs:// would then be 
examples

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988445#comment-14988445
 ] 

Allen Wittenauer commented on HADOOP-12548:
---

Help is always appreciated, esp since I'm pretty swamped with non-build (ha ha) 
issues right now.

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12526) [Branch-2] there are duplicate dependency definitions in pom's

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12526:
-
Target Version/s: 2.7.3  (was: 2.8.0, 2.7.2, 2.6.3)

Moving out all non-critical / non-blocker issues that didn't make it out of 
2.7.2 into 2.7.3. Please revert back if you disagree.

> [Branch-2] there are duplicate dependency definitions in pom's
> --
>
> Key: HADOOP-12526
> URL: https://issues.apache.org/jira/browse/HADOOP-12526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0, 2.7.1, 2.6.2
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12526-branch-2.001.patch, 
> HADOOP-12526-branch-2.6.001.patch
>
>
> There are several places where dependencies are defined multiple times within 
> pom's, and are causing maven build warnings. They should be fixed. This is 
> specific to branch-2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11957) if an IOException error is thrown in DomainSocket.close we go into infinite loop.

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11957:
-
Target Version/s: 2.7.3  (was: 2.7.2)

Moving out all non-critical / non-blocker issues that didn't make it out of 
2.7.2 into 2.7.3. Please revert back if you disagree.

> if an IOException error is thrown in DomainSocket.close we go into infinite 
> loop.
> -
>
> Key: HADOOP-11957
> URL: https://issues.apache.org/jira/browse/HADOOP-11957
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.7.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HADOOP-11957.001.patch
>
>
> if an IOException error is thrown in DomainSocket.close we go into infinite 
> loop.
> Issue: if the shutdown0(fd) call throws an IOException, we exit the shutdown 
> attempt but continue to spin in the while loop forever, since we have no way 
> of decrementing the counter. Please scroll down to the comment marked *BUG 
> BUG* to see where the issue is.
> {code:title=DomainSocket.java}
>   @Override
>   public void close() throws IOException {
> // Set the closed bit on this DomainSocket
> int count = 0;
> try {
>   count = refCount.setClosed();
> } catch (ClosedChannelException e) {
>   // Someone else already closed the DomainSocket.
>   return;
> }
> // Wait for all references to go away
> boolean didShutdown = false;
> boolean interrupted = false;
> while (count > 0) {
>   if (!didShutdown) {
> try {
>   // Calling shutdown on the socket will interrupt blocking system
>   // calls like accept, write, and read that are going on in a
>   // different thread.
>   shutdown0(fd);
> } catch (IOException e) {
>   LOG.error("shutdown error: ", e);
> }
> didShutdown = true; 
> // *BUG BUG* <-- Here the code will never exit the loop
> // if the count is greater than 0; we need to break out
> // of the while loop in case of an IOException.
>   }
>   try {
> Thread.sleep(10);
>   } catch (InterruptedException e) {
> interrupted = true;
>   }
>   count = refCount.getReferenceCount();
> }
> // At this point, nobody has a reference to the file descriptor, 
> // and nobody will be able to get one in the future either.
> // We now call close(2) on the file descriptor.
> // After this point, the file descriptor number will be reused by 
> // something else.  Although this DomainSocket object continues to hold 
> // the old file descriptor number (it's a final field), we never use it 
> // again because this DomainSocket is closed.
> close0(fd);
> if (interrupted) {
>   Thread.currentThread().interrupt();
> }
>   }
> {code}
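A minimal, self-contained simulation of the hang and of one possible remedy. 
This is a sketch under the assumption that the fix is simply to break out of 
the wait loop when the shutdown attempt fails, since the reference count can 
then never drain; it is not the actual patch:

```java
// Hypothetical simulation of DomainSocket.close's wait loop. shutdown0 here
// stands in for the native call and always fails, mimicking the bug scenario.
public class CloseLoopSketch {
    // Simulates shutdown0(fd) failing with an IOException-like error.
    static void shutdown0() throws Exception {
        throw new Exception("simulated shutdown failure");
    }

    // Returns true if the loop terminated instead of spinning forever.
    static boolean close(int count) {
        boolean didShutdown = false;
        int iterations = 0;
        while (count > 0) {
            if (!didShutdown) {
                try {
                    shutdown0();
                } catch (Exception e) {
                    // Sketch of the fix: give up waiting if shutdown failed,
                    // because no blocked thread will ever release its ref.
                    break;
                }
                didShutdown = true;
            }
            // In the real code the count is re-read from refCount; here it
            // never changes, which is exactly the hang the JIRA describes.
            if (++iterations > 1000) {
                return false; // would have looped forever
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(close(1) ? "terminated" : "hung");
    }
}
```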



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-10829:
-
Target Version/s: 2.7.3  (was: 2.7.2)

Moving out all non-critical / non-blocker issues that didn't make it out of 
2.7.2 into 2.7.3. Please revert back if you disagree.

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.patch, HADOOP-10829.patch
>
>
> CredentialProviderFactory uses _ServiceLoader_ framework to load 
> _CredentialProviderFactory_
> {code}
>   private static final ServiceLoader serviceLoader 
> =
>   ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework does lazy initialization of services, which 
> makes it thread-unsafe. If accessed from multiple threads, it is better to 
> synchronize the access.
> Similar synchronization has been done while loading compression codec 
> providers via HADOOP-8406. 
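A sketch of the kind of synchronization being proposed: a shared, lazily 
initialized loader with a lock taken around every iteration. The service 
interface here is {{Runnable}} purely for illustration; no providers are 
registered, so iteration simply yields nothing, and only the locking pattern 
matters:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Illustrative sketch: guard iteration over a shared ServiceLoader, since
// its lazy provider lookup is not thread-safe.
public class LoaderSketch {
    // Shared loader, analogous to CredentialProviderFactory.serviceLoader.
    private static final ServiceLoader<Runnable> serviceLoader =
        ServiceLoader.load(Runnable.class);

    static List<Runnable> providers() {
        List<Runnable> found = new ArrayList<>();
        // All threads must take the same lock before iterating.
        synchronized (serviceLoader) {
            for (Runnable r : serviceLoader) {
                found.add(r);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        System.out.println("providers: " + providers().size());
    }
}
```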



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12125) Retrying UnknownHostException on a proxy does not actually retry hostname resolution

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12125:
-
Target Version/s: 2.7.3  (was: 2.7.2)

Moving out all non-critical / non-blocker issues that didn't make it out of 
2.7.2 into 2.7.3. Please revert back if you disagree.

> Retrying UnknownHostException on a proxy does not actually retry hostname 
> resolution
> 
>
> Key: HADOOP-12125
> URL: https://issues.apache.org/jira/browse/HADOOP-12125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Jason Lowe
>
> When RetryInvocationHandler attempts to retry an UnknownHostException the 
> hostname fails to be resolved again.  The InetSocketAddress in the 
> ConnectionId has cached the fact that the hostname is unresolvable, and when 
> the proxy tries to setup a new Connection object with that ConnectionId it 
> checks if the (cached) resolution result is unresolved and immediately throws.
> The end result is we sleep and retry for no benefit.  The hostname resolution 
> is never attempted again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988453#comment-14988453
 ] 

Larry McCay commented on HADOOP-12548:
--

:)
If you can point me to the code that is consuming the java property now, then I 
can look at what is involved.
I'll throw together a one-pager for adding the credential provider API and 
tying it into the relevant config, etc.
We may also want to provide an env variable over a java property - at least 
they won't show up in ps output.

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988474#comment-14988474
 ] 

Chris Nauroth commented on HADOOP-12540:


{code}
String leaseID = "";
{code}

If the attempt to create the file or acquire the lease fails with an exception, 
then {{leaseID}} will remain the empty string.  Then, in the {{finally}} block, 
it would call {{testAccount.releaseLease}} with an empty string.  Is that 
legal?  I was expecting to see {{leaseID}} set to {{null}} before the {{try}} 
block, and then the {{finally}} block guards the {{releaseLease}} with a check 
for {{null}} before making the call.

I'll be +1 (pending Jenkins) after that's addressed.  Thanks!

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988486#comment-14988486
 ] 

Allen Wittenauer commented on HADOOP-12548:
---

Awesome, thanks!  Looks like Chris beat me to the background bits. haha.

bq. We may also want to provide a env variable over java property - at least 
they won't show up in ps output.

They do, however, show up in /proc on Linux.

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988503#comment-14988503
 ] 

Zhe Zhang commented on HADOOP-12544:


Thanks Rui! The patch looks good overall. A tiny suggestion is that we could 
add a little more explanation in the Javadoc that these dummy classes are for 
testing purposes, to isolate performance issues. Otherwise +1.

> Erasure Coding: create dummy raw coder
> --
>
> Key: HADOOP-12544
> URL: https://issues.apache.org/jira/browse/HADOOP-12544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12544.1.patch, HADOOP-12544.2.patch
>
>
> Create a dummy raw coder which does no computation and simply returns zero 
> bytes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-11-03 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-10420:

Attachment: HADOOP-10420-014.patch

+1 (non-binding)
Just provided a new patch to document the ".use.get.auth" property.

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420-010.patch, HADOOP-10420-011.patch, HADOOP-10420-012.patch, 
> HADOOP-10420-013.patch, HADOOP-10420-014.patch, HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-11-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988535#comment-14988535
 ] 

Colin Patrick McCabe commented on HADOOP-11887:
---

Thanks for this, [~drankye].  It looks good to me.  +1.  I'll commit tomorrow 
if there are no more comments.  It's probably easier to do any refinements in a 
follow-on JIRA.


> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988544#comment-14988544
 ] 

Allen Wittenauer commented on HADOOP-12548:
---

This also sounds like a perfect candidate for a "real world" example of using 
credentials.  Even though I know the credential commands exist, I have no clue 
how to actually use them, never mind how to connect to S3.

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2015-11-03 Thread Harsh J (JIRA)
Harsh J created HADOOP-12549:


 Summary: Extend HDFS-7456 default generically to all pattern 
lookups
 Key: HADOOP-12549
 URL: https://issues.apache.org/jira/browse/HADOOP-12549
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, security
Affects Versions: 2.7.1
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor


In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
behaviour of trusting all principals (as was the case before HADOOP-9789). 
However, the change only targeted HDFS users and also only those that used the 
default-loading mechanism of Configuration class (i.e. not {{new 
Configuration(false)}} users).

I'd like to propose adding the same default to the generic RPC client code 
also, so the default affects all form of clients equally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12540:
---
Status: Patch Available  (was: Open)

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988483#comment-14988483
 ] 

Chris Nauroth commented on HADOOP-12548:


Also, this documentation page describes the current configuration setup, in 
case you're looking for something more human readable.

http://hadoop.apache.org/docs/r2.7.1/hadoop-aws/tools/hadoop-aws/index.html

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988482#comment-14988482
 ] 

Chris Nauroth commented on HADOOP-12548:


Hi [~lmccay].  Thanks for jumping in!

The current codebase uses this class to encapsulate retrieving the S3 
credentials:

https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3Credentials.java

Here is where you can see the s3n file system using that class:

https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java#L79-L80

The s3a file system does it a little differently, without going through the 
{{S3Credentials}} class:

https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L160-L161

IMO, we don't need to worry about looking at the s3 file system, only s3n and 
s3a.  Maybe Allen can confirm or deny that though as the requester.

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12540:
---
Attachment: HADOOP-12540.03.patch

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch, 
> HADOOP-12540.03.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988501#comment-14988501
 ] 

Gaurav Kanade commented on HADOOP-12540:


Done, yes that was missing; addressed it now. The new v03 patch should have the 
fix.

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch, 
> HADOOP-12540.03.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988498#comment-14988498
 ] 

Chris Nauroth commented on HADOOP-12540:


I see patch v02 now sets {{leaseID}} to {{null}}, but then there is still an 
unguarded {{releaseLease}} in the {{finally}} block.  Similar to my last 
question, is it legal to pass {{null}} to the {{releaseLease}} call?  
Intuitively, I would think not, so I was really looking for this in the 
{{finally}} block:

{code}
if (leaseID != null) {
  testAccount.releaseLease(leaseID, fileName);
}
{code}

BTW, I noticed you deleted the old v02 patch and uploaded a new one with the 
same name.  Instead, would you please leave the old files there and use a new 
file name with the revision number incremented?  We prefer to maintain an 
unaltered history of the patch revisions.

Thanks again!

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch, 
> HADOOP-12540.03.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-11-03 Thread Jim VanOosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim VanOosten reassigned HADOOP-10420:
--

Assignee: Jim VanOosten

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>Assignee: Jim VanOosten
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420-010.patch, HADOOP-10420-011.patch, HADOOP-10420-012.patch, 
> HADOOP-10420-013.patch, HADOOP-10420-014.patch, HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12543) Path#isRoot() returns true if the path is ".".

2015-11-03 Thread Kazuho Fujii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988648#comment-14988648
 ] 

Kazuho Fujii commented on HADOOP-12543:
---

Hi, thanks for the comment.

{{new Path(new Path("/tmp"), new Path(".."))}} equals {{new Path("/")}}. {{new 
Path("/").isRoot()}} equals true. This is OK.

The behavior for a relative path is misleading. {{new Path("..").getParent()}} 
equals {{new Path(".")}}.  {{new Path(".").isRoot()}} always returns true in 
the current implementation.

A Path object doesn't know the working directory, so it cannot say whether a 
relative path represents the root directory.

Just from the method name and the comment, I misunderstood it to return true 
iff the path is "/". 

> Path#isRoot() returns true if the path is ".".
> --
>
> Key: HADOOP-12543
> URL: https://issues.apache.org/jira/browse/HADOOP-12543
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Kazuho Fujii
> Attachments: HADOOP-12543.001.patch
>
>
> {{Path#isRoot()}} method is expected to return true if and only if the path 
> represents the root of a file system. But, it returns true in the case where 
> the path is ".". This is because {{getParent()}} method returns null when the 
> path in URI is empty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2015-11-03 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11540:
---
Attachment: HADOOP-11540-v2.patch

Updated the patch based on the latest HADOOP-11996 patch.

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v2.patch, Native Erasure Coder Performance - Intel ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988540#comment-14988540
 ] 

Chris Nauroth commented on HADOOP-12548:


bq. Looks like secretAccessKey is already using CredentialProvider API in fs 
S3Credentials.

Oh, you're right!  I missed that.

There is still a gap for s3a, which is using plain {{Configuration#get}} to 
retrieve its key.

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988439#comment-14988439
 ] 

Gaurav Kanade commented on HADOOP-12540:


Thanks [~cnauroth] for the review. Addressed your comments and re-submitting 
patch now

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8065) discp should have an option to compress data while copying.

2015-11-03 Thread Stephen Veiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Veiss updated HADOOP-8065:
--
Status: Patch Available  (was: Open)

We needed this feature internally, so I updated the patch for CDH 5.4 and from 
there to current trunk. I've attached the trunk version of the patch.

This also includes some tests, and doesn't allow a compression codec to be 
specified for an update, as there's no easy way to tell if a file is different 
between source and destination if one side is compressed.

> discp should have an option to compress data while copying.
> ---
>
> Key: HADOOP-8065
> URL: https://issues.apache.org/jira/browse/HADOOP-8065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Suresh Antony
>Priority: Minor
>  Labels: distcp
> Fix For: 0.20.2
>
> Attachments: HADOOP-8065-trunk_2015-11-03.patch, 
> patch.distcp.2012-02-10
>
>
> We would like compress the data while transferring from our source system to 
> target system. One way to do this is to write a map/reduce job to compress 
> that after/before being transferred. This looks inefficient. 
> Since distcp already reading writing data it would be better if it can 
> accomplish while doing this. 
> Flip side of this is that distcp -update option can not check file size 
> before copying data. It can only check for the existence of file. 
> So I propose if -compress option is given then file size is not checked.
> Also when we copy file appropriate extension needs to be added to file 
> depending on compression type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12540:
---
Attachment: HADOOP-12540.02.patch

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8065) discp should have an option to compress data while copying.

2015-11-03 Thread Stephen Veiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Veiss updated HADOOP-8065:
--
Attachment: HADOOP-8065-trunk_2015-11-03.patch

> discp should have an option to compress data while copying.
> ---
>
> Key: HADOOP-8065
> URL: https://issues.apache.org/jira/browse/HADOOP-8065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Suresh Antony
>Priority: Minor
>  Labels: distcp
> Fix For: 0.20.2
>
> Attachments: HADOOP-8065-trunk_2015-11-03.patch, 
> patch.distcp.2012-02-10
>
>
> We would like compress the data while transferring from our source system to 
> target system. One way to do this is to write a map/reduce job to compress 
> that after/before being transferred. This looks inefficient. 
> Since distcp already reading writing data it would be better if it can 
> accomplish while doing this. 
> Flip side of this is that distcp -update option can not check file size 
> before copying data. It can only check for the existence of file. 
> So I propose if -compress option is given then file size is not checked.
> Also when we copy file appropriate extension needs to be added to file 
> depending on compression type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12540:
---
Attachment: HADOOP-12540.02.patch

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12540:
---
Attachment: (was: HADOOP-12540.02.patch)

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988505#comment-14988505
 ] 

Chris Nauroth commented on HADOOP-12540:


Patch v03 looks good.  +1 pending Jenkins.

> TestAzureFileSystemInstrumentation#testClientErrorMetrics fails 
> intermittently due to assumption that a lease error will be thrown.
> ---
>
> Key: HADOOP-12540
> URL: https://issues.apache.org/jira/browse/HADOOP-12540
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, test
>Reporter: Chris Nauroth
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12540.01.patch, HADOOP-12540.02.patch, 
> HADOOP-12540.03.patch
>
>
> HADOOP-12508 changed the behavior of an Azure Storage lease violation during 
> deletes.  It appears that 
> {{TestAzureFileSystemInstrumentation#testClientErrorMetrics}} is partly 
> dependent on the old behavior for simulating an error to be tracked by the 
> metrics system.  I am seeing intermittent failures in this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2015-11-03 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-12549:
-
Status: Patch Available  (was: Open)

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-11-03 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988603#comment-14988603
 ] 

Kai Zheng commented on HADOOP-11887:


Thanks Colin a lot for the more review and the commit is very helpful for 
subsequent tasks. Yes, I can also address refinements in HADOOP-11996 and 
HADOOP-11540 if any, since the both are laid on this.

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988662#comment-14988662
 ] 

Hadoop QA commented on HADOOP-12537:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 8s 
{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
11s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s 
{color} | {color:red} branch/hadoop-project no findbugs output file 
(hadoop-project/target/findbugsXml.xml) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 58s 
{color} | {color:red} Patch generated 2 new checkstyle issues in root (total 
was 62, now 63). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s 
{color} | {color:red} patch/hadoop-project no findbugs output file 
(hadoop-project/target/findbugsXml.xml) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 38s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 24s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (HADOOP-12540) TestAzureFileSystemInstrumentation#testClientErrorMetrics fails intermittently due to assumption that a lease error will be thrown.

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988712#comment-14988712
 ] 

Hadoop QA commented on HADOOP-12540:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 21s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 
Image:test-patch-base-hadoop-date2015-11-04 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770460/HADOOP-12540.03.patch 
|
| JIRA Issue | HADOOP-12540 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 1d30182d439c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-d0f6847/precommit/personality/hadoop.sh
 |
| git revision | trunk / 194251c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| JDK v1.7.0_79  Test Results | 

[jira] [Updated] (HADOOP-12526) [Branch-2] there are duplicate dependency definitions in pom's

2015-11-03 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12526:
-
Target Version/s: 2.8.0, 2.6.3, 2.7.3  (was: 2.7.3)

I'm fine with moving it to 2.7.3. I added back 2.8.0 and 2.6.3 as we want them 
in those releases too.

> [Branch-2] there are duplicate dependency definitions in pom's
> --
>
> Key: HADOOP-12526
> URL: https://issues.apache.org/jira/browse/HADOOP-12526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0, 2.7.1, 2.6.2
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12526-branch-2.001.patch, 
> HADOOP-12526-branch-2.6.001.patch
>
>
> There are several places where dependencies are defined multiple times within 
> pom's, and are causing maven build warnings. They should be fixed. This is 
> specific to branch-2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12545) Hadoop Javadoc has broken link for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider and DistCp

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12545:
-
Target Version/s: 2.7.3  (was: 2.7.2)

Moving out all non-critical / non-blocker issues that didn't make it out of 
2.7.2 into 2.7.3. Please revert back if you disagree.

> Hadoop Javadoc has broken link for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider and DistCp
> 
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Attachments: HADOOP-12545-01.patch
>
>
> 1) open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList", The page shows "This page can’t be displayed"
> Same error for DistCp, ImpersonationProvider and DefaultImpersonationProvider 
> also.
> Javadoc generated from Trunk has the same problem



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12482) Race condition in JMX cache update

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12482:
-
Target Version/s: 2.7.3  (was: 2.7.2, 3.0.0)

Moving out all non-critical / non-blocker issues that didn't make it out of 
2.7.2 into 2.7.3. Please revert back if you disagree.

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarilly advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # metrics sources is updated with new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However because lastRecs is already updated (!= null), getAllMetrics will 
> not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such case getMBeanInfo() may never be able to 
> retrieve the new record.
> The desired behavior should be that updateJmxCache() will guarantee to call 
> updateInfoCache() once after jmxCacheTTL, if lastRecs has been set to null by 
> updateJmxCache() itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12208) Wrong token in the authentication header when there is re-authentication request

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12208:
-
Target Version/s: 2.7.3  (was: 2.7.2)

Moving out all non-critical / non-blocker issues that didn't make it out of 
2.7.2 into 2.7.3. Please revert back if you disagree.

> Wrong token in the authentication header when there is re-authentication 
> request
> 
>
> Key: HADOOP-12208
> URL: https://issues.apache.org/jira/browse/HADOOP-12208
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.6.0
>Reporter: Gil Vernik
>Assignee: Gil Vernik
> Attachments: token-header-fix-0001.patch
>
>
> When authentication token expires, Swift returns 401. In this case the exec 
> method from SwiftRestClient catches this exception and performs another 
> authentication request to renew the token. If authentication successful, exec 
> method retry original request. However, the bug is that retry still uses old 
> token in a header and doesn't update it with a new one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988525#comment-14988525
 ] 

Larry McCay commented on HADOOP-12548:
--

Looks like secretAccessKey is already using CredentialProvider API in fs 
S3Credentials.

{code}
if (secretAccessKey == null) {
  final char[] pass = conf.getPassword(secretAccessKeyProperty);
  if (pass != null) {
secretAccessKey = (new String(pass)).trim();
  }
}
{code}

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2015-11-03 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-12549:
-
Attachment: HADOOP-12549.patch

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10642) Provide option to limit heap memory consumed by dynamic metrics2 metrics

2015-11-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10642:

Description: 
User sunweiei provided the following jmap output in HBase 0.96 deployment:
{code}
 num #instances #bytes  class name
--
   1:  14917882 3396492464  [C
   2:   1996994 2118021808  [B
   3:  43341650 1733666000  java.util.LinkedHashMap$Entry
   4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
   5:  14446577  924580928  
org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
{code}

Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
metrics2/lib/MetricsRegistry.java.

This scenario arises when a large number of regions is tracked through 
metrics2 dynamically.
Interns class doesn't provide API to remove entries in its internal Map.

One solution is to provide an option that allows skipping calls to 
Interns.info() in metrics2/lib/MetricsRegistry.java

  was:
User sunweiei provided the following jmap output in HBase 0.96 deployment:
{code}
 num #instances #bytes  class name
--
   1:  14917882 3396492464  [C
   2:   1996994 2118021808  [B
   3:  43341650 1733666000  java.util.LinkedHashMap$Entry
   4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
   5:  14446577  924580928  
org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
{code}
Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
metrics2/lib/MetricsRegistry.java.

This scenario would arise when large number of regions are tracked through 
metrics2 dynamically.
Interns class doesn't provide API to remove entries in its internal Map.

One solution is to provide an option that allows skipping calls to 
Interns.info() in metrics2/lib/MetricsRegistry.java


> Provide option to limit heap memory consumed by dynamic metrics2 metrics
> 
>
> Key: HADOOP-10642
> URL: https://issues.apache.org/jira/browse/HADOOP-10642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Ted Yu
>
> User sunweiei provided the following jmap output from an HBase 0.96 deployment:
> {code}
>  num #instances #bytes  class name
> --
>1:  14917882 3396492464  [C
>2:   1996994 2118021808  [B
>3:  43341650 1733666000  java.util.LinkedHashMap$Entry
>4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
>5:  14446577  924580928  
> org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
> {code}
> Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
> due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
> metrics2/lib/MetricsRegistry.java.
> This scenario arises when a large number of regions is tracked through 
> metrics2 dynamically.
> Interns class doesn't provide API to remove entries in its internal Map.
> One solution is to provide an option that allows skipping calls to 
> Interns.info() in metrics2/lib/MetricsRegistry.java
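The proposed option can be sketched with a toy interning cache. The class names and flag below are hypothetical stand-ins, not the actual org.apache.hadoop.metrics2.lib code: the idea is simply that when the skip option is set, info() stops retaining entries, so dynamically created per-region metrics cannot grow the heap without bound.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the metric-info records held by the Interns cache.
class MetricsInfo {
    final String name, description;
    MetricsInfo(String name, String description) {
        this.name = name;
        this.description = description;
    }
}

// Sketch: an interning cache whose use can be disabled to cap heap growth.
class InterningRegistry {
    private final Map<String, MetricsInfo> cache = new HashMap<>();
    private final boolean skipInterning;  // the proposed option

    InterningRegistry(boolean skipInterning) { this.skipInterning = skipInterning; }

    MetricsInfo info(String name, String description) {
        if (skipInterning) {
            // Nothing is retained, so dynamically created metrics
            // (e.g. one per HBase region) cannot accumulate in the map.
            return new MetricsInfo(name, description);
        }
        return cache.computeIfAbsent(name, n -> new MetricsInfo(n, description));
    }

    int cachedEntries() { return cache.size(); }
}

public class InternsSketch {
    public static void main(String[] args) {
        InterningRegistry interning = new InterningRegistry(false);
        InterningRegistry skipping = new InterningRegistry(true);
        for (int i = 0; i < 1000; i++) {
            interning.info("region-" + i, "per-region metric");
            skipping.info("region-" + i, "per-region metric");
        }
        if (interning.cachedEntries() != 1000 || skipping.cachedEntries() != 0) {
            throw new AssertionError("unexpected cache sizes");
        }
        System.out.println("interned=" + interning.cachedEntries()
            + " skipped=" + skipping.cachedEntries());
    }
}
```

The trade-off, of course, is that skipping interning gives up object reuse for frequently repeated metric names in exchange for bounded retention.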





[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988671#comment-14988671
 ] 

Larry McCay commented on HADOOP-12548:
--

[~cnauroth] - I can add it to s3a. Are we okay with leaving accessKey the way 
it is or do we want to get that from credentials as well?

[~aw] - what would you like to see for the example?

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.
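One plausible shape for this, sketched under stated assumptions: the property key names below reuse the existing fs.s3a.* configuration names purely for illustration and are not an agreed-upon file format; the loader itself is just java.util.Properties.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

// Sketch: load S3 credentials from a properties file instead of a JVM
// property. Key names are illustrative, not a settled format.
public class S3CredsFile {
    static String[] loadCreds(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        String accessKey = props.getProperty("fs.s3a.access.key");
        String secretKey = props.getProperty("fs.s3a.secret.key");
        if (accessKey == null || secretKey == null) {
            throw new IOException("credential file is missing a key");
        }
        return new String[] { accessKey, secretKey };
    }

    public static void main(String[] args) throws IOException {
        // In practice this Reader would come from a permission-restricted file.
        String fileBody = "fs.s3a.access.key=AKIAEXAMPLE\n"
            + "fs.s3a.secret.key=not-a-real-secret\n";
        String[] creds = loadCreds(new StringReader(fileBody));
        if (!creds[0].equals("AKIAEXAMPLE")) {
            throw new AssertionError("unexpected access key");
        }
        System.out.println("access=" + creds[0]);
    }
}
```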





[jira] [Updated] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-12544:

Attachment: HADOOP-12544.3.patch

Thanks Zhe for the suggestions. Updated the patch to add more documentation.

> Erasure Coding: create dummy raw coder
> --
>
> Key: HADOOP-12544
> URL: https://issues.apache.org/jira/browse/HADOOP-12544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12544.1.patch, HADOOP-12544.2.patch, 
> HADOOP-12544.3.patch
>
>
> Create a dummy raw coder which does no computation and simply returns zero 
> bytes.
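The described behaviour is simple enough to sketch. This is a simplified stand-in, not the actual RawErasureEncoder interface from the patch: the encoder ignores its inputs entirely and just writes zero bytes into every parity output, which is useful for benchmarking and testing the surrounding pipeline without coder cost.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Sketch of the "dummy" idea: an encoder that performs no real erasure
// computation and simply fills every parity output with zero bytes.
public class DummyRawEncoder {
    static void encode(ByteBuffer[] inputs, ByteBuffer[] outputs) {
        // inputs are deliberately ignored; no computation happens.
        for (ByteBuffer out : outputs) {
            while (out.hasRemaining()) {
                out.put((byte) 0);
            }
        }
    }

    public static void main(String[] args) {
        ByteBuffer[] data = { ByteBuffer.wrap(new byte[] { 1, 2, 3, 4 }) };
        ByteBuffer[] parity = { ByteBuffer.allocate(4) };
        encode(data, parity);
        if (!Arrays.equals(parity[0].array(), new byte[4])) {
            throw new AssertionError("parity should be all zeros");
        }
        System.out.println(Arrays.toString(parity[0].array()));  // [0, 0, 0, 0]
    }
}
```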





[jira] [Commented] (HADOOP-8065) distcp should have an option to compress data while copying.

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988753#comment-14988753
 ] 

Hadoop QA commented on HADOOP-8065:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} Patch generated 7 new checkstyle issues in 
hadoop-tools/hadoop-distcp (total was 145, now 152). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 50s 
{color} | {color:red} hadoop-tools_hadoop-distcp-jdk1.8.0_60 with JDK v1.8.0_60 
generated 1 new issues (was 51, now 52). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 12s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 38s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-04 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770439/HADOOP-8065-trunk_2015-11-03.patch
 |
| JIRA Issue | HADOOP-8065 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 47c631376b36 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12544) Erasure Coding: create dummy raw coder

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988957#comment-14988957
 ] 

Hadoop QA commented on HADOOP-12544:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 25s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 12s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-04 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770495/HADOOP-12544.3.patch |
| JIRA Issue | HADOOP-12544 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux a74737cdffe4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-d0f6847/precommit/personality/hadoop.sh
 |
| git revision | trunk / 194251c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| 

[jira] [Commented] (HADOOP-11957) if an IOException is thrown in DomainSocket.close we go into an infinite loop.

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988836#comment-14988836
 ] 

Hadoop QA commented on HADOOP-11957:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 51s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 17s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.ipc.TestDecayRpcScheduler |
|   | hadoop.net.unix.TestDomainSocket |
|   | hadoop.metrics2.sink.TestFileSink |
| JDK v1.7.0_79 Failed junit tests | hadoop.net.unix.TestDomainSocket |
|   | hadoop.metrics2.sink.TestFileSink |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-04 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12732085/HADOOP-11957.001.patch
 |
| JIRA Issue | HADOOP-11957 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 1ecb046fa009 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-03 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-11684:
--
Attachment: HADOOP-11684-005.patch

005 patch addresses [~eddyxu]'s comments on the test I added:
- No wildcard static import.
- Use assert instead of runtime exception.
- camelCasify

Left sleep-based implementation because detecting "blocking" is racy no matter 
what.

Ran all the s3a unit tests again. 

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch
>
>
> Currently, if fs.s3a.max.total.tasks tasks are already queued and another 
> (part) upload wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception. For instance, something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java
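The blocking idea linked above can be sketched with a plain ThreadPoolExecutor whose rejection handler blocks the submitting thread instead of throwing. This is illustrative only, not the patch's BlockingThreadPoolExecutorService: when the bounded queue is full, the handler re-offers the task with a blocking put(), so callers are throttled rather than rejected.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a bounded pool that blocks submitters instead of rejecting them.
public class BlockingPoolSketch {
    static ThreadPoolExecutor newBlockingPool(int threads, int queueSize) {
        return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>(queueSize),
            (task, executor) -> {
                try {
                    // Block the caller until queue space frees up,
                    // instead of throwing RejectedExecutionException.
                    executor.getQueue().put(task);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RejectedExecutionException(e);
                }
            });
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = newBlockingPool(2, 2);
        CountDownLatch done = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            // With a plain bounded pool, this loop would throw once two
            // tasks are running and two are queued; here it just waits.
            pool.execute(() -> {
                try { Thread.sleep(20); } catch (InterruptedException ignored) { }
                done.countDown();
            });
        }
        boolean finished = done.await(5, TimeUnit.SECONDS);
        pool.shutdown();
        if (!finished) {
            throw new AssertionError("tasks did not finish");
        }
        System.out.println("all tasks completed: " + finished);
    }
}
```

One known caveat of this put()-based handler is a small race around shutdown (a task can be queued after shutdown() and never run), which a production implementation would need to handle.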





[jira] [Commented] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14988804#comment-14988804
 ] 

Hadoop QA commented on HADOOP-10420:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 42s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-04 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770467/HADOOP-10420-014.patch
 |
| JIRA Issue | HADOOP-10420 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  site  mvnsite  |
| uname | Linux 63ff64730d11 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
