[jira] [Created] (HADOOP-13012) yetus-wrapper should fail sooner when download fails

2016-04-09 Thread Steven Wong (JIRA)
Steven Wong created HADOOP-13012:


 Summary: yetus-wrapper should fail sooner when download fails
 Key: HADOOP-13012
 URL: https://issues.apache.org/jira/browse/HADOOP-13012
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Reporter: Steven Wong
Assignee: Steven Wong
Priority: Minor


When yetus-wrapper cannot download the Yetus tarball (for example, because the 
download server is down), it currently fails only at the later gunzip step. It 
would be better to fail right away, at the download (curl) step.
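A minimal sketch of the proposed behaviour, assuming a simplified wrapper (the 
function and file names here are hypothetical, not the actual yetus-wrapper 
code): check curl's exit status and stop before gunzip ever runs; curl's -f 
flag makes it exit non-zero on HTTP errors.

```shell
# Hypothetical sketch, not the actual yetus-wrapper code.
# Fail fast if the download does not succeed, instead of letting a
# missing or truncated tarball surface later as a gunzip error.
download_tarball() {
  local url="$1" out="$2"
  # -f: exit non-zero on HTTP 4xx/5xx; -s: silent; -L: follow redirects
  if ! curl -f -s -L -o "${out}" "${url}"; then
    echo "ERROR: unable to download ${url}" 1>&2
    return 1
  fi
}

# Caller: abort before gunzip if the download failed.
# download_tarball "${YETUS_URL}" yetus.tar.gz || exit 1
# gunzip -c yetus.tar.gz | tar xpf -
```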



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #1317

2016-04-09 Thread Apache Jenkins Server
See 

Changes:

[epayne] MAPREDUCE-6633. AM should retry map attempts if the reduce task

[kasha] YARN-4927. TestRMHA#testTransitionedToActiveRefreshFail fails with

--
[...truncated 5572 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.716 sec - in 
org.apache.hadoop.io.TestDefaultStringifier
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.725 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.357 sec - in 
org.apache.hadoop.io.TestBytesWritable
Running org.apache.hadoop.io.erasurecode.rawcoder.TestDummyRawCoder
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestDummyRawCoder
Running org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoderLegacy
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoderLegacy
Running org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.39 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Running org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.832 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Running org.apache.hadoop.io.erasurecode.TestECSchema
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.132 sec - in 
org.apache.hadoop.io.erasurecode.TestECSchema
Running org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.432 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Running org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.111 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder
Running org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.8 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Running org.apache.hadoop.io.TestWritableUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.369 sec - in 
org.apache.hadoop.io.TestWritableUtils
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.379 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.748 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.406 sec - in 
org.apache.hadoop.io.TestVersionedWritable
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.788 sec - in 
org.apache.hadoop.io.TestEnumSetWritable
Running org.apache.hadoop.io.TestMapWritable
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0

Jenkins build is back to normal : Hadoop-Common-trunk #2608

2016-04-09 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-09 Thread Larry McCay (JIRA)
Larry McCay created HADOOP-13011:


 Summary: Clearly Document the Password Details for Keystore-based 
Credential Providers
 Key: HADOOP-13011
 URL: https://issues.apache.org/jira/browse/HADOOP-13011
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Larry McCay
Assignee: Larry McCay
 Fix For: 2.8.0


HADOOP-12942 discusses how unobvious the use of a default password is for the 
keystores behind keystore-based credential providers. This patch adds 
documentation to CredentialProviderAPI.md describing the different types of 
credential providers available and the password management details of the 
keystore-based ones.





[jira] [Created] (HADOOP-13010) Refactor raw erasure coders

2016-04-09 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-13010:
--

 Summary: Refactor raw erasure coders
 Key: HADOOP-13010
 URL: https://issues.apache.org/jira/browse/HADOOP-13010
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 3.0.0


This will refactor the raw erasure coders according to comments received so far:
* As discussed in HADOOP-11540 and suggested by [~cmccabe], it is better not to 
rely on class inheritance to reuse code; the shared logic can be moved into a 
utility class instead.
* As suggested by [~jingzhao] quite some time ago, it is better to have a state 
holder that keeps intermediate checking results for reuse within an 
encode/decode call.

This will not remove every inheritance level, since doing so is not clearly 
justified yet and would have a large impact. I do hope the end result of this 
refactoring makes the remaining levels clearer and easier to follow.
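A rough sketch of the two suggestions, with hypothetical class and method names 
(not the actual Hadoop code): the shared input checks move out of the coder 
class hierarchy into a static utility, and a small state holder caches the 
results of those checks for reuse within a single encode/decode call.

```java
// Hypothetical sketch; class and method names are illustrative only.
// Shared validation moves from an abstract base coder into a utility.
final class CoderUtil {
    private CoderUtil() {}

    // Checks that all present (non-erased) inputs have the same length
    // and returns that length; previously duplicated via inheritance.
    static int checkAndGetDataLength(byte[][] inputs) {
        int len = -1;
        for (byte[] input : inputs) {
            if (input == null) continue;            // erased unit
            if (len == -1) len = input.length;
            else if (input.length != len)
                throw new IllegalArgumentException("Input length mismatch");
        }
        if (len == -1) throw new IllegalArgumentException("All inputs erased");
        return len;
    }
}

// State holder: computed once per encode/decode call, then reused by
// every step of that call instead of re-validating each chunk.
final class DecodingState {
    final int dataLength;
    final int[] erasedIndexes;

    DecodingState(byte[][] inputs, int[] erasedIndexes) {
        this.dataLength = CoderUtil.checkAndGetDataLength(inputs);
        this.erasedIndexes = erasedIndexes;
    }
}
```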





Build failed in Jenkins: Hadoop-Common-trunk #2607

2016-04-09 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-12444 Support lazy seek in S3AInputStream. Rajesh Balamohan via

--
[...truncated 3847 lines...]
Generating 

Building index for all the packages and classes...
Generating 

Generating 

Generating 

Building index for all classes...
Generating 

Generating 

Generating 

Generating 

Generating 

[INFO] Building jar: 

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop MiniKDC 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-minikdc ---
[INFO] Deleting 

[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-minikdc ---
[INFO] There are 9 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-minikdc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.325 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.43 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 

[jira] [Resolved] (HADOOP-12997) s3a to pass PositionedReadable contract tests, improve readFully perf.

2016-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12997.
-
   Resolution: Fixed
Fix Version/s: 2.8.0

> s3a to pass PositionedReadable contract tests, improve readFully perf.
> --
>
> Key: HADOOP-12997
> URL: https://issues.apache.org/jira/browse/HADOOP-12997
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> Fix s3a so that it passes the new tests in HADOOP-12994
> Also: optimise readFully so that instead of a sequence of seek-read-seek 
> operations, it does an opening seek and retains that position as it loops 
> through the data
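A sketch of the optimisation described above, using java.io.RandomAccessFile as 
a stand-in for the S3A stream (the actual change is in S3AInputStream, not 
shown here): one opening seek, then a loop of reads that follows the stream's 
advancing position instead of seeking before every read.

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.RandomAccessFile;

final class ReadFullySketch {
    // One seek up front; subsequent reads continue from the current
    // position, so seek-read-seek collapses into seek-read-read-...
    static void readFully(RandomAccessFile in, long position,
                          byte[] buffer, int offset, int length)
            throws IOException {
        in.seek(position);                          // single opening seek
        int done = 0;
        while (done < length) {
            int n = in.read(buffer, offset + done, length - done);
            if (n < 0) {
                throw new EOFException(
                    "EOF after " + done + " of " + length + " bytes");
            }
            done += n;
        }
    }
}
```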





[jira] [Resolved] (HADOOP-12976) s3a toString to be meaningful in logs

2016-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12976.
-
   Resolution: Fixed
 Assignee: Steve Loughran
Fix Version/s: 2.8.0

> s3a toString to be meaningful in logs
> -
>
> Key: HADOOP-12976
> URL: https://issues.apache.org/jira/browse/HADOOP-12976
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Fix For: 2.8.0
>
>
> today's toString value is just the object ref; better to include the URL of 
> the FS
> Example:
> {code}
> Cleaning filesystem org.apache.hadoop.fs.s3a.S3AFileSystem@1f069dc1 
> {code}
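A sketch of the sort of change implied, with a hypothetical class and output 
format (the real patch is to S3AFileSystem, and its exact format may differ): 
override toString() to include the filesystem URI rather than the default 
class@hashcode form.

```java
import java.net.URI;

// Hypothetical stand-in for S3AFileSystem; the field and output format
// here are illustrative, not the actual patch.
class SketchFileSystem {
    private final URI uri;

    SketchFileSystem(URI uri) { this.uri = uri; }

    @Override
    public String toString() {
        // Meaningful in logs: identifies which store this instance talks to.
        return "S3AFileSystem{uri=" + uri + "}";
    }
}
```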





[jira] [Resolved] (HADOOP-11874) s3a can throw spurious IOEs on close()

2016-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11874.
-
   Resolution: Fixed
Fix Version/s: 2.8.0

> s3a can throw spurious IOEs on close()
> --
>
> Key: HADOOP-11874
> URL: https://issues.apache.org/jira/browse/HADOOP-11874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> from a code review, it's clear that the issue seen in HADOOP-11851 can 
> surface in S3a, though with HADOOP-11570 it's less likely: it will only 
> happen in those cases where abort() isn't called.
> The "clean" close() code path needs to catch IOEs from the wrappedStream and 
> call abort() in that situation too.
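A minimal sketch of the described fix, with hypothetical names (the real code 
lives in S3AInputStream, where abort() tears down the underlying HTTP request): 
the clean close() path catches an IOException from the wrapped stream and falls 
back to abort() instead of letting a spurious error escape to the caller.

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical stand-in for S3AInputStream, illustrative only.
class SketchS3AInputStream {
    private final InputStream wrappedStream;
    private boolean aborted;

    SketchS3AInputStream(InputStream wrappedStream) {
        this.wrappedStream = wrappedStream;
    }

    // Placeholder: the real abort() aborts the HTTP connection.
    void abort() { aborted = true; }

    void close() {
        try {
            wrappedStream.close();   // "clean" close of the HTTP stream
        } catch (IOException e) {
            abort();                 // late failure: abort rather than throw
        }
    }

    boolean wasAborted() { return aborted; }
}
```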





[jira] [Created] (HADOOP-13009) add option for lazy open() on s3a

2016-04-09 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13009:
---

 Summary: add option for lazy open() on s3a
 Key: HADOOP-13009
 URL: https://issues.apache.org/jira/browse/HADOOP-13009
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Steve Loughran


After lazy seek, I want to add a (very much non-default) lazy-open option.

If you look at a trace of what goes on with object store access, there's 
usually a GET at offset 0 (the {{open()}} call), followed by a {{seek()}}.

If there were a lazy-open option, then {{open()}} would set up the instance 
for reading but not actually talk to the object store; it'd be the first seek 
or read that hits the service. You'd eliminate one HTTP operation from a read 
sequence, for a faster startup time, especially long-haul.

That's a big break with the normal assumption that if a file isn't there, 
{{open()}} fails, so it'd only work with apps that do open+read, open+seek, or 
open+positioned-read back to back. Making it an option lets people experiment 
to see what happens, though full testing would need some fault injection on the 
first seek/read to see how code handles late failure.
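A sketch of the lazy-open idea under the assumptions above (hypothetical names; 
nothing here is actual S3A code): open() only records how to reach the object, 
and the first read is what actually contacts the store, which is also where a 
missing-file failure would now surface.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.Callable;

// Hypothetical sketch: defers the real "open" (the GET) to first use.
class LazyOpenStream {
    private final Callable<InputStream> opener;  // the deferred GET
    private InputStream in;                      // null until first use

    LazyOpenStream(Callable<InputStream> opener) { this.opener = opener; }

    boolean isOpened() { return in != null; }

    // First read/seek performs the real open; a missing file fails here,
    // not at construction time, which is the "late failure" noted above.
    private InputStream ensureOpen() throws IOException {
        if (in == null) {
            try {
                in = opener.call();
            } catch (Exception e) {
                throw new IOException("lazy open failed", e);
            }
        }
        return in;
    }

    int read() throws IOException { return ensureOpen().read(); }
}
```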





[jira] [Resolved] (HADOOP-12781) Enable fencing for logjam-protected ssh servers

2016-04-09 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe resolved HADOOP-12781.
--
Resolution: Fixed

Sorry, this patch does not work as intended. Closing it myself.

> Enable fencing for logjam-protected ssh servers
> ---
>
> Key: HADOOP-12781
> URL: https://issues.apache.org/jira/browse/HADOOP-12781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.7.2
> Environment: If a site uses logjam protected ssh servers no common 
> ciphers can be found and fencing breaks because the fencing process cannot be 
> initiated by zkfc.
>Reporter: Olaf Flebbe
>Assignee: Olaf Flebbe
> Attachments: HADOOP-12781.1.patch
>
>
> Version 0.1.53 of jsch incorporates changes that add ciphers for logjam 
> protection. See http://www.jcraft.com/jsch/ChangeLog.
> Since there are no developer-visible changes, updating the pom is sufficient.
> Double-checked in my environment.
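Since the fix is just a dependency bump, the pom change would be on the order 
of the fragment below (illustrative only; the exact version and the pom it 
lives in may differ):

```xml
<!-- Illustrative fragment: bump jsch to a release carrying the
     logjam-protection ciphers; not the actual patch. -->
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <version>0.1.53</version>
</dependency>
```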


