[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387317#comment-16387317
 ] 

Xiao Chen commented on HADOOP-14445:


Hm, actually the token auth was successful; it's just that the identifier failed 
to decode properly, so the log looks odd. Let me play with it a bit...

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, HADOOP-14445.003.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
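
For illustration, a minimal sketch of why the lookup misses under HA (mirroring 
the excerpt above; the class and method names here are illustrative only): the 
token-service key is derived from the host/port of the URL being opened, so two 
KMS instances produce two different keys, and a token obtained through one 
instance is never found when the client rotates to the other.
{code:java}
import java.net.InetSocketAddress;
import java.net.URL;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.SecurityUtil;

public class KmsTokenServiceKeys {
  // Same derivation as DelegationTokenAuthenticatedURL#openConnection:
  // the credentials lookup key depends on the exact host/port of the URL.
  static Text serviceFor(URL url) {
    InetSocketAddress addr = new InetSocketAddress(url.getHost(), url.getPort());
    return SecurityUtil.buildTokenService(addr);
  }
  // serviceFor(new URL("https://kms1:16000/kms")) and
  // serviceFor(new URL("https://kms2:16000/kms")) return different Text keys,
  // so creds.getToken(service) misses tokens stored under the other instance.
}
{code}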






[jira] [Assigned] (HADOOP-15291) TestMiniKDC fails with Java 9

2018-03-05 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HADOOP-15291:
-

Assignee: Takanobu Asanuma

> TestMiniKDC fails with Java 9
> -
>
> Key: HADOOP-15291
> URL: https://issues.apache.org/jira/browse/HADOOP-15291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.748 
> s <<< FAILURE! - in org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] testKerberosLogin(org.apache.hadoop.minikdc.TestMiniKdc)  Time 
> elapsed: 1.301 s  <<< ERROR!
> javax.security.auth.login.LoginException: 
> java.lang.NullPointerException: invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
>   at 
> java.base/java.util.Collections$SynchronizedCollection.remove(Collections.java:2039)
>   at 
> jdk.security.auth/com.sun.security.auth.module.Krb5LoginModule.logout(Krb5LoginModule.java:1193)
>   at 
> java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:732)
>   at 
> java.base/javax.security.auth.login.LoginContext.access$000(LoginContext.java:194)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
>   at java.base/java.security.AccessController.doPrivileged(Native Method)
>   at 
> java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
>   at 
> java.base/javax.security.auth.login.LoginContext.logout(LoginContext.java:613)
>   at 
> org.apache.hadoop.minikdc.TestMiniKdc.testKerberosLogin(TestMiniKdc.java:169)
> {noformat}






[jira] [Commented] (HADOOP-15291) TestMiniKDC fails with Java 9

2018-03-05 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387265#comment-16387265
 ] 

Takanobu Asanuma commented on HADOOP-15291:
---

Thanks for filing the issue. I would like to work on this JIRA.

> TestMiniKDC fails with Java 9
> -
>
> Key: HADOOP-15291
> URL: https://issues.apache.org/jira/browse/HADOOP-15291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.748 
> s <<< FAILURE! - in org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] testKerberosLogin(org.apache.hadoop.minikdc.TestMiniKdc)  Time 
> elapsed: 1.301 s  <<< ERROR!
> javax.security.auth.login.LoginException: 
> java.lang.NullPointerException: invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
>   at 
> java.base/java.util.Collections$SynchronizedCollection.remove(Collections.java:2039)
>   at 
> jdk.security.auth/com.sun.security.auth.module.Krb5LoginModule.logout(Krb5LoginModule.java:1193)
>   at 
> java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:732)
>   at 
> java.base/javax.security.auth.login.LoginContext.access$000(LoginContext.java:194)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
>   at java.base/java.security.AccessController.doPrivileged(Native Method)
>   at 
> java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
>   at 
> java.base/javax.security.auth.login.LoginContext.logout(LoginContext.java:613)
>   at 
> org.apache.hadoop.minikdc.TestMiniKdc.testKerberosLogin(TestMiniKdc.java:169)
> {noformat}






[jira] [Commented] (HADOOP-15292) Distcp's use of pread is slowing it down.

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387238#comment-16387238
 ] 

genericqa commented on HADOOP-15292:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 12 unchanged - 0 fixed = 13 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
56s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913138/HADOOP-15292.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3dd3887ac5e8 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745190e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14261/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14261/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console outpu

[jira] [Commented] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387228#comment-16387228
 ] 

Takanobu Asanuma commented on HADOOP-15287:
---

Thanks for the commit, [~ajisakaa]! I confirmed that {{mvn package -Pdist,native 
-Dtar -DskipTests}} now succeeds with Java 9. :)

> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15287.1.patch, HADOOP-15287.2.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML extends EImp implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549
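
For context, a tiny hypothetical example of the language change: the class below 
compiles on Java 8 but is rejected by javac/javadoc on Java 9+, which is exactly 
what trips over Hamlet's {{_()}} methods.
{code:java}
public class UnderscoreDemo {
  // Legal through Java 8. Starting with Java 9, '_' is a keyword, so this
  // declaration fails with: "as of release 9, '_' is a keyword, and may not
  // be used as an identifier".
  public UnderscoreDemo _() {
    return this;
  }
}
{code}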






[jira] [Commented] (HADOOP-15292) Distcp's use of pread is slowing it down.

2018-03-05 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387190#comment-16387190
 ] 

Íñigo Goiri commented on HADOOP-15292:
--

Thanks [~virajith] for the patch.
{{TestCopyMapper}} exercises this behavior, so we can check that it doesn't 
break. Not sure it's worth extending that unit test to track how many times we 
open the stream; probably not worth adding metrics, but we could extend the 
stream in the unit test and count how many times we open it (see the sketch 
below).

[~jingzhao], you implemented MAPREDUCE-5899; do you mind double-checking that 
this approach is correct?
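
As a rough sketch of that test idea (a hypothetical test-only class, not part of 
the patch): wrap the source FileSystem and count {{open()}} calls, so the test 
can assert that the copy opens the source stream only once per file.
{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

public class OpenCountingFileSystem extends FilterFileSystem {
  static final AtomicInteger OPEN_COUNT = new AtomicInteger();

  public OpenCountingFileSystem(FileSystem fs) {
    super(fs);
  }

  @Override
  public FSDataInputStream open(Path f, int bufferSize) throws IOException {
    OPEN_COUNT.incrementAndGet();  // record every stream creation
    return super.open(f, bufferSize);
  }
}
{code}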

> Distcp's use of pread is slowing it down.
> -
>
> Key: HADOOP-15292
> URL: https://issues.apache.org/jira/browse/HADOOP-15292
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Priority: Major
> Attachments: HADOOP-15292.000.patch
>
>
> Distcp currently uses positioned reads (in 
> RetriableFileCopyCommand#copyBytes) when the source offset is > 0. This 
> results in unnecessary overheads (a new BlockReader being created on the 
> client side, multiple readBlock() calls to the Datanodes, each of which 
> requires the creation of a BlockSender and an input stream to the 
> ReplicaInfo).






[jira] [Commented] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-03-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387187#comment-16387187
 ] 

Aaron Fabbri commented on HADOOP-14927:
---

OK... For some reason I do not hit this FNF exception.
{noformat}
[INFO] Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.563 
s - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal{noformat}
Will follow up with some debugging when I get time.

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-14927.001.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}






[jira] [Updated] (HADOOP-15292) Distcp's use of pread is slowing it down.

2018-03-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15292:

Status: Patch Available  (was: Open)

> Distcp's use of pread is slowing it down.
> -
>
> Key: HADOOP-15292
> URL: https://issues.apache.org/jira/browse/HADOOP-15292
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Priority: Major
> Attachments: HADOOP-15292.000.patch
>
>
> Distcp currently uses positioned reads (in 
> RetriableFileCopyCommand#copyBytes) when the source offset is > 0. This 
> results in unnecessary overheads (a new BlockReader being created on the 
> client side, multiple readBlock() calls to the Datanodes, each of which 
> requires the creation of a BlockSender and an input stream to the 
> ReplicaInfo).






[jira] [Commented] (HADOOP-15292) Distcp's use of pread is slowing it down.

2018-03-05 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387182#comment-16387182
 ] 

Virajith Jalaparti commented on HADOOP-15292:
-

The attached patch fixes this issue by replacing the positioned read with an 
initial seek, then reading the remaining data directly from the open stream.
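
A minimal sketch of that approach (illustrative names and buffer size; see the 
patch for the actual change to RetriableFileCopyCommand#copyBytes):
{code:java}
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SeekThenReadCopy {
  // Copy from 'offset' to EOF with one seek plus sequential reads, instead of
  // a positioned read per chunk (which re-creates a BlockReader every call).
  static void copyFrom(FileSystem fs, Path src, long offset, OutputStream out)
      throws IOException {
    try (FSDataInputStream in = fs.open(src)) {
      if (offset > 0) {
        in.seek(offset);  // reposition the stream once
      }
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) > 0) {  // sequential reads reuse the open block stream
        out.write(buf, 0, n);
      }
    }
  }
}
{code}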

> Distcp's use of pread is slowing it down.
> -
>
> Key: HADOOP-15292
> URL: https://issues.apache.org/jira/browse/HADOOP-15292
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Priority: Major
> Attachments: HADOOP-15292.000.patch
>
>
> Distcp currently uses positioned reads (in 
> RetriableFileCopyCommand#copyBytes) when the source offset is > 0. This 
> results in unnecessary overheads (a new BlockReader being created on the 
> client side, multiple readBlock() calls to the Datanodes, each of which 
> requires the creation of a BlockSender and an input stream to the 
> ReplicaInfo).






[jira] [Updated] (HADOOP-15292) Distcp's use of pread is slowing it down.

2018-03-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15292:

Attachment: HADOOP-15292.000.patch

> Distcp's use of pread is slowing it down.
> -
>
> Key: HADOOP-15292
> URL: https://issues.apache.org/jira/browse/HADOOP-15292
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Priority: Major
> Attachments: HADOOP-15292.000.patch
>
>
> Distcp currently uses positioned reads (in 
> RetriableFileCopyCommand#copyBytes) when the source offset is > 0. This 
> results in unnecessary overheads (a new BlockReader being created on the 
> client side, multiple readBlock() calls to the Datanodes, each of which 
> requires the creation of a BlockSender and an input stream to the 
> ReplicaInfo).






[jira] [Updated] (HADOOP-15292) Distcp's use of pread is slowing it down.

2018-03-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15292:

Attachment: HADOOP-15292.000.patch

> Distcp's use of pread is slowing it down.
> -
>
> Key: HADOOP-15292
> URL: https://issues.apache.org/jira/browse/HADOOP-15292
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Priority: Major
> Attachments: HADOOP-15292.000.patch
>
>
> Distcp currently uses positioned reads (in 
> RetriableFileCopyCommand#copyBytes) when the source offset is > 0. This 
> results in unnecessary overheads (a new BlockReader being created on the 
> client side, multiple readBlock() calls to the Datanodes, each of which 
> requires the creation of a BlockSender and an input stream to the 
> ReplicaInfo).






[jira] [Updated] (HADOOP-15292) Distcp's use of pread is slowing it down.

2018-03-05 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15292:

Attachment: (was: HADOOP-15292.000.patch)

> Distcp's use of pread is slowing it down.
> -
>
> Key: HADOOP-15292
> URL: https://issues.apache.org/jira/browse/HADOOP-15292
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Priority: Major
> Attachments: HADOOP-15292.000.patch
>
>
> Distcp currently uses positioned reads (in 
> RetriableFileCopyCommand#copyBytes) when the source offset is > 0. This 
> results in unnecessary overheads (a new BlockReader being created on the 
> client side, multiple readBlock() calls to the Datanodes, each of which 
> requires the creation of a BlockSender and an input stream to the 
> ReplicaInfo).






[jira] [Commented] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387177#comment-16387177
 ] 

Hudson commented on HADOOP-15287:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13775 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13775/])
HADOOP-15287. JDK9 JavaDoc build fails due to one-character underscore 
(aajisaka: rev 745190ecdca8f7dfc5eebffdd1c1aa4f86229120)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml


> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15287.1.patch, HADOOP-15287.2.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML extends EImp implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549






[jira] [Created] (HADOOP-15292) Distcp's use of pread is slowing it down.

2018-03-05 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HADOOP-15292:
---

 Summary: Distcp's use of pread is slowing it down.
 Key: HADOOP-15292
 URL: https://issues.apache.org/jira/browse/HADOOP-15292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Virajith Jalaparti


Distcp currently uses positioned reads (in RetriableFileCopyCommand#copyBytes) 
when the source offset is > 0. This results in unnecessary overheads (a new 
BlockReader being created on the client side, multiple readBlock() calls to the 
Datanodes, each of which requires the creation of a BlockSender and an input 
stream to the ReplicaInfo).






[jira] [Updated] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15287:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~tasanuma0829]!

> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15287.1.patch, HADOOP-15287.2.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML extends EImp implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549






[jira] [Commented] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387132#comment-16387132
 ] 

genericqa commented on HADOOP-15287:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15287 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913129/HADOOP-15287.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux d568ab319cf6 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4971276 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14260/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14260/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentati

[jira] [Commented] (HADOOP-15291) TestMiniKDC fails with Java 9

2018-03-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387119#comment-16387119
 ] 

Akira Ajisaka commented on HADOOP-15291:


In JDK 9, {{Subject$SecureSet.remove(null)}} throws an NPE: 
https://bugs.openjdk.java.net/browse/JDK-8173069
This situation happens when {{Krb5LoginModule.logout()}} is called twice.
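
A minimal reproduction sketch (untested; {{KerberosLogin}} is a hypothetical 
JAAS configuration entry, not the one the test uses):
{code:java}
import javax.security.auth.login.LoginContext;

public class DoubleLogout {
  public static void main(String[] args) throws Exception {
    LoginContext lc = new LoginContext("KerberosLogin");  // hypothetical JAAS entry
    lc.login();
    lc.logout();  // first logout succeeds
    lc.logout();  // on JDK 9, the second logout drives Krb5LoginModule.logout()
                  // to Subject$SecureSet.remove(null), which now throws NPE
  }
}
{code}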


> TestMiniKDC fails with Java 9
> -
>
> Key: HADOOP-15291
> URL: https://issues.apache.org/jira/browse/HADOOP-15291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.748 
> s <<< FAILURE! - in org.apache.hadoop.minikdc.TestMiniKdc
> [ERROR] testKerberosLogin(org.apache.hadoop.minikdc.TestMiniKdc)  Time 
> elapsed: 1.301 s  <<< ERROR!
> javax.security.auth.login.LoginException: 
> java.lang.NullPointerException: invalid null input(s)
>   at java.base/java.util.Objects.requireNonNull(Objects.java:246)
>   at 
> java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
>   at 
> java.base/java.util.Collections$SynchronizedCollection.remove(Collections.java:2039)
>   at 
> jdk.security.auth/com.sun.security.auth.module.Krb5LoginModule.logout(Krb5LoginModule.java:1193)
>   at 
> java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:732)
>   at 
> java.base/javax.security.auth.login.LoginContext.access$000(LoginContext.java:194)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
>   at 
> java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
>   at java.base/java.security.AccessController.doPrivileged(Native Method)
>   at 
> java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
>   at 
> java.base/javax.security.auth.login.LoginContext.logout(LoginContext.java:613)
>   at 
> org.apache.hadoop.minikdc.TestMiniKdc.testKerberosLogin(TestMiniKdc.java:169)
> {noformat}






[jira] [Created] (HADOOP-15291) TestMiniKDC fails with Java 9

2018-03-05 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15291:
--

 Summary: TestMiniKDC fails with Java 9
 Key: HADOOP-15291
 URL: https://issues.apache.org/jira/browse/HADOOP-15291
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Reporter: Akira Ajisaka


{noformat}
[INFO] Running org.apache.hadoop.minikdc.TestMiniKdc
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.748 s 
<<< FAILURE! - in org.apache.hadoop.minikdc.TestMiniKdc
[ERROR] testKerberosLogin(org.apache.hadoop.minikdc.TestMiniKdc)  Time elapsed: 
1.301 s  <<< ERROR!
javax.security.auth.login.LoginException: 
java.lang.NullPointerException: invalid null input(s)
at java.base/java.util.Objects.requireNonNull(Objects.java:246)
at 
java.base/javax.security.auth.Subject$SecureSet.remove(Subject.java:1172)
at 
java.base/java.util.Collections$SynchronizedCollection.remove(Collections.java:2039)
at 
jdk.security.auth/com.sun.security.auth.module.Krb5LoginModule.logout(Krb5LoginModule.java:1193)
at 
java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:732)
at 
java.base/javax.security.auth.login.LoginContext.access$000(LoginContext.java:194)
at 
java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
at 
java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at 
java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
at 
java.base/javax.security.auth.login.LoginContext.logout(LoginContext.java:613)
at 
org.apache.hadoop.minikdc.TestMiniKdc.testKerberosLogin(TestMiniKdc.java:169)
{noformat}






[jira] [Commented] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387088#comment-16387088
 ] 

Akira Ajisaka commented on HADOOP-15287:


+1 pending Jenkins. Thanks Takanobu.

> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15287.1.patch, HADOOP-15287.2.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML extends EImp implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549






[jira] [Commented] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387079#comment-16387079
 ] 

Takanobu Asanuma commented on HADOOP-15287:
---

Thanks for the review, [~ajisakaa]. I uploaded a new patch addressing your 
comments.

> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15287.1.patch, HADOOP-15287.2.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML extends EImp implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549






[jira] [Updated] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15287:
--
Attachment: HADOOP-15287.2.patch

> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15287.1.patch, HADOOP-15287.2.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML extends EImp implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549






[jira] [Commented] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-05 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387066#comment-16387066
 ] 

Takanobu Asanuma commented on HADOOP-15271:
---

Thanks for reviewing and committing it, [~ajisakaa]!

> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-common: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]   ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> {noformat}
> JDK9 JavaDoc cannot treat non-ascii characters due to 
> https://bugs.openjdk.java.net/browse/JDK-8188649.
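
A hypothetical one-liner that reproduces this class of failure: any multibyte 
character in a javadoc comment breaks the build when javadoc runs with a 
US-ASCII source encoding on JDK 9.
{code:java}
/**
 * The word "café" contains a multibyte character (0xC3 0xA9 in UTF-8), which
 * JDK 9 javadoc reports as "unmappable character for encoding US-ASCII".
 */
public class CafeDemo {
}
{code}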






[jira] [Commented] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387029#comment-16387029
 ] 

genericqa commented on HADOOP-15267:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 2 
new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
43s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15267 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913119/HADOOP-15267-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0facaae2c56b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4971276 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14259/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14259/testReport/ |
| Max. process+thread count | 350 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14259/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Comment Edited] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Anis Elleuch (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386962#comment-16386962
 ] 

Anis Elleuch edited comment on HADOOP-15267 at 3/5/18 11:34 PM:


[~ste...@apache.org], I added a new patch, HADOOP-15267-002.patch, which 
contains the integration test.

I tested with an AWS S3 bucket (vadmeste-hadoop, us-east-1) using the following 
command: 
{{mvn test -Dparallel-tests -Dscale -DtestsThreadCount=8 
-Dtest=ITestS3AHugeFilesSSECDiskBlocks}}

Please take a look.


was (Author: vadmeste):
[~ste...@apache.org] I added a new patch, HADOOP-15267-002.patch, which 
contains the integration test.

I tested with an AWS S3 bucket (vadmeste-hadoop, us-east-1) using the following 
command: 
{{mvn test -Dparallel-tests -Dscale -DtestsThreadCount=8 
-Dtest=ITestS3AHugeFilesSSECDiskBlocks}}

Please take a look.

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: HADOOP-15267-001.patch, HADOOP-15267-002.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size 
> to 5 MB, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send the SSE-C 
> headers in Put Object Part requests, as required by the AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue for a better clarification of the 
> problem.
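
In AWS SDK for Java (v1) terms, the requirement boils down to the following 
sketch (illustrative only, not the hadoop-aws patch itself): the same 
{{SSECustomerKey}} used to initiate the multipart upload must be set on every 
part request.
{code:java}
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.SSECustomerKey;
import com.amazonaws.services.s3.model.UploadPartRequest;

public class SseCParts {
  static void applySseC(InitiateMultipartUploadRequest initReq,
                        UploadPartRequest partReq,
                        SSECustomerKey key) {
    initReq.setSSECustomerKey(key);  // encryption declared at initiation...
    partReq.setSSECustomerKey(key);  // ...must be repeated on every part, or S3
                                     // rejects the part with the error above
  }
}
{code}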






[jira] [Commented] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Anis Elleuch (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386962#comment-16386962
 ] 

Anis Elleuch commented on HADOOP-15267:
---

[~ste...@apache.org] I added a new patch, HADOOP-15267-002.patch, which 
contains the integration test.

I tested with an AWS S3 bucket (vadmeste-hadoop, us-east-1) using the following 
command: 
{{mvn test -Dparallel-tests -Dscale -DtestsThreadCount=8 
-Dtest=ITestS3AHugeFilesSSECDiskBlocks}}

Please take a look.

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: HADOOP-15267-001.patch, HADOOP-15267-002.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size 
> to 5 MB, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send the SSE-C 
> headers in Put Object Part requests, as required by the AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue for a better clarification of the 
> problem.






[jira] [Commented] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386952#comment-16386952
 ] 

Hudson commented on HADOOP-15271:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13774 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13774/])
HADOOP-15271. Remove unicode multibyte characters from JavaDoc (aajisaka: rev 
49712766314932997af4135e12f20aa05dad58c6)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/package.html


> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-common: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]   ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> {noformat}
> JDK 9 JavaDoc cannot handle non-ASCII characters, due to 
> https://bugs.openjdk.java.net/browse/JDK-8188649.
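As a hedged aside (not part of the patch): a throwaway scanner like the one 
below can locate the offending non-ASCII characters before the javadoc run 
does; it assumes the sources are UTF-8.
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FindNonAscii {
  public static void main(String[] args) throws IOException {
    try (Stream<Path> files = Files.walk(Paths.get(args[0]))) {
      files.filter(p -> p.toString().endsWith(".java"))
           .forEach(FindNonAscii::scan);
    }
  }

  static void scan(Path p) {
    try {
      int line = 1;
      for (String s : Files.readAllLines(p)) {   // assumes UTF-8 input
        if (s.chars().anyMatch(c -> c > 127)) {  // any non-ASCII char
          System.out.println(p + ":" + line + ": " + s.trim());
        }
        line++;
      }
    } catch (IOException e) {
      System.err.println("unreadable: " + p);    // e.g. bad encoding
    }
  }
}
{code}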



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Anis Elleuch (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anis Elleuch updated HADOOP-15267:
--
Attachment: HADOOP-15267-002.patch

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: HADOOP-15267-001.patch, HADOOP-15267-002.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set  fs.s3a.multipart.size 
> to 5 Mb, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in Put Object Part as stated in AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue for a better clarification of the 
> problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386938#comment-16386938
 ] 

Xiao Chen commented on HADOOP-14445:


Hi [~daryn] and [~shahrs87],
I tried the new-token-kind approach, and it doesn't seem to work, because the 
token kind to secret manager mapping appears to be 1:1. 
[code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAuthenticationFilter.java#L75]

In other words, we cannot 'duplicate a new KMS_DELEGATION_TOKEN/uri token 
into a single kms-dt/host:port'. The Service field is not part of the token 
identifier and can be changed as we want. But the Kind *is* part of the 
identifier, so although the renewer can pick up a token with a changed kind, 
the server won't accept it. Am I missing anything here?

Will address all other comments soon
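For readers following along, a hedged illustration of the asymmetry described 
above, using the public Token API; the copy-token approach and names here are 
illustrative, not the patch itself:
{code:java}
// Service lives only on the client side, so re-keying a copy per KMS
// instance is safe; Kind selects the server-side secret manager (1:1),
// so a changed kind produces a token the server cannot verify.
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

class TokenRekeySketch {
  static <T extends TokenIdentifier> Token<T> cloneForInstance(
      Token<T> t, String hostPort) {
    Token<T> copy = new Token<>(t);        // same identifier + password
    copy.setService(new Text(hostPort));   // OK: service is client-side
    return copy;                           // only, never verified
    // copy.setKind(new Text("kms-dt"));   // NOT OK: the kind picks the
    // secret manager on the server, so the changed token won't verify
  }
}
{code}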

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, HADOOP-14445.003.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386927#comment-16386927
 ] 

Akira Ajisaka edited comment on HADOOP-15271 at 3/5/18 11:12 PM:
-

+1, committed this to trunk. Thanks [~tasanuma0829]!


was (Author: ajisakaa):
Committed this to trunk. Thanks [~tasanuma0829]!

> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-common: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]   ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> {noformat}
> JDK 9 JavaDoc cannot handle non-ASCII characters, due to 
> https://bugs.openjdk.java.net/browse/JDK-8188649.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386930#comment-16386930
 ] 

Akira Ajisaka commented on HADOOP-15287:


Thank you for providing the patch! Would you ignore only the 
{{org.apache.hadoop.yarn.webapp.hamlet}} package, without using a wildcard? 
I'm +1 if that is addressed.

> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15287.1.patch
>
>
> {{mvn --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
> javadoc:javadoc}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:javadoc (default-cli) 
> on project hadoop-yarn-common: An error has occurred in Javadoc report 
> generation:
> [ERROR] Exit code: 1 - 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:50:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   public class HTML extends EImp implements 
> HamletSpec.HTML {
> [ERROR]   ^
> [ERROR] 
> ./hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/Hamlet.java:92:
>  error: as of release 9, '_' is a keyword, and may not be used as an 
> identifier
> [ERROR]   return base().$href(href)._();
> [ERROR] ^
> ...
> {noformat}
> FYI: https://bugs.openjdk.java.net/browse/JDK-8061549
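A minimal reproduction of the rule, assuming the class below is compiled with 
JDK 9; a sketch, not Hadoop code:
{code:java}
public class UnderscoreKeyword {
  // int _ = 1;     // JDK 9 error: as of release 9, '_' is a keyword
  // void _() { }   // same error for method (and class) names
  int __ = 1;       // two underscores remain a legal identifier,
  void __() { }     // which is the obvious escape hatch for such APIs
}
{code}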



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-05 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15271:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~tasanuma0829]!

> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-common: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]   ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> {noformat}
> JDK 9 JavaDoc cannot handle non-ASCII characters, due to 
> https://bugs.openjdk.java.net/browse/JDK-8188649.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386870#comment-16386870
 ] 

genericqa commented on HADOOP-15289:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  8s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.ha.TestZKFailoverControllerStress |
|   | hadoop.fs.shell.TestCopyFromLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15289 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913104/HADOOP-15289-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 19ffd8d5fb75 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 245751f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14258/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14258/testReport/ |
| Max. process+thread count | 1385 (vs. ulimit of 1) |
| modul

[jira] [Commented] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-03-05 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386812#comment-16386812
 ] 

Chris Douglas commented on HADOOP-15289:


+1

> FileStatus.readFields() assertion incorrect
> ---
>
> Key: HADOOP-15289
> URL: https://issues.apache.org/jira/browse/HADOOP-15289
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15289-001.patch
>
>
> As covered in HBASE-20123 ("Backup test fails against hadoop 3"), I think 
> the assert at the end of {{FileStatus.readFields()}} is wrong; if you run the 
> code with assertions enabled against a directory, an IOE will be raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386776#comment-16386776
 ] 

Ted Yu commented on HADOOP-15289:
-

Thanks for the quick fix, Steve.

lgtm

> FileStatus.readFields() assertion incorrect
> ---
>
> Key: HADOOP-15289
> URL: https://issues.apache.org/jira/browse/HADOOP-15289
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15289-001.patch
>
>
> As covered in HBASE-20123 ("Backup test fails against hadoop 3"), I think 
> the assert at the end of {{FileStatus.readFields()}} is wrong; if you run the 
> code with assertions enabled against a directory, an IOE will be raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386777#comment-16386777
 ] 

genericqa commented on HADOOP-15209:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 32s{color} | {color:orange} root: The patch generated 2 new + 287 unchanged 
- 37 fixed = 289 total (was 324) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  3s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
46s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 46s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestDistCpSystem |
|   | hadoop.tools.TestDistCpSyncReverseFromTarget |
|   | hadoop.tools.TestDistCpSyncReverseFromSource |
|   | hadoop.tools.TestDistCpSync |
|   | hadoop.tools.mapred.TestCopyMapper |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913091/HADOOP-15209-005.patch
 |

[jira] [Resolved] (HADOOP-15290) Imprecise assertion in FileStatus w.r.t. symlink

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-15290.
-
Resolution: Duplicate

Dup of HADOOP-15289

> Imprecise assertion in FileStatus w.r.t. symlink
> 
>
> Key: HADOOP-15290
> URL: https://issues.apache.org/jira/browse/HADOOP-15290
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> In HBASE-20123, I logged the following stack trace:
> {code}
> 2018-03-03 14:46:10,858 ERROR [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
> java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
>   at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
>   at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
>   at 
> org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
>   at 
> org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
> {code}
> [~ste...@apache.org] pointed out that the assertion in FileStatus.java is not 
> accurate:
> {code}
> assert (isDirectory() && getSymlink() == null) || !isDirectory();
> {code}
> {quote}
> It's assuming that getSymlink() returns null if there is no symlink, but 
> instead it raises an exception.
> {quote}
> Steve proposed the following replacement:
> {code}
> assert !(isDirectory() && isSymlink());
> {code}
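The stack trace above shows why the original assert is itself the thing that 
fails. A hedged sketch of the failure mode; the method shape is inferred from 
the trace, and the field names are assumed:
{code:java}
// getSymlink() throws rather than returning null when the status is
// not a symlink (see FileStatus.getSymlink:338 in the trace above).
Path getSymlink() throws IOException {
  if (!isSymlink()) {
    throw new IOException("Path " + path + " is not a symbolic link");
  }
  return symlink;
}

// Old assert: for a directory, isDirectory() is true, so getSymlink()
// is evaluated and throws -- running with -ea turns every plain
// directory into an IOException:
//   assert (isDirectory() && getSymlink() == null) || !isDirectory();
// The proposed form never touches getSymlink():
//   assert !(isDirectory() && isSymlink());
{code}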



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15290) Imprecise assertion in FileStatus w.r.t. symlink

2018-03-05 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15290:
---

 Summary: Imprecise assertion in FileStatus w.r.t. symlink
 Key: HADOOP-15290
 URL: https://issues.apache.org/jira/browse/HADOOP-15290
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu


In HBASE-20123, I logged the following stack trace:
{code}
2018-03-03 14:46:10,858 ERROR [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path 
hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic link
java.io.IOException: Path 
hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic link
  at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
  at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
  at 
org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
  at 
org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
  at 
org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
  at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
  at 
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
  at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
  at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
  at 
org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
{code}
[~ste...@apache.org] pointed out that the assertion in FileStatus.java is not 
accurate:
{code}
assert (isDirectory() && getSymlink() == null) || !isDirectory();
{code}
{quote}
It's assuming that getSymlink() returns null if there is no symlink, but 
instead it raises an exception.
{quote}
Steve proposed the following replacement:
{code}
assert !(isDirectory() && isSymlink());
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15289:

Attachment: HADOOP-15289-001.patch

> FileStatus.readFields() assertion incorrect
> ---
>
> Key: HADOOP-15289
> URL: https://issues.apache.org/jira/browse/HADOOP-15289
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15289-001.patch
>
>
> As covered in HBASE-20123 ("Backup test fails against hadoop 3"), I think 
> the assert at the end of {{FileStatus.readFields()}} is wrong; if you run the 
> code with assertions enabled against a directory, an IOE will be raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386767#comment-16386767
 ] 

Steve Loughran commented on HADOOP-15289:
-

A one-line fix. No test, so I'd expect reviews from:

[~te...@apache.org], [~chris.douglas]

> FileStatus.readFields() assertion incorrect
> ---
>
> Key: HADOOP-15289
> URL: https://issues.apache.org/jira/browse/HADOOP-15289
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15289-001.patch
>
>
> As covered in HBASE-20123 ("Backup test fails against hadoop 3"), I think 
> the assert at the end of {{FileStatus.readFields()}} is wrong; if you run the 
> code with assertions enabled against a directory, an IOE will be raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15289:

Status: Patch Available  (was: Open)

> FileStatus.readFields() assertion incorrect
> ---
>
> Key: HADOOP-15289
> URL: https://issues.apache.org/jira/browse/HADOOP-15289
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15289-001.patch
>
>
> As covered in HBASE-20123 ("Backup test fails against hadoop 3"), I think 
> the assert at the end of {{FileStatus.readFields()}} is wrong; if you run the 
> code with assertions enabled against a directory, an IOE will be raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-03-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15289:
---

 Summary: FileStatus.readFields() assertion incorrect
 Key: HADOOP-15289
 URL: https://issues.apache.org/jira/browse/HADOOP-15289
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 3.1.0
Reporter: Steve Loughran


As covered in HBASE-20123 ("Backup test fails against hadoop 3"), I think the 
assert at the end of {{FileStatus.readFields()}} is wrong; if you run the code 
with assertions enabled against a directory, an IOE will be raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386731#comment-16386731
 ] 

Steve Loughran commented on HADOOP-15209:
-

+ Pull in HADOOP-8233, which skips the checksum check on 0-length files; saves 
an HTTP round trip again.

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch
>
>
> DistCP issues a delete(file) request even if the file is underneath an 
> already-deleted directory. This generates needless load on filesystems/object 
> stores and, if the store throttles deletes, can dramatically slow down the 
> delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.
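The "history of deleted directories" can be as small as a single path if the 
listing is processed in sorted order, which also keeps the heap concern above 
trivial. A minimal sketch of the idea under that assumption; the names are 
illustrative, not the actual committer code:
{code:java}
// Assumes the (sorted) target listing yields parents before children,
// so one remembered directory covers everything beneath it.
import org.apache.hadoop.fs.Path;

class DeletedDirTracker {
  private Path lastDeletedDir;             // most recently deleted dir

  /** @return true if p still needs its own delete() call. */
  boolean shouldDelete(Path p, boolean isDir) {
    if (lastDeletedDir != null && isUnder(p, lastDeletedDir)) {
      return false;                        // an ancestor is already gone
    }
    if (isDir) {
      lastDeletedDir = p;                  // deleting p covers children
    }
    return true;
  }

  private static boolean isUnder(Path child, Path ancestor) {
    for (Path q = child.getParent(); q != null; q = q.getParent()) {
      if (q.equals(ancestor)) {
        return true;
      }
    }
    return false;
  }
}
{code}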



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8233) Turn CRC checking off for 0 byte size and differing blocksizes

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386728#comment-16386728
 ] 

Steve Loughran commented on HADOOP-8233:


* The 0-byte case can be skipped safely.
* Blocksize is trouble: for filesystems != HDFS, there's no guarantee that a 
different blocksize implies a different checksum (see the sketch below).
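A hedged sketch of those two skip conditions as a helper; the method name is 
assumed, this is not the actual DistCp code:
{code:java}
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileStatus;

class CrcSkipSketch {
  static boolean crcCheckPointless(FileStatus src,
                                   FileChecksum srcSum,
                                   FileChecksum dstSum) {
    if (src.getLen() == 0) {
      return true;            // empty file: nothing to compare
    }
    if (srcSum == null || dstSum == null) {
      return true;            // one store exposes no checksum at all
    }
    // Different algorithm names (HDFS bakes bytes-per-CRC and block
    // counts into the name) can never match, so comparing them would
    // only report a false "corruption".
    return !srcSum.getAlgorithmName().equals(dstSum.getAlgorithmName());
  }
}
{code}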

> Turn CRC checking off for 0 byte size and differing blocksizes
> --
>
> Key: HADOOP-8233
> URL: https://issues.apache.org/jira/browse/HADOOP-8233
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.3
>Reporter: Dave Thompson
>Assignee: Dave Thompson
>Priority: Major
> Attachments: HADOOP-8233-branch-0.23.2.patch
>
>
> DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail with a checksum error, 
> sometimes when copying a 0-byte file. The root cause may be inconsistent 
> behaviour in HDFS when creating 0-byte files; however, distcp can avoid the 
> issue by not checking the CRC when the size is zero.
> Further, distcp fails the checksum check when copying between two clusters 
> that use different blocksizes. In this case it does not make sense to check 
> the CRC, as failure is guaranteed.
> We need to turn CRC checking off for the above two cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386708#comment-16386708
 ] 

Steve Loughran commented on HADOOP-15209:
-

And a manual distcp of hadoop-auth to S3 London, followed by a mvn clean, 
then a distcp -update -delete:
{code}
2018-03-05 19:29:31,084 [pool-6-thread-1] DEBUG s3a.S3AFileSystem 
(Listing.java:buildNextStatusBatch(486)) - Added 13 entries; ignored 0; 
hasNext=true; hasMoreObjects=false
2018-03-05 19:29:31,084 [pool-6-thread-1] DEBUG s3a.S3AFileSystem 
(Listing.java:sourceHasNext(378)) - Start iterating the provided status.
2018-03-05 19:29:31,088 [Thread-217] INFO  tools.SimpleCopyListing 
(SimpleCopyListing.java:printStats(608)) - Paths (files+dirs) cnt = 243; dirCnt 
= 58
2018-03-05 19:29:31,088 [Thread-217] INFO  tools.SimpleCopyListing 
(SimpleCopyListing.java:doBuildListing(402)) - Build file listing completed.
2018-03-05 19:29:31,109 [Thread-217] INFO  tools.DistCp 
(CopyListing.java:buildListing(94)) - Number of paths in the copy list: 243
2018-03-05 19:29:31,125 [Thread-217] INFO  tools.DistCp 
(CopyListing.java:buildListing(94)) - Number of paths in the copy list: 243
2018-03-05 19:29:31,142 [Thread-217] INFO  mapred.CopyCommitter 
(CopyCommitter.java:deleteMissing(414)) - Listing completed in 0:00:05.938
2018-03-05 19:29:31,147 [Thread-217] DEBUG s3a.S3AStorageStatistics 
(S3AStorageStatistics.java:incrementCounter(63)) - op_delete += 1  ->  1
2018-03-05 19:29:31,147 [Thread-217] DEBUG s3a.S3AStorageStatistics 
(S3AStorageStatistics.java:incrementCounter(63)) - op_get_file_status += 1  ->  
481
2018-03-05 19:29:31,147 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerGetFileStatus(2106)) - Getting path status for 
s3a://hwdev-steve-london/distcp/hadoop-common-project/hadoop-auth/target/classes
  (distcp/hadoop-common-project/hadoop-auth/target/classes)
2018-03-05 19:29:31,147 [Thread-217] DEBUG s3a.S3AStorageStatistics 
(S3AStorageStatistics.java:incrementCounter(63)) - object_metadata_requests += 
1  ->  691
2018-03-05 19:29:31,165 [Thread-217] DEBUG s3a.S3AStorageStatistics 
(S3AStorageStatistics.java:incrementCounter(63)) - object_metadata_requests += 
1  ->  692
2018-03-05 19:29:31,198 [Thread-217] DEBUG s3a.S3AStorageStatistics 
(S3AStorageStatistics.java:incrementCounter(63)) - object_list_requests += 1  
->  260
2018-03-05 19:29:31,222 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:s3GetFileStatus(2229)) - Found path as directory (with /): 
1/0
2018-03-05 19:29:31,222 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:s3GetFileStatus(2236)) - Prefix: 
distcp/hadoop-common-project/hadoop-auth/target/classes/META-INF/
2018-03-05 19:29:31,222 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1732)) - Delete path 
s3a://hwdev-steve-london/distcp/hadoop-common-project/hadoop-auth/target/classes
 - recursive true
2018-03-05 19:29:31,222 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1737)) - delete: Path is a directory: 
s3a://hwdev-steve-london/distcp/hadoop-common-project/hadoop-auth/target/classes
2018-03-05 19:29:31,222 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1759)) - Getting objects for directory prefix 
distcp/hadoop-common-project/hadoop-auth/target/classes/ to delete
2018-03-05 19:29:31,222 [Thread-217] DEBUG s3a.S3AStorageStatistics 
(S3AStorageStatistics.java:incrementCounter(63)) - object_list_requests += 1  
->  261
2018-03-05 19:29:31,260 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1769)) - Got object to delete 
distcp/hadoop-common-project/hadoop-auth/target/classes/META-INF/LICENSE.txt
2018-03-05 19:29:31,260 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1769)) - Got object to delete 
distcp/hadoop-common-project/hadoop-auth/target/classes/META-INF/NOTICE.txt
2018-03-05 19:29:31,260 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1769)) - Got object to delete 
distcp/hadoop-common-project/hadoop-auth/target/classes/org/apache/hadoop/security/authentication/client/AuthenticatedURL$1.class
2018-03-05 19:29:31,260 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1769)) - Got object to delete 
distcp/hadoop-common-project/hadoop-auth/target/classes/org/apache/hadoop/security/authentication/client/AuthenticatedURL$AuthCookieHandler.class
2018-03-05 19:29:31,260 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1769)) - Got object to delete 
distcp/hadoop-common-project/hadoop-auth/target/classes/org/apache/hadoop/security/authentication/client/AuthenticatedURL$Token.class
2018-03-05 19:29:31,261 [Thread-217] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerDelete(1769)) - Got object to delete 
distcp/hadoop-common-project/hadoop-auth/target/classes/org/apache/hadoop/security/authentication/client/AuthenticatedURL.class

[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15209:

Status: Patch Available  (was: Open)

Tested ITestS3AContractDistCp against S3 Ireland.

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch
>
>
> DistCP issues a delete(file) request even if the file is underneath an 
> already-deleted directory. This generates needless load on filesystems/object 
> stores and, if the store throttles deletes, can dramatically slow down the 
> delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386597#comment-16386597
 ] 

Steve Loughran commented on HADOOP-15209:
-

Patch 005.
* Improves CopyCommitter logging.
* Contains the checksum bypassing of HADOOP-15273, including not asking for a 
checksum on the dest FS if the src FS doesn't have one. That way, a distcp 
from localfs or a store FS without checksums (wasb, ...) avoids the HTTP/RPC 
call to the dest FS. Makes a big difference from localfs to S3.

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch
>
>
> DistCP issues a delete(file) request even if the file is underneath an 
> already-deleted directory. This generates needless load on filesystems/object 
> stores and, if the store throttles deletes, can dramatically slow down the 
> delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15209:

Attachment: HADOOP-15209-005.patch

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch
>
>
> DistCP issues a delete(file) request even if the file is underneath an 
> already-deleted directory. This generates needless load on filesystems/object 
> stores and, if the store throttles deletes, can dramatically slow down the 
> delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15090) Add ADL troubleshooting doc

2018-03-05 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15090:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.10.0
   Status: Resolved  (was: Patch Available)

> Add ADL troubleshooting doc
> ---
>
> Key: HADOOP-15090
> URL: https://issues.apache.org/jira/browse/HADOOP-15090
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/adl
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HADOOP-15090-001.patch, HADOOP-15090-branch-2.002.patch, 
> HADOOP-15090-branch-2.003.patch, HADOOP-15090-trunk.patch
>
>
> Add a troubleshooting section/doc to the ADL docs based on our experiences.
> This should not be a substitute for improving the diagnostics/fixing the 
> error messages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15090) Add ADL troubleshooting doc

2018-03-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386585#comment-16386585
 ] 

Akira Ajisaka commented on HADOOP-15090:


+1, already committed to branch-2 by Steve. Thank you, Masatake and Steve!

> Add ADL troubleshooting doc
> ---
>
> Key: HADOOP-15090
> URL: https://issues.apache.org/jira/browse/HADOOP-15090
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/adl
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HADOOP-15090-001.patch, HADOOP-15090-branch-2.002.patch, 
> HADOOP-15090-branch-2.003.patch, HADOOP-15090-trunk.patch
>
>
> Add a troubleshooting section/doc to the ADL docs based on our experiences.
> This should not be a substitute for improving the diagnostics/fixing the 
> error messages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15273) distcp to downgrade on checksum algorithm mismatch to "files unchanged"

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15273:

Summary: distcp to downgrade on checksum algorithm mismatch to "files 
unchanged"  (was: distcp error message on checksum mismatch is misleading when 
checksum protocol itself is different)

> distcp to downgrade on checksum algorithm mismatch to "files unchanged"
> ---
>
> Key: HADOOP-15273
> URL: https://issues.apache.org/jira/browse/HADOOP-15273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Minor
>
> When using distcp without {{-skipCRC}}, if there's a checksum mismatch 
> between the src and dest store types (e.g. hdfs to s3), the error message 
> talks about blocksize, even when it's the underlying checksum protocol 
> itself that is the cause of the failure:
> bq. Source and target differ in block-size. Use -pb to preserve block-sizes 
> during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. 
> (NOTE: By skipping checksums, one runs the risk of masking data-corruption 
> during file-transfer.)
> If the checksum types are fundamentally different, the error message should 
> say so.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15209:

Status: Open  (was: Patch Available)

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch
>
>
> DistCP issues a delete(file) request even if the file is underneath an 
> already-deleted directory. This generates needless load on filesystems/object 
> stores and, if the store throttles deletes, can dramatically slow down the 
> delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15209:

Status: Patch Available  (was: Open)

Patch 004.

Adds a retry on the getFileStatus call after a delete fails, to try to handle 
eventual-consistency quirks against, as usual, S3.

Also, more logging, including calling targetfs.toString at the end, so you get 
more of the updated stats from s3a.

Tested manually against S3A London.
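A hedged sketch of that retry; the helper name and the fixed one-second 
backoff are assumptions, not the patch itself:
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class DeleteRetrySketch {
  // After a failed delete, probe getFileStatus once more to tell
  // "already gone" apart from a listing that is still stale.
  static boolean deleteWithRetry(FileSystem fs, Path p)
      throws IOException, InterruptedException {
    if (fs.delete(p, true)) {
      return true;
    }
    Thread.sleep(1000);          // assumption: fixed 1s backoff
    try {
      fs.getFileStatus(p);       // still visible: a real failure
      return false;
    } catch (FileNotFoundException e) {
      return true;               // the store caught up; nothing to do
    }
  }
}
{code}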

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch
>
>
> DistCP issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386496#comment-16386496
 ] 

Hudson commented on HADOOP-15288:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13771 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13771/])
HADOOP-15288. TestSwiftFileSystemBlockLocation doesn't compile. (stevel: rev 
2e1e049bd007b1c5e69cfe839ec18c2b1877907a)
* (edit) 
hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBlockLocation.java


> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
> URL: https://issues.apache.org/jira/browse/HADOOP-15288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/swift
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: HADOOP-15288-001.patch
>
>
> TestSwiftFileSystemBlockLocation doesn't compile after the switch to the SLF4J 
> APIs. One-line fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15209:

Status: Open  (was: Patch Available)

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch
>
>
> DistCP issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15209:

Attachment: HADOOP-15209-004.patch

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch
>
>
> DistCP issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15282) HADOOP-15235 broke TestHttpFSServerWebServer

2018-03-05 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386482#comment-16386482
 ] 

Robert Kanter commented on HADOOP-15282:


Thanks for the review [~ajisakaa]!

> HADOOP-15235 broke TestHttpFSServerWebServer
> 
>
> Key: HADOOP-15282
> URL: https://issues.apache.org/jira/browse/HADOOP-15282
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15282.001.patch
>
>
> As [~xiaochen] pointed out in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-15235?focusedCommentId=16375379&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16375379]
>  on HADOOP-15235, it broke {{TestHttpFSServerWebServer}}:
> {noformat}
> 2018-02-23 23:13:29,791 WARN  ServletHandler - /webhdfs/v1/
> java.lang.IllegalArgumentException: Empty key
>   at javax.crypto.spec.SecretKeySpec.<init>(SecretKeySpec.java:96)
>   at 
> org.apache.hadoop.security.authentication.util.Signer.computeSignature(Signer.java:93)
>   at 
> org.apache.hadoop.security.authentication.util.Signer.sign(Signer.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:587)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:745)
> java.lang.AssertionError: 
> Expected :500
> Actual   :200
>  
> {noformat}
> This only affects trunk because {{TestHttpFSServerWebServer}} doesn't exist 
> in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15288:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Ok, it's in.

> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
> URL: https://issues.apache.org/jira/browse/HADOOP-15288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/swift
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: HADOOP-15288-001.patch
>
>
> TestSwiftFileSystemBlockLocation doesn't compile after the switch to the SLF4J 
> APIs. One-line fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386409#comment-16386409
 ] 

Akira Ajisaka commented on HADOOP-15288:


+1, thank you Steve for the quick fix.

> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
> URL: https://issues.apache.org/jira/browse/HADOOP-15288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/swift
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15288-001.patch
>
>
> TestSwiftFileSystemBlockLocation doesn't compile after the switch to the SLF4J 
> APIs. One-line fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386402#comment-16386402
 ] 

genericqa commented on HADOOP-15288:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15288 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913075/HADOOP-15288-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 62db3786653d 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8110d6a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14256/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-openstack U: hadoop-tools/hadoop-openstack |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14256/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
>

[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-05 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386394#comment-16386394
 ] 

Rushabh S Shah commented on HADOOP-15234:
-

{quote} should we throw in the implementation of 
KeyProviderFactory#createProvider such as JavaKeyStoreProvider#createProvider 
and KMSClientProvider#createProvider
{quote}
How do you know what key provider it is trying to create?
Are you trying to say that we compare the passed scheme with all the schemes 
and try to find the best fit?
IMO it is too much spoon-feeding for administrators. We just have 3-4 schemes 
and it shouldn't be too difficult to figure out the right scheme.
bq. with a more specific exception message for invalid scheme like we do for the 
authority and port check in KMSClientProvider#createProvider?
In these checks, we know the scheme was {{kms://}} and there is something wrong 
with the authority and port.



> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386371#comment-16386371
 ] 

Xiaoyu Yao commented on HADOOP-15234:
-

For this specific case, i.e., an invalid scheme name for the provider URI, should we 
throw in the implementation of KeyProviderFactory#createProvider, such as 
JavaKeyStoreProvider#createProvider and KMSClientProvider#createProvider, with a 
more specific exception message for the invalid scheme, like we do for the authority 
and port check in KMSClientProvider#createProvider? 


> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386329#comment-16386329
 ] 

Steve Loughran commented on HADOOP-15267:
-

Yes, LGTM: that's the production code & the mock tests, just the integration 
one left.

Thanks for doing this.

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: HADOOP-15267-001.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size 
> to 5 MB, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in Put Object Part as stated in AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue for a better clarification of the 
> problem.
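A hedged sketch of what the fix implies: attach the same SSE-C key to every part upload. {{UploadPartRequest#setSSECustomerKey}} is the real AWS SDK call; the helper around it is invented for illustration, not the attached patch's code:

{code:java}
import com.amazonaws.services.s3.model.SSECustomerKey;
import com.amazonaws.services.s3.model.UploadPartRequest;

// Sketch: each part upload must carry the same customer-provided key
// material as the initiate-multipart-upload request, per the AWS docs.
final class SseCPartUpload {
  static UploadPartRequest withSseC(UploadPartRequest part,
      String base64CustomerKey) {
    if (base64CustomerKey != null) {
      part.setSSECustomerKey(new SSECustomerKey(base64CustomerKey));
    }
    return part;
  }
}
{code}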



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15274) Move hadoop-openstack to slf4j

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386322#comment-16386322
 ] 

Steve Loughran commented on HADOOP-15274:
-

openstack has stopped compiling on trunk for me; filed HADOOP-15288 to cover 
it. Please review as soon as possible. Thanks.

> Move hadoop-openstack to slf4j
> --
>
> Key: HADOOP-15274
> URL: https://issues.apache.org/jira/browse/HADOOP-15274
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/swift
>Reporter: Steve Loughran
>Assignee: fang zhenyi
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15274.001.patch, HADOOP-15274.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15288:

Attachment: HADOOP-15288-001.patch

> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
> URL: https://issues.apache.org/jira/browse/HADOOP-15288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/swift
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15288-001.patch
>
>
> TestSwiftFileSystemBlockLocation doesn't compile after the switch to the SLF4J 
> APIs. One-line fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15288:

Assignee: Steve Loughran
  Status: Patch Available  (was: Open)

> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
> URL: https://issues.apache.org/jira/browse/HADOOP-15288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/swift
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15288-001.patch
>
>
> TestSwiftFileSystemBlockLocation doesn't compile after the switch to the SLF4J 
> APIs. One-line fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15284) Docker launch fails when user private filecache directory is missing

2018-03-05 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15284:

Affects Version/s: 3.1.0
  Summary: Docker launch fails when user private filecache 
directory is missing  (was: Could not determine real path of mount)

ContainerLocalizer, which is run for every user-specific localization (i.e.: 
PRIVATE and APPLICATION visibility), creates both the 
usercache/_user_/filecache and usercache/_user_/appcache directories whenever 
it runs (see ContainerLocalizer#initDirs).

If this directory is missing then I'm wondering if this is a case where 
_nothing_ was localized for this user, not just PRIVATE but also no APPLICATION 
visibility resources (i.e.: only public resources or no resources at all).  The 
only reason this would have worked before YARN-7815 is because the container 
executor creates the container work directory which exists under the 
usercache/_user_ directory, and that's what it used to mount before the changes 
in YARN-7815.

> Docker launch fails when user private filecache directory is missing
> 
>
> Key: HADOOP-15284
> URL: https://issues.apache.org/jira/browse/HADOOP-15284
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Priority: Major
>
> Docker container is failing to launch in trunk.  The root cause is:
> {code}
> [COMPINSTANCE sleeper-1 : container_1520032931921_0001_01_20]: 
> [2018-03-02 23:26:09.196]Exception from container-launch.
> Container id: container_1520032931921_0001_01_20
> Exit code: 29
> Exception message: image: hadoop/centos:latest is trusted in hadoop registry.
> Could not determine real path of mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache'
> Could not determine real path of mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache'
> Invalid docker mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache:/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache',
>  realpath=/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache
> Error constructing docker command, docker error code=12, error 
> message='Invalid docker mount'
> Shell output: main : command provided 4
> main : run as user is hbase
> main : requested yarn user is hbase
> Creating script paths...
> Creating local dirs...
> [2018-03-02 23:26:09.240]Diagnostic message from attempt 0 : [2018-03-02 
> 23:26:09.240]
> [2018-03-02 23:26:09.240]Container exited with a non-zero exit code 29.
> [2018-03-02 23:26:39.278]Could not find 
> nmPrivate/application_1520032931921_0001/container_1520032931921_0001_01_20//container_1520032931921_0001_01_20.pid
>  in any of the directories
> [COMPONENT sleeper]: Failed 11 times, exceeded the limit - 10. Shutting down 
> now...
> {code}
> The filecache cannot be mounted because it doesn't exist.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386301#comment-16386301
 ] 

Steve Loughran commented on HADOOP-15288:
-

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-openstack: Compilation failure
[ERROR] 
/Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBlockLocation.java:[52,8]
 no suitable method found for info(org.apache.hadoop.fs.BlockLocation)
[ERROR] method org.slf4j.Logger.info(java.lang.String) is not applicable
[ERROR] (argument mismatch; org.apache.hadoop.fs.BlockLocation cannot be 
converted to java.lang.String)
[ERROR] method org.slf4j.Logger.info(java.lang.String,java.lang.Object...) is 
not applicable
[ERROR] (argument mismatch; org.apache.hadoop.fs.BlockLocation cannot be 
converted to java.lang.String)
[ERROR] method 
org.slf4j.Logger.info(org.slf4j.Marker,java.lang.String,java.lang.Object...) is 
not applicable
[ERROR] (argument mismatch; org.apache.hadoop.fs.BlockLocation cannot be 
converted to org.slf4j.Marker)
[ERROR] -> [Help 1]
[ERROR] 
{code}

The old API had an overload of LOG.info(Object); this needs to move to 
info("{}", object).
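A minimal illustration of that one-line migration (the class scaffolding is invented for the example):

{code:java}
import org.apache.hadoop.fs.BlockLocation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// The commons-logging API accepted LOG.info(Object); SLF4J does not, so
// the call goes through a format string instead.
final class Slf4jMigration {
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jMigration.class);

  static void log(BlockLocation location) {
    // LOG.info(location);     // compiled against commons-logging only
    LOG.info("{}", location);  // SLF4J equivalent
  }
}
{code}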

> TestSwiftFileSystemBlockLocation doesn't compile
> 
>
> Key: HADOOP-15288
> URL: https://issues.apache.org/jira/browse/HADOOP-15288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/swift
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Critical
>
> TestSwiftFileSystemBlockLocation doesn't compile after the switch to the SLF4J 
> APIs. One-line fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15288) TestSwiftFileSystemBlockLocation doesn't compile

2018-03-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15288:
---

 Summary: TestSwiftFileSystemBlockLocation doesn't compile
 Key: HADOOP-15288
 URL: https://issues.apache.org/jira/browse/HADOOP-15288
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, fs/swift
Affects Versions: 3.2.0
Reporter: Steve Loughran


TestSwiftFileSystemBlockLocation doesn't compile after the switch to the SLF4J 
APIs. One-line fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386266#comment-16386266
 ] 

genericqa commented on HADOOP-15267:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15267 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913062/HADOOP-15267-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 65c8a5884b38 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8110d6a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14255/testReport/ |
| Max. process+thread count | 348 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14255/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> 

[jira] [Commented] (HADOOP-15284) Could not determine real path of mount

2018-03-05 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386208#comment-16386208
 ] 

Jason Lowe commented on HADOOP-15284:
-

Looks like this was caused by YARN-7815.  The user's directory that was mounted 
before is always going to be there because the container executor creates the 
underlying container directory, but the user's filecache directory for 
resources with PRIVATE visibility may not be there.

One straightforward fix is to have the container executor ensure the user's 
filecache directory is present when launching Docker containers, but there may 
be cleaner alternatives.
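A hedged Java sketch of that straightforward fix, purely for illustration (the real container executor is native code, and the nm-local-dir layout here is assumed):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: ensure the user's private filecache directory exists before it
// is handed to Docker as a mount source; an empty directory is a valid
// mount and avoids the "Could not determine real path of mount" failure.
final class FilecacheMountGuard {
  static Path ensureFilecacheDir(String nmLocalDir, String user)
      throws IOException {
    Path filecache = Paths.get(nmLocalDir, "usercache", user, "filecache");
    if (!Files.isDirectory(filecache)) {
      Files.createDirectories(filecache);
    }
    return filecache;
  }
}
{code}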

> Could not determine real path of mount
> --
>
> Key: HADOOP-15284
> URL: https://issues.apache.org/jira/browse/HADOOP-15284
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Docker container is failing to launch in trunk.  The root cause is:
> {code}
> [COMPINSTANCE sleeper-1 : container_1520032931921_0001_01_20]: 
> [2018-03-02 23:26:09.196]Exception from container-launch.
> Container id: container_1520032931921_0001_01_20
> Exit code: 29
> Exception message: image: hadoop/centos:latest is trusted in hadoop registry.
> Could not determine real path of mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache'
> Could not determine real path of mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache'
> Invalid docker mount 
> '/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache:/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache',
>  realpath=/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache
> Error constructing docker command, docker error code=12, error 
> message='Invalid docker mount'
> Shell output: main : command provided 4
> main : run as user is hbase
> main : requested yarn user is hbase
> Creating script paths...
> Creating local dirs...
> [2018-03-02 23:26:09.240]Diagnostic message from attempt 0 : [2018-03-02 
> 23:26:09.240]
> [2018-03-02 23:26:09.240]Container exited with a non-zero exit code 29.
> [2018-03-02 23:26:39.278]Could not find 
> nmPrivate/application_1520032931921_0001/container_1520032931921_0001_01_20//container_1520032931921_0001_01_20.pid
>  in any of the directories
> [COMPONENT sleeper]: Failed 11 times, exceeded the limit - 10. Shutting down 
> now...
> {code}
> The filecache cannot be mounted because it doesn't exist.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15286) Remove unused imports from TestKMSWithZK.java

2018-03-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386203#comment-16386203
 ] 

Ajay Kumar commented on HADOOP-15286:
-

[~ajisakaa], thanks for the review and commit!

> Remove unused imports from TestKMSWithZK.java
> -
>
> Key: HADOOP-15286
> URL: https://issues.apache.org/jira/browse/HADOOP-15286
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0, 2.10.0
>
> Attachments: HADOOP-15286.000.patch
>
>
> There are 30+ unused imports in TestKMSWithZK.java. Let's clean them up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Anis Elleuch (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386201#comment-16386201
 ] 

Anis Elleuch commented on HADOOP-15267:
---

[~ste...@apache.org]: I updated the patch with the correct name and made the 
changes you requested (I hope all of them). It doesn't include the integration 
tests yet, but I just wanted a review of this progress first.

I ran hadoop aws tests ({{cd hadoop-tools/hadoop-aws; mvn test}}) using my AWS 
S3 bucket vadmeste-hadoop in region us-east-1.


> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: HADOOP-15267-001.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size 
> to 5 MB, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in Put Object Part as stated in AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue for a better clarification of the 
> problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Anis Elleuch (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anis Elleuch updated HADOOP-15267:
--
Attachment: (was: hadoop-fix.patch)

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: HADOOP-15267-001.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size 
> to 5 MB, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in Put Object Part as stated in AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue for a better clarification of the 
> problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-883) Create fsck tool for S3 file system

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-883.
---
Resolution: Won't Fix

Given this JIRA is 11 years old, time to mark it as a won't-fix.

> Create fsck tool for S3 file system
> ---
>
> Key: HADOOP-883
> URL: https://issues.apache.org/jira/browse/HADOOP-883
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 0.10.1
>Reporter: Tom White
>Priority: Minor
>
> An fsck tool for S3 would help diagnose data problems and also collect 
> statistics on the use of an S3 volume.
> The existing 'bin/hadoop fsck' invocation should be extended to support S3 
> (currently it supports only HDFS) rather than adding another command. It 
> should be possible to do this by extracting the filesystem from the path 
> (required first argument) and delegating to the relevant fsck tool.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Anis Elleuch (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anis Elleuch updated HADOOP-15267:
--
Attachment: HADOOP-15267-001.patch

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: HADOOP-15267-001.patch, hadoop-fix.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size 
> to 5 MB, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in Put Object Part as stated in AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue for a better clarification of the 
> problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386178#comment-16386178
 ] 

Hudson commented on HADOOP-13761:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13769 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13769/])
HADOOP-13761. S3Guard: implement retries for DDB failures and (stevel: rev 
8110d6a0d59e7dc2ddb25fa424fab188c5e9ce35)
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentS3Object.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InconsistentAmazonS3Client.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractCommitITest.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AInconsistency.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
* (edit) hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3GuardExistsRetryPolicy.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestDynamoDBMetadataStoreScale.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/FailureInjectionPolicy.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOpContext.java


> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761-013.patch, 
> HADOOP-13761.001.patch, HADOOP-13761.002.patch, HADOOP-13761.003.patch, 
> HADOOP-13761.004.patch, HADOOP-15183-013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than wait indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.
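A hedged sketch of the bounded-retry shape this describes, with a maximum retry duration so callers get an error eventually (the constants and names are illustrative only, not the committed code):

{code:java}
import java.io.IOException;

// Sketch: retry an operation on IOException with exponential backoff,
// but give up once a configurable maximum duration has passed.
final class BoundedRetry {
  interface Op<T> { T run() throws IOException; }

  static <T> T invoke(Op<T> op, long maxDurationMs) throws IOException {
    long deadline = System.currentTimeMillis() + maxDurationMs;
    long backoffMs = 100;
    while (true) {
      try {
        return op.run();
      } catch (IOException e) {
        if (System.currentTimeMillis() + backoffMs > deadline) {
          throw e;                 // maximum retry duration exceeded
        }
        try {
          Thread.sleep(backoffMs);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw e;
        }
        backoffMs = Math.min(backoffMs * 2, 5_000L);
      }
    }
  }
}
{code}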



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: co

[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-05 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386162#comment-16386162
 ] 

Rushabh S Shah commented on HADOOP-15234:
-

[~zhenyi]: Sorry for asking you to make 2-3 revisions for such a simple patch,
but I am not a big fan of using Preconditions here:
we are creating a string even if we are not going to use it.
We can just add a basic null check instead of {{Preconditions}} and then create 
the string for the exception only if the keyprovider is null.
Also, let's wait for Xiao to come back and see whether he is comfortable pushing 
this patch w/o unit tests.
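A hedged sketch of that basic null check (the method and message are invented for illustration, not the patch's code):

{code:java}
import org.apache.hadoop.crypto.key.KeyProvider;

// Sketch: unlike a Preconditions call with a concatenated message, the
// exception string is only built on the failure path.
final class KeyProviderGuard {
  static KeyProvider checkProvider(KeyProvider provider, String uri) {
    if (provider == null) {
      throw new IllegalStateException(
          "No KeyProvider was created for '" + uri
              + "'; check the key provider URI scheme");
    }
    return provider;
  }
}
{code}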

> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Committed to branch-3.1, which makes this what should be the final big S3Guard 
change there.

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761-013.patch, 
> HADOOP-13761.001.patch, HADOOP-13761.002.patch, HADOOP-13761.003.patch, 
> HADOOP-13761.004.patch, HADOOP-15183-013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than wait indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.
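
For a rough illustration of the duration-bounded retry idea above (a
hypothetical helper, not the S3A/S3Guard implementation):

{code:java}
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical sketch: retry an operation until it succeeds or a maximum
// duration elapses, then surface the last error to the caller.
public class BoundedRetry {
  static <T> T retryForAtMost(long maxMillis, long intervalMillis,
      Callable<T> operation) throws Exception {
    long deadline = System.currentTimeMillis() + maxMillis;
    while (true) {
      try {
        return operation.call();
      } catch (IOException e) {
        if (System.currentTimeMillis() >= deadline) {
          throw e;                      // bounded: fail eventually
        }
        Thread.sleep(intervalMillis);   // fixed backoff, then retry
      }
    }
  }
}
{code}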



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386098#comment-16386098
 ] 

genericqa commented on HADOOP-13761:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
13s{color} | {color:red} Docker failed to build yetus/hadoop:d4cc50f. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913046/HADOOP-13761-013.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14254/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761-013.patch, 
> HADOOP-13761.001.patch, HADOOP-13761.002.patch, HADOOP-13761.003.patch, 
> HADOOP-13761.004.patch, HADOOP-15183-013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than wait indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386097#comment-16386097
 ] 

Steve Loughran commented on HADOOP-13761:
-

+1 committing. I see I used the wrong JIRA name on the -013 patch; resubmitting 
it with the correct name just to avoid confusion.

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761-013.patch, 
> HADOOP-13761.001.patch, HADOOP-13761.002.patch, HADOOP-13761.003.patch, 
> HADOOP-13761.004.patch, HADOOP-15183-013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than wait indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

Attachment: HADOOP-13761-013.patch

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761-013.patch, 
> HADOOP-13761.001.patch, HADOOP-13761.002.patch, HADOOP-13761.003.patch, 
> HADOOP-13761.004.patch, HADOOP-15183-013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than wait indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386091#comment-16386091
 ] 

genericqa commented on HADOOP-13761:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-tools/hadoop-aws: The patch generated 0 new + 
15 unchanged - 1 fixed = 15 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-13761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913033/HADOOP-15183-013.patch
 |
| Optional Tests |  asflicense  findbugs  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  checkstyle  |
| uname | Linux 80e1525b0b25 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e8c5be6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14252/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14252/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386087#comment-16386087
 ] 

genericqa commented on HADOOP-15277:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15277 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913034/HADOOP-15277-001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  |
| uname | Linux 2cbdbc4f6b99 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e8c5be6 |
| maven | version: Apache Maven 3.3.9 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14253/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14253/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15277-001.patch
>
>
> when using the default logs, I get told off by beanutils
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> This is a distraction.
> I propose to raise the log level to ERROR for that class in log4j.properties



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386058#comment-16386058
 ] 

genericqa commented on HADOOP-15271:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-mapreduce-examples in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15271 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913022/HADOOP-15271.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 67e564275217 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e8c5be6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results 

[jira] [Commented] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Anis Elleuch (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386049#comment-16386049
 ] 

Anis Elleuch commented on HADOOP-15267:
---

Thanks [~ste...@apache.org],

I am going to do that. Meanwhile, it looks like branch-3.1 currently produces 
compilation errors when running the tests, so I am going to work against master 
to go faster and then see what the next steps should be.

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: hadoop-fix.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size 
> to 5 MB, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in the Put Object Part request, as required by the AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> A patch that clarifies the problem is attached to this issue.
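
For context, a sketch of what the quoted AWS requirement means at the SDK
level, using AWS SDK for Java v1 classes (bucket, key, upload id, and file
names are illustrative; this is not the attached patch):

{code:java}
SSECustomerKey sseKey = new SSECustomerKey(base64EncodedKey);

// Encryption is declared when the multipart upload is initiated...
InitiateMultipartUploadRequest init =
    new InitiateMultipartUploadRequest(bucket, key)
        .withSSECustomerKey(sseKey);

// ...and the same key material must be repeated on every part upload,
// which is the step the report says hadoop-aws was missing.
UploadPartRequest part = new UploadPartRequest()
    .withBucketName(bucket)
    .withKey(key)
    .withUploadId(uploadId)
    .withPartNumber(1)
    .withFile(partFile)
    .withSSECustomerKey(sseKey);
{code}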



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386033#comment-16386033
 ] 

Steve Loughran commented on HADOOP-15277:
-

Patch 001: moves the log level to WARN. This is what is already used in 
hadoop-common-project/hadoop-common/src/main/conf/log4j.properties and in my 
local Hadoop conf, so I'm happy with it.
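
For reference, a sketch of the log4j.properties line this implies (the fully
qualified package name is assumed from the log output, not copied from the
patch):

{code}
# Quieten commons-beanutils introspection chatter: WARN hides the
# INFO-level "Error when creating PropertyDescriptor" message.
log4j.logger.org.apache.commons.beanutils=WARN
{code}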

> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15277-001.patch
>
>
> when using the default logs, I get told off by beanutils
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> This is a distraction.
> I propose to raise the log level to ERROR for that class in log4j.properties



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15277:

Status: Patch Available  (was: Open)

> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15277-001.patch
>
>
> when using the default logs, I get told off by beanutils
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> This is a distraction.
> I propose to raise the log level to ERROR for that class in log4j.properties



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15277:

Attachment: HADOOP-15277-001.patch

> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15277-001.patch
>
>
> when using the default logs, I get told off by beanutils
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> This is a distraction.
> I propose to raise the log level to ERROR for that class in log4j.properties



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386026#comment-16386026
 ] 

Steve Loughran commented on HADOOP-15183:
-

+1
committed; thanks for fixing this up!

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10, read-only to the 
> caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

Status: Patch Available  (was: Open)

patch 013; patch 012 with findbugs told to STFU on that warning. Silent locally.

Reviewed the patch and LGTM. No double retry now.

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761.001.patch, 
> HADOOP-13761.002.patch, HADOOP-13761.003.patch, HADOOP-13761.004.patch, 
> HADOOP-15183-013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than wait indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

Status: Open  (was: Patch Available)

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761.001.patch, 
> HADOOP-13761.002.patch, HADOOP-13761.003.patch, HADOOP-13761.004.patch, 
> HADOOP-15183-013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than wait indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

Attachment: HADOOP-15183-013.patch

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761-012.patch, HADOOP-13761.001.patch, 
> HADOOP-13761.002.patch, HADOOP-13761.003.patch, HADOOP-13761.004.patch, 
> HADOOP-15183-013.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than wait indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15267:

Target Version/s: 3.1.0

> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Assignee: Anis Elleuch
>Priority: Critical
> Attachments: hadoop-fix.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size 
> to 5 MB, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in the Put Object Part request, as required by the AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> A patch that clarifies the problem is attached to this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15267) S3A multipart upload fails when SSE-C encryption is enabled

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16385986#comment-16385986
 ] 

Steve Loughran commented on HADOOP-15267:
-

Checkstyle is line width == 82; need to look at that to see if the code looks 
better with it, in which case we can ignore it.

Test failures are legitimate NPEs in the new code

{code}
[ERROR] 
testTaskMultiFileUploadFailure[0](org.apache.hadoop.fs.s3a.commit.staging.TestStagingCommitter)
  Time elapsed: 0.14 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.setOptionalUploadPartRequestParameters(S3AFileSystem.java:2610)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1567)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$uploadPart$8(WriteOperationHelper.java:474)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:314)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:231)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:123)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.uploadPart(WriteOperationHelper.java:471)
at 
org.apache.hadoop.fs.s3a.commit.CommitOperations.uploadFileToPendingCommit(CommitOperations.java:477)
at 
org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.lambda$commitTaskInternal$4(StagingCommitter.java:698)
at 
org.apache.hadoop.fs.s3a.commit.Tasks$Builder.runSingleThreaded(Tasks.java:165)
at org.apache.hadoop.fs.s3a.commit.Tasks$Builder.run(Tasks.java:150)
at 
org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.commitTaskInternal(StagingCommitter.java:690)
at 
org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.commitTask(StagingCommitter.java:635)
at 
org.apache.hadoop.fs.s3a.commit.staging.TestStagingCommitter.lambda$testTaskMultiFileUploadFailure$3(TestStagingCommitter.java:427)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:491)
at 
org.apache.hadoop.fs.s3a.commit.staging.TestStagingCommitter.testTaskMultiFileUploadFailure(TestStagingCommitter.java:423)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

[~vadmeste]: I need to draw your attention to the hadoop-aws [patch submission 
policy|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md].
 Nobody's patches get reviewed until the submitter declares which S3 endpoint 
they've run all the hadoop-aws integration tests against; Jenkins only runs the 
unit tests.

Here it's one of the mock tests, so while the patch may work in production, the 
mock S3FS may need some tweaks to handle the setup, which means 
{{MockS3AFileSystem}} is going to need some attention. 

This is what I suggest:
# ignore my recommendation to move the change into 
{{WriteOperationsHelper.newUploadPartRequest()}}, as that's running outside the 
FS...you'd need to add more entry points into S3AFileSystem and wire them up.
# make {{setOptionalUploadPartRequestParameters}} protected, add javadocs, etc.
# in {{MockS3AFileSystem}}, make it a no-op (see the sketch after this comment).
# ...after that the failing tests should work...
# then it's time to worry about the integration tests.

This is an important patch and it is ready to go in apart from those tests, but 
yes, we need the test fixup & something new to verify the problem is not only 
fixed, but never going to come back. 
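
A sketch of steps 2 and 3 above, assuming the method keeps the signature
implied by the stack trace (illustrative, not the final patch):

{code:java}
// In S3AFileSystem: widen visibility so subclasses can override (step 2).
protected void setOptionalUploadPartRequestParameters(
    UploadPartRequest request) {
  // production code attaches the SSE parameters to each part request
}

// In MockS3AFileSystem: no-op override (step 3), so the mock-based unit
// tests don't touch encryption state the mock never initializes.
@Override
protected void setOptionalUploadPartRequestParameters(
    UploadPartRequest request) {
  // deliberately empty for tests
}
{code}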




> S3A multipart upload fails when SSE-C encryption is enabled
> ---
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Typ

[jira] [Assigned] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15277:
---

Assignee: Steve Loughran

> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> when using the default logs, I get told off by beanutils
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> This is a distraction.
> I propose to raise the log level to ERROR for that class in log4j.properties



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-05 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16385982#comment-16385982
 ] 

Takanobu Asanuma commented on HADOOP-15271:
---

I confirmed that {{mvn javadoc:javadoc}} succeeds with java9 if the latest 
patch and HADOOP-15287 are applied.

> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-common: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]   ^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> [ERROR]^
> [ERROR] 
> /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652:
>  error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted 
> as a result.
> {noformat}
> JDK9 JavaDoc cannot treat non-ascii characters due to 
> https://bugs.openjdk.java.net/browse/JDK-8188649.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15100) Configuration#Resource constructor change broke Hive tests

2018-03-05 Thread Daniel Voros (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16385981#comment-16385981
 ] 

Daniel Voros commented on HADOOP-15100:
---

Issues caused by disabling the resolution of system properties will be tracked 
here: HIVE-18319.

> Configuration#Resource constructor change broke Hive tests
> --
>
> Key: HADOOP-15100
> URL: https://issues.apache.org/jira/browse/HADOOP-15100
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: Xiao Chen
>Priority: Critical
>
> In CDH's C6 rebased testing, the following Hive tests started failing:
> {noformat}
> org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie
> org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie
> org.apache.hive.minikdc.TestHiveAuthFactory.org.apache.hive.minikdc.TestHiveAuthFactory
> org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp
> org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp
> org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs
> org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs
> org.apache.hive.minikdc.TestJdbcWithMiniKdc.org.apache.hive.minikdc.TestJdbcWithMiniKdc
> org.apache.hive.minikdc.TestJdbcWithMiniKdc.org.apache.hive.minikdc.TestJdbcWithMiniKdc
> org.apache.hive.minikdc.TestHs2HooksWithMiniKdc.org.apache.hive.minikdc.TestHs2HooksWithMiniKdc
> org.apache.hive.minikdc.TestHs2HooksWithMiniKdc.org.apache.hive.minikdc.TestHs2HooksWithMiniKdc
> org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc
> org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc
> org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary
> org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary
> org.apache.hive.minikdc.TestMiniHiveKdc.testLogin
> org.apache.hive.minikdc.TestMiniHiveKdc.testLogin
> org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore
> org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore
> org.apache.hadoop.hive.ql.TestMetaStoreLimitPartitionRequest.testQueryWithInWithFallbackToORM
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testSelectThriftSerializeInTasks
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEmptyResultsetThriftSerializeInTasks
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation2
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testJoinThriftSerializeInTasks
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConcurrentStatements
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testFloatCast2DoubleThriftSerializeInTasks
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEnableThriftSerializeInTasks
> org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementParallel
> {noformat}
> The exception is
> {noformat}
> java.lang.ExceptionInInitializerError: null
>   at sun.security.krb5.Config.getRealmFromDNS(Config.java:1102)
>   at sun.security.krb5.Config.getDefaultRealm(Config.java:987)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:110)
>   at 
> org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:332)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:317)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:261)
>   at 
> org.apache.hadoop.conf.Configuration$Reso

[jira] [Commented] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-05 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16385979#comment-16385979
 ] 

Takanobu Asanuma commented on HADOOP-15271:
---

The π character breaks the build. Updated the patch.

> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) on project hadoop-common: MavenReportException: Error while generating Javadoc:
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652: error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted as a result.
> [ERROR]   ^
> [ERROR] /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652: error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted as a result.
> [ERROR]^
> [ERROR] /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652: error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted as a result.
> {noformat}
> JDK9 JavaDoc cannot handle non-ASCII source characters due to https://bugs.openjdk.java.net/browse/JDK-8188649.
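
As an illustration of the class of fix (a sketch with a hypothetical class, not the attached patch, which simply strips the multibyte characters), a Javadoc comment can keep a rendered dash while the source stays pure ASCII by using an HTML entity:

{code:java}
/**
 * Marked paths are deleted when the stream is closed automatically &#8212;
 * the entity renders as an em-dash in the generated HTML, but the .java
 * file itself contains only US-ASCII bytes, so javadoc no longer reports
 * "unmappable character" when run with a US-ASCII -encoding setting.
 */
public class AsciiSafeJavadocExample {
}
{code}

Alternatively, maven-javadoc-plugin's source-encoding settings could be pointed at UTF-8, though removing the characters (as this issue's title says) avoids relying on build configuration.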



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15271) Remove unicode multibyte characters from JavaDoc

2018-03-05 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15271:
--
Attachment: HADOOP-15271.2.patch

> Remove unicode multibyte characters from JavaDoc
> 
>
> Key: HADOOP-15271
> URL: https://issues.apache.org/jira/browse/HADOOP-15271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
> Environment: Java 9.0.4, Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15271.1.patch, HADOOP-15271.2.patch
>
>
> {{mvn package -Pdist,native -Dtar -DskipTests}} fails.
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) on project hadoop-common: MavenReportException: Error while generating Javadoc:
> [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in the packages
> [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
> [ERROR] are planned to be removed in a future JDK release. These
> [ERROR] components have been superseded by the new APIs in jdk.javadoc.doclet.
> [ERROR] Users are strongly recommended to migrate to the new APIs.
> [ERROR] /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652: error: unmappable character (0xE2) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted as a result.
> [ERROR]   ^
> [ERROR] /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652: error: unmappable character (0x80) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted as a result.
> [ERROR]^
> [ERROR] /home/centos/git/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:1652: error: unmappable character (0x94) for encoding US-ASCII
> [ERROR]* closed automatically ???these the marked paths will be deleted as a result.
> {noformat}
> JDK9 JavaDoc cannot handle non-ASCII source characters due to https://bugs.openjdk.java.net/browse/JDK-8188649.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15287) JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common

2018-03-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16385867#comment-16385867
 ] 

genericqa commented on HADOOP-15287:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m  6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 28m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m  9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 58s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 58s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15287 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12913000/HADOOP-15287.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  xml  |
| uname | Linux 50a3e7fa29fc 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e8c5be6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14250/testReport/ |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14250/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> JDK9 JavaDoc build fails due to one-character underscore identifiers in 
> hadoop-yarn-common
> --
>
> Key: HADOOP-15287
> URL: https://issues.apache.org/jira/browse/HADOOP-15287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentati
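
For readers unfamiliar with the failure mode in the title: Java 9 reserves the single underscore as a keyword (JLS 3.9), so members named just {{_}} fail the JDK9 javadoc run. A minimal sketch of the pattern (a hypothetical class, not the actual hadoop-yarn-common source):

{code:java}
public class UnderscoreIdentifierSketch {
  /** Compiles (with a warning) on JDK 8; rejected by JDK 9 javac and javadoc. */
  public UnderscoreIdentifierSketch _() {
    return this;
  }

  /** A JDK 9-safe rename: two underscores remain a legal identifier. */
  public UnderscoreIdentifierSketch __() {
    return this;
  }
}
{code}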
