[jira] [Commented] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens

2018-04-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440376#comment-16440376
 ] 

Xiao Chen commented on HADOOP-15390:


The failed test doesn't look related, and it passed locally.

> Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
> -
>
> Key: HADOOP-15390
> URL: https://issues.apache.org/jira/browse/HADOOP-15390
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15390.01.patch
>
>
> When looking at a recent issue with [~rkanter] and [~yufeigu], we found that 
> the RM log in a cluster was flooded by KMS token renewal errors below:
> {noformat}
> $ tail -9 hadoop-cmf-yarn-RESOURCEMANAGER.log
> 2018-04-11 11:34:09,367 WARN 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer: 
> keyProvider null cannot renew dt.
> 2018-04-11 11:34:09,367 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Renewed delegation-token= [Kind: kms-dt, Service: KMSIP:16000, Ident: 
> (kms-dt owner=user, renewer=yarn, realUser=, issueDate=1522192283334, 
> maxDate=1522797083334, sequenceNumber=15108613, masterKeyId=2674);exp=0; 
> apps=[]], for []
> 2018-04-11 11:34:09,367 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Renew Kind: kms-dt, Service: KMSIP:16000, Ident: (kms-dt owner=user, 
> renewer=yarn, realUser=, issueDate=1522192283334, maxDate=1522797083334, 
> sequenceNumber=15108613, masterKeyId=2674);exp=0; apps=[] in -1523446449367 
> ms, appId = []
> ...
> 2018-04-11 11:34:09,367 WARN 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer: 
> keyProvider null cannot renew dt.
> 2018-04-11 11:34:09,367 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Renewed delegation-token= [Kind: kms-dt, Service: KMSIP:16000, Ident: 
> (kms-dt owner=user, renewer=yarn, realUser=, issueDate=1522192283334, 
> maxDate=1522797083334, sequenceNumber=15108613, masterKeyId=2674);exp=0; 
> apps=[]], for []
> 2018-04-11 11:34:09,367 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Renew Kind: kms-dt, Service: KMSIP:16000, Ident: (kms-dt owner=user, 
> renewer=yarn, realUser=, issueDate=1522192283334, maxDate=1522797083334, 
> sequenceNumber=15108613, masterKeyId=2674);exp=0; apps=[] in -1523446449367 
> ms, appId = []
> {noformat}
> Further inspection shows the KMS IP is from another cluster. The RM predates 
> HADOOP-14445, so it needs to read the key provider from config. The config 
> rightfully doesn't have the other cluster's KMS configured.
> Although HADOOP-14445 will make this a non-issue by creating the provider 
> from the token's service, we should fix 2 things here:
> - The KMS token renewer should throw instead of returning 0. Returning 0 when 
> unable to renew should be considered a bug in the renewer.
> - Yarn RM's {{DelegationTokenRenewer}} service should validate the returned 
> value and not go into this busy loop.
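
The two proposed fixes can be sketched together. This is a hedged illustration only: the class and method names below are simplified stand-ins, not the real {{KMSClientProvider$KMSTokenRenewer}} or {{DelegationTokenRenewer}} APIs.

```java
import java.io.IOException;

// Illustrative sketch of the two fixes above (hypothetical names, not
// the real Hadoop classes): the renewer throws instead of returning 0,
// and the caller validates the returned expiry before scheduling.
class RenewerSketch {
    private final Object keyProvider; // null when no KMS is configured

    RenewerSketch(Object keyProvider) {
        this.keyProvider = keyProvider;
    }

    // Fix 1: throw when renewal is impossible, rather than returning 0.
    long renew(String token) throws IOException {
        if (keyProvider == null) {
            throw new IOException("keyProvider null cannot renew " + token);
        }
        // Pretend the token is now valid for another 24 hours.
        return System.currentTimeMillis() + 24L * 60 * 60 * 1000;
    }

    // Fix 2: the caller validates the returned expiry instead of
    // scheduling an immediate retry when the expiry is in the past.
    static long delayUntilNextRenew(long newExpiry, long now)
            throws IOException {
        long delay = newExpiry - now;
        if (delay <= 0) {
            throw new IOException(
                "renewal returned a non-future expiry: " + newExpiry);
        }
        return delay;
    }
}
```

With the old behavior (return 0), the computed delay is hugely negative, which is exactly the `in -1523446449367 ms` busy loop visible in the log above.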



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-04-16 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440322#comment-16440322
 ] 

Takanobu Asanuma commented on HADOOP-15304:
---

Hi [~ajisakaa], what do you think about upgrading the overall version of 
commons-lang3 in a separate JIRA? If you are OK with that, I'd like to work on 
the task.

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}
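
For reference, the JDK 9+ replacement API lives in the {{jdk.javadoc.doclet}} package. A minimal doclet against the new API looks roughly like the sketch below (illustrative only, not the hadoop-annotations patch itself); note that option validation moves from the old static {{validOptions(String[][], DocErrorReporter)}} hook to {{getSupportedOptions()}}.

```java
import java.util.Collections;
import java.util.Locale;
import java.util.Set;
import javax.lang.model.SourceVersion;
import jdk.javadoc.doclet.Doclet;
import jdk.javadoc.doclet.DocletEnvironment;
import jdk.javadoc.doclet.Reporter;

// Minimal doclet against the JDK 9+ jdk.javadoc.doclet API, which
// replaces the removed com.sun.tools.doclets packages.
public class MinimalDoclet implements Doclet {
    @Override
    public void init(Locale locale, Reporter reporter) {
        // No initialization needed for this sketch.
    }

    @Override
    public String getName() {
        return "MinimalDoclet";
    }

    @Override
    public Set<? extends Option> getSupportedOptions() {
        return Collections.emptySet(); // no custom options
    }

    @Override
    public SourceVersion getSupportedSourceVersion() {
        return SourceVersion.latest();
    }

    @Override
    public boolean run(DocletEnvironment environment) {
        // Real doclets walk environment.getIncludedElements() here.
        return true;
    }
}
```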






[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-04-16 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440246#comment-16440246
 ] 

Konstantin Shvachko commented on HADOOP-15205:
--

Thanks [~eddyxu] for looking into this. I suggest we update the how-to-release 
page for now, but we should fix it so that "mvn deploy -Psign -DskipTests" 
works. I am definitely not qualified to deal with Maven builds. Do you think 
you can track this, Eddy?

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 that 
> artifact has not been present at Maven Central. The last release which had 
> source attachments / javadocs was 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> This does not seem to be limited to mapreduce; the same change is present for 
> yarn-common as well:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also for hadoop-common:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/






[jira] [Commented] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode

2018-04-16 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440241#comment-16440241
 ] 

Aaron Fabbri commented on HADOOP-14756:
---

Thanks for the v2 patch!
{quote}
Right now the MetadataStore#getDiagnostics javadoc describes that the 
information from the returned map is for debugging only. Is this information 
still valid? If it is, is it really a good place to store capabilities 
then?
{quote}
I guess we are expanding the definition to "for debugging and testing only". We 
can add an explicit "getProperty()" API later on if we want. For now I am OK 
with using the getDiagnostics method.
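
A hedged sketch of probing a capability through the diagnostics map, as discussed above. The key name and the {{getDiagnostics()}} shape mirror the patch under review but are simplified here; this is not the real hadoop-aws API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a test probes a MetadataStore capability through its
// diagnostics map, defaulting to "not supported" when the key is
// absent (older stores won't publish it).
class DiagnosticsProbe {
    static final String PERSISTS_AUTHORITATIVE_BIT =
        "persist.authoritative.bit"; // hypothetical key name

    // Stand-in for MetadataStore#getDiagnostics.
    static Map<String, String> getDiagnostics(boolean persistsAuthBit) {
        Map<String, String> map = new HashMap<>();
        map.put("description", "in-memory store");
        map.put(PERSISTS_AUTHORITATIVE_BIT, Boolean.toString(persistsAuthBit));
        return map;
    }

    // A test would Assume (skip) rather than fail on a false result.
    static boolean persistsAuthoritativeBit(Map<String, String> diag) {
        return Boolean.parseBoolean(
            diag.getOrDefault(PERSISTS_AUTHORITATIVE_BIT, "false"));
    }
}
```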

{quote}
I use a final class with a private constructor for MetadataStoreCapabilities to 
store constants, because I've seen that this is the general way to store 
constants in the project (e.g. org.apache.hadoop.fs.s3a.Constants). Is this 
sufficient, or should I use an interface, where all String constants are public 
static final by default? Which choice is preferred?
{quote}
Not sure it matters, but I slightly prefer {{final class}}, as it seems to 
capture intent better (not an interface to be implemented).
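
The constants-holder pattern preferred above is just a final class with a private constructor, so it can be neither instantiated nor implemented. The constant name follows the patch; the class body here is illustrative only.

```java
// Minimal sketch of a constants holder: final (no subclassing) with a
// private constructor (no instantiation), unlike an interface, which
// invites accidental "implements" for constant inheritance.
final class MetadataStoreCapabilitiesSketch {
    public static final String PERSISTS_AUTHORITATIVE_BIT =
        "persist.authoritative.bit"; // hypothetical key name

    private MetadataStoreCapabilitiesSketch() {
        // no instances; this class only holds constants
    }
}
```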

{noformat}
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
@@ -1154,6 +1154,8 @@ private static void checkPathMetadata(PathMetadata meta) {
   map.put(READ_CAPACITY, throughput.getReadCapacityUnits().toString());
   map.put(WRITE_CAPACITY, throughput.getWriteCapacityUnits().toString());
   map.put(TABLE, desc.toString());
+  map.put(MetadataStoreCapabilities.PERSISTS_AUTHORITATIVE_BIT,
+Boolean.toString(true));
{noformat}
This is actually {{false}}, and you will know your test is correct when it 
fails if this one is set to {{true}}.

{noformat}
+
+  @Test
+  public void testListChildrenAuthoritative() throws IOException {
+Assume.assumeFalse("Missing elements should not be allowed "
++ "to run this test.", allowMissing());
{noformat}
Correct, but it would be nice to run this test on LocalMetadataStore, since it 
is currently the only one that persists the authoritative bit.  You might want 
to switch this to something like {{Assume.assumeFalse(fs.hasMetadataStore())}}.

{noformat}
+Assume.assumeTrue("MetadataStore should be capable for authoritative "
++ "storage of directories to run this test.",
+isMetadataStoreAuthoritative());
+
+setupListStatus();
+
+DirListingMetadata dirMeta = ms.listChildren(strToPath("/a1/b1"));
+dirMeta.setAuthoritative(true);
+dirMeta.put(makeFileStatus("/a1/b1/file_new", 100));
+ms.put(dirMeta);
+
+assertTrue(dirMeta.isAuthoritative());
+dirMeta = ms.listChildren(strToPath("/a1/b1"));
+assertListingsEqual(dirMeta.getListing(), "/a1/b1/file1", "/a1/b1/file2",
+"/a1/b1/c1", "/a1/b1/file_new");
+  }
{noformat}
I think if you move that last {{assertTrue(dirMeta.isAuthoritative())}} to the 
end of the function it will be correct.  As is, you are not testing whether 
the bit was persisted by the {{MetadataStore}}.
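
The ordering point can be illustrated with a toy store (hypothetical types, not the real {{MetadataStore}}): asserting on the local object before the round trip proves nothing; the assertion has to run on the listing read back from the store.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration: only a read-back after put() reveals whether the
// store actually persisted the authoritative bit.
class AuthBitRoundTripSketch {
    private final Map<String, Boolean> authBits = new HashMap<>();
    private final boolean persistsAuthBit;

    AuthBitRoundTripSketch(boolean persistsAuthBit) {
        this.persistsAuthBit = persistsAuthBit;
    }

    void put(String path, boolean authoritative) {
        // A store that does not persist the bit silently drops it.
        authBits.put(path, persistsAuthBit && authoritative);
    }

    boolean listChildrenAuthoritative(String path) {
        return authBits.getOrDefault(path, false);
    }
}
```

With a store that drops the bit, an assertion on the in-memory flag before the round trip would still pass, while an assertion on {{listChildrenAuthoritative()}} correctly fails, which mirrors the suggested reordering of the test.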

> S3Guard: expose capability query in MetadataStore and add tests of 
> authoritative mode
> -
>
> Key: HADOOP-14756
> URL: https://issues.apache.org/jira/browse/HADOOP-14756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14756.001.patch, HADOOP-14756.002.patch
>
>
> {{MetadataStoreTestBase.testListChildren}} would be improved with the ability 
> to query the features offered by the store, and the outcome of {{put()}}, so 
> as to probe the correctness of the authoritative mode:
> # Add a predicate to the MetadataStore interface, 
> {{supportsAuthoritativeDirectories()}} or similar
> # If #1 is true, assert that the directory is fully cached after changes
> # Add an "isNew" flag to MetadataStore.put(DirListingMetadata); use it to 
> verify when changes are made






[jira] [Commented] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens

2018-04-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440223#comment-16440223
 ] 

genericqa commented on HADOOP-15390:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
36s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 42s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919282/HADOOP-15390.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 36aeb5e6b71b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 

[jira] [Commented] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens

2018-04-16 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439992#comment-16439992
 ] 

Xiao Chen commented on HADOOP-15390:


Patch 1 does the 2 changes described. Added a dummy line in TestKMS for 
pre-commit coverage.

> Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
> -
>
> Key: HADOOP-15390
> URL: https://issues.apache.org/jira/browse/HADOOP-15390
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15390.01.patch
>
>






[jira] [Updated] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens

2018-04-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15390:
---
Status: Patch Available  (was: Open)

> Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
> -
>
> Key: HADOOP-15390
> URL: https://issues.apache.org/jira/browse/HADOOP-15390
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15390.01.patch
>
>






[jira] [Updated] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens

2018-04-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15390:
---
Attachment: HADOOP-15390.01.patch

> Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
> -
>
> Key: HADOOP-15390
> URL: https://issues.apache.org/jira/browse/HADOOP-15390
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15390.01.patch
>
>






[jira] [Assigned] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens

2018-04-16 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HADOOP-15390:
--

Assignee: Xiao Chen

> Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
> -
>
> Key: HADOOP-15390
> URL: https://issues.apache.org/jira/browse/HADOOP-15390
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
>






[jira] [Created] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens

2018-04-16 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-15390:
--

 Summary: Yarn RM logs flooded by DelegationTokenRenewer trying to 
renew KMS tokens
 Key: HADOOP-15390
 URL: https://issues.apache.org/jira/browse/HADOOP-15390
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiao Chen


When looking at a recent issue with [~rkanter] and [~yufeigu], we found that 
the RM log in a cluster was flooded by KMS token renewal errors below:
{noformat}
$ tail -9 hadoop-cmf-yarn-RESOURCEMANAGER.log
2018-04-11 11:34:09,367 WARN 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer: keyProvider 
null cannot renew dt.
2018-04-11 11:34:09,367 INFO 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: 
Renewed delegation-token= [Kind: kms-dt, Service: KMSIP:16000, Ident: (kms-dt 
owner=user, renewer=yarn, realUser=, issueDate=1522192283334, 
maxDate=1522797083334, sequenceNumber=15108613, masterKeyId=2674);exp=0; 
apps=[]], for []
2018-04-11 11:34:09,367 INFO 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: 
Renew Kind: kms-dt, Service: KMSIP:16000, Ident: (kms-dt owner=user, 
renewer=yarn, realUser=, issueDate=1522192283334, maxDate=1522797083334, 
sequenceNumber=15108613, masterKeyId=2674);exp=0; apps=[] in -1523446449367 ms, 
appId = []
...
2018-04-11 11:34:09,367 WARN 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer: keyProvider 
null cannot renew dt.
2018-04-11 11:34:09,367 INFO 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: 
Renewed delegation-token= [Kind: kms-dt, Service: KMSIP:16000, Ident: (kms-dt 
owner=user, renewer=yarn, realUser=, issueDate=1522192283334, 
maxDate=1522797083334, sequenceNumber=15108613, masterKeyId=2674);exp=0; 
apps=[]], for []
2018-04-11 11:34:09,367 INFO 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: 
Renew Kind: kms-dt, Service: KMSIP:16000, Ident: (kms-dt owner=user, 
renewer=yarn, realUser=, issueDate=1522192283334, maxDate=1522797083334, 
sequenceNumber=15108613, masterKeyId=2674);exp=0; apps=[] in -1523446449367 ms, 
appId = []
{noformat}

Further inspection shows the KMS IP is from another cluster. The RM predates 
HADOOP-14445, so it needs to read the key provider from configuration, and the 
config rightfully doesn't have the other cluster's KMS configured.

Although HADOOP-14445 will make this a non-issue by creating the provider from 
the token service, we should fix two things here:
- The KMS token renewer should throw instead of returning 0. Returning 0 when 
unable to renew should be considered a bug in the renewer.
- Yarn RM's {{DelegationTokenRenewer}} service should validate the returned 
expiration time and not go into this busy loop.
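A minimal sketch of the two fixes. The class and method shapes below are illustrative stand-ins, not Hadoop's actual {{KMSClientProvider}} or {{DelegationTokenRenewer}} APIs; the point is the behavior change, not the real signatures:

```java
import java.io.IOException;

// Sketch only: hypothetical stand-ins for the renewer and the RM-side loop.
public class TokenRenewalSketch {

    /** Fix 1: the renewer throws when it has no provider, instead of returning 0. */
    static long renew(Object keyProvider, long now) throws IOException {
        if (keyProvider == null) {
            // Old behavior: return 0, which the caller treated as an expiration
            // time in the distant past and immediately retried -> busy loop.
            throw new IOException("keyProvider null cannot renew dt");
        }
        return now + 24 * 60 * 60 * 1000L; // pretend renewal: expires in 24h
    }

    /** Fix 2: the RM-side loop validates the returned expiration time. */
    static long scheduleNextRenewal(long newExpiration, long now) {
        if (newExpiration <= now) {
            throw new IllegalStateException(
                "Renewal returned non-future expiration " + newExpiration);
        }
        return newExpiration - now; // delay until the next renewal attempt
    }

    public static void main(String[] args) throws IOException {
        long now = 1_523_446_449_000L;

        // A null provider now fails fast instead of silently returning 0.
        boolean threw = false;
        try {
            renew(null, now);
        } catch (IOException expected) {
            threw = true;
        }
        System.out.println("threw=" + threw);

        // A sane expiration yields a positive delay, never the negative
        // "-1523446449367 ms" delay seen in the flooded log above.
        long delay = scheduleNextRenewal(renew(new Object(), now), now);
        System.out.println("delayPositive=" + (delay > 0));
    }
}
```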



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15369) Avoid usage of ${project.version} in parent poms

2018-04-16 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-15369:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Thanks [~elek] for the contribution and all for the reviews. I've committed the 
patch to trunk. 

> Avoid usage of ${project.version} in parent poms
> 
>
> Key: HADOOP-15369
> URL: https://issues.apache.org/jira/browse/HADOOP-15369
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15369-trnk.001.patch
>
>
> hadoop-project/pom.xml and hadoop-project-dist/pom.xml use 
> _${project.version}_ variable in dependencyManagement and plugin dependencies.
> Unfortunately this does not work if we use a different version in a child 
> project, as the ${project.version} variable is resolved *after* inheritance.
> From [maven 
> doc|https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Project_Inheritance]:
> {quote}
> For example, to access the project.version variable, you would reference it 
> like so:
>   ${project.version}
> One factor to note is that these variables are processed after inheritance as 
> outlined above. This means that if a parent project uses a variable, then its 
> definition in the child, not the parent, will be the one eventually used.
> {quote}
> The community voted to keep ozone in-tree but use a different release cycle. 
> To achieve this we need a different version for selected subprojects, 
> therefore we can't use ${project.version} any more. 
>  
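The workaround this implies can be sketched as a POM fragment. The {{hadoop.version}} property name comes from the enforcer message mentioned in the comments; the dependency shown and the surrounding layout are illustrative assumptions, not the actual hadoop-project/pom.xml:

```xml
<!-- Illustrative fragment only; the real hadoop-project/pom.xml differs. -->
<properties>
  <!-- Pinned explicitly instead of relying on ${project.version}, which is
       resolved after inheritance and so takes the child project's version. -->
  <hadoop.version>3.2.0-SNAPSHOT</hadoop.version>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With this pattern, a child project (such as an Ozone module on its own release cycle) can declare a different project version without changing which Hadoop artifacts the parent's dependencyManagement resolves to.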



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15369) Avoid usage of ${project.version} in parent poms

2018-04-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439645#comment-16439645
 ] 

Xiaoyu Yao edited comment on HADOOP-15369 at 4/16/18 5:16 PM:
--

I agree that we need to document this. [~bharatviswa], the 
maven-enforcer-plugin message added with this patch already contains the 
information about the new required variable {{hadoop.version}}.

[~elek], can you add the additional release step to the Hadoop wiki as a 
follow-up, if it has not been added yet? 


was (Author: xyao):
I agree that we need to document this. [~bharatviswa], the 
maven-enforcer-plugin message added with this patch already contains the 
information about new required variable {{hadoop.version}}.

[~elek], can you add the additional step to release to the hadoop wiki if this 
has not been added yet? 

> Avoid usage of ${project.version} in parent poms
> 
>
> Key: HADOOP-15369
> URL: https://issues.apache.org/jira/browse/HADOOP-15369
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15369-trnk.001.patch
>
>
> hadoop-project/pom.xml and hadoop-project-dist/pom.xml use 
> _${project.version}_ variable in dependencyManagement and plugin dependencies.
> Unfortunately this does not work if we use a different version in a child 
> project, as the ${project.version} variable is resolved *after* inheritance.
> From [maven 
> doc|https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Project_Inheritance]:
> {quote}
> For example, to access the project.version variable, you would reference it 
> like so:
>   ${project.version}
> One factor to note is that these variables are processed after inheritance as 
> outlined above. This means that if a parent project uses a variable, then its 
> definition in the child, not the parent, will be the one eventually used.
> {quote}
> The community voted to keep ozone in-tree but use a different release cycle. 
> To achieve this we need a different version for selected subprojects, 
> therefore we can't use ${project.version} any more. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15369) Avoid usage of ${project.version} in parent poms

2018-04-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439645#comment-16439645
 ] 

Xiaoyu Yao edited comment on HADOOP-15369 at 4/16/18 4:16 PM:
--

I agree that we need to document this. [~bharatviswa], the 
maven-enforcer-plugin message added with this patch already contains the 
information about the new required variable {{hadoop.version}}.

[~elek], can you add the additional release step to the Hadoop wiki if this 
has not been added yet? 


was (Author: xyao):
I agree that we need to document this. [~bharatviswa], the 
maven-enforcer-plugin message already contains the information that 
{{hadoop.version}} is required.

[~elek], can you add the additional step to release to the hadoop wiki if this 
has not been added yet? 

> Avoid usage of ${project.version} in parent poms
> 
>
> Key: HADOOP-15369
> URL: https://issues.apache.org/jira/browse/HADOOP-15369
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15369-trnk.001.patch
>
>
> hadoop-project/pom.xml and hadoop-project-dist/pom.xml use 
> _${project.version}_ variable in dependencyManagement and plugin dependencies.
> Unfortunately this does not work if we use a different version in a child 
> project, as the ${project.version} variable is resolved *after* inheritance.
> From [maven 
> doc|https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Project_Inheritance]:
> {quote}
> For example, to access the project.version variable, you would reference it 
> like so:
>   ${project.version}
> One factor to note is that these variables are processed after inheritance as 
> outlined above. This means that if a parent project uses a variable, then its 
> definition in the child, not the parent, will be the one eventually used.
> {quote}
> The community voted to keep ozone in-tree but use a different release cycle. 
> To achieve this we need a different version for selected subprojects, 
> therefore we can't use ${project.version} any more. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15369) Avoid usage of ${project.version} in parent poms

2018-04-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439645#comment-16439645
 ] 

Xiaoyu Yao edited comment on HADOOP-15369 at 4/16/18 4:15 PM:
--

I agree that we need to document this. [~bharatviswa], the 
maven-enforcer-plugin message already contains the information that 
{{hadoop.version}} is required.

[~elek], can you add the additional step to release to the hadoop wiki if this 
has not been added yet? 


was (Author: xyao):
I agree with [~bharatviswa] that we need to document this. [~elek] , can you 
add the additional step to release after this patch in the hadoop wiki if this 
has not been added yet? 

> Avoid usage of ${project.version} in parent poms
> 
>
> Key: HADOOP-15369
> URL: https://issues.apache.org/jira/browse/HADOOP-15369
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15369-trnk.001.patch
>
>
> hadoop-project/pom.xml and hadoop-project-dist/pom.xml use 
> _${project.version}_ variable in dependencyManagement and plugin dependencies.
> Unfortunately this does not work if we use a different version in a child 
> project, as the ${project.version} variable is resolved *after* inheritance.
> From [maven 
> doc|https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Project_Inheritance]:
> {quote}
> For example, to access the project.version variable, you would reference it 
> like so:
>   ${project.version}
> One factor to note is that these variables are processed after inheritance as 
> outlined above. This means that if a parent project uses a variable, then its 
> definition in the child, not the parent, will be the one eventually used.
> {quote}
> The community voted to keep ozone in-tree but use a different release cycle. 
> To achieve this we need a different version for selected subprojects, 
> therefore we can't use ${project.version} any more. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15369) Avoid usage of ${project.version} in parent poms

2018-04-16 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439645#comment-16439645
 ] 

Xiaoyu Yao commented on HADOOP-15369:
-

I agree with [~bharatviswa] that we need to document this. [~elek] , can you 
add the additional step to release after this patch in the hadoop wiki if this 
has not been added yet? 

> Avoid usage of ${project.version} in parent poms
> 
>
> Key: HADOOP-15369
> URL: https://issues.apache.org/jira/browse/HADOOP-15369
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15369-trnk.001.patch
>
>
> hadoop-project/pom.xml and hadoop-project-dist/pom.xml use 
> _${project.version}_ variable in dependencyManagement and plugin dependencies.
> Unfortunately this does not work if we use a different version in a child 
> project, as the ${project.version} variable is resolved *after* inheritance.
> From [maven 
> doc|https://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Project_Inheritance]:
> {quote}
> For example, to access the project.version variable, you would reference it 
> like so:
>   ${project.version}
> One factor to note is that these variables are processed after inheritance as 
> outlined above. This means that if a parent project uses a variable, then its 
> definition in the child, not the parent, will be the one eventually used.
> {quote}
> The community voted to keep ozone in-tree but use a different release cycle. 
> To achieve this we need a different version for selected subprojects, 
> therefore we can't use ${project.version} any more. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15388) LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc files

2018-04-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439616#comment-16439616
 ] 

genericqa commented on HADOOP-15388:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 34m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919217/HADOOP-15388.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8ae4e27196d4 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 896b473 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14491/testReport/ |
| Max. process+thread count | 1503 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14491/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc 
> files
> 

[jira] [Updated] (HADOOP-15386) FileSystemContractBaseTest#testMoveFileUnderParent duplicates testRenameFileToSelf

2018-04-16 Thread Igor Dvorzhak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15386:
---
Description: 
The {{FileSystemContractBaseTest#testMoveFileUnderParent}} test copy-pastes the 
{{testRenameFileToSelf}} test, i.e. it tests copying to self instead of copying 
under the parent.

The attached patch fixes {{testMoveFileUnderParent}} to test copying under the 
parent.

  was:
{{FileSystemContractBaseTest#testMoveFileUnderParent}} test copy-pastes 
{{#testRenameFileToSelf}} test, i.e. it tests copying to self instead of 
copying under parent.

Attached patch fixes {{#testMoveFileUnderParent}} to test copying under parent.


> FileSystemContractBaseTest#testMoveFileUnderParent duplicates 
> testRenameFileToSelf
> --
>
> Key: HADOOP-15386
> URL: https://issues.apache.org/jira/browse/HADOOP-15386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs
>Affects Versions: 3.0.1
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
> Attachments: HADOOP-15386.001.patch
>
>
> {{FileSystemContractBaseTest#testMoveFileUnderParent}} test copy-pastes 
> {{testRenameFileToSelf}} test, i.e. it tests copying to self instead of 
> copying under parent.
> Attached patch fixes {{testMoveFileUnderParent}} to test copying under parent.
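The distinction being fixed can be illustrated with plain {{java.io.File}} as a hedged analogy; this is not the actual {{FileSystemContractBaseTest}} code, just the two operations side by side:

```java
import java.io.File;
import java.io.IOException;

// Analogy only: renaming a file to itself is a no-op on POSIX filesystems,
// while "move under parent" renames it to a *different* name under the same
// parent directory. A copy-pasted test doing the former never exercises the latter.
public class RenameUnderParentSketch {
    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                "parent-" + System.nanoTime());
        if (!dir.mkdir()) throw new IOException("mkdir failed");
        File src = new File(dir, "file.txt");
        if (!src.createNewFile()) throw new IOException("create failed");

        // What the copy-pasted test effectively checked: rename to self.
        System.out.println("renamedToSelf=" + src.renameTo(src));

        // What testMoveFileUnderParent should check: rename to a new name
        // under the same parent, so the source is gone afterwards.
        File dst = new File(dir, "file-moved.txt");
        System.out.println("movedUnderParent=" + src.renameTo(dst));
        System.out.println("srcGone=" + !src.exists());
        System.out.println("dstExists=" + dst.exists());
    }
}
```

The self-rename leaves the file untouched, so a test asserting only on the source path can never fail for a broken move implementation.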



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15386) FileSystemContractBaseTest#testMoveFileUnderParent duplicates testRenameFileToSelf

2018-04-16 Thread Igor Dvorzhak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15386:
---
Description: 
{{FileSystemContractBaseTest#testMoveFileUnderParent}} test copy-pastes 
{{#testRenameFileToSelf}} test, i.e. it tests copying to self instead of 
copying under parent.

Attached patch fixes {{#testMoveFileUnderParent}} to test copying under parent.

  was:
{{FileSystemContractBaseTest#testMoveFileUnderParent}} test copy-pastes 
{{#testRenameFileToSelf}} test, i.e. it tests copying to self instead of 
copying under parent.

Attached patch fixes {{testMoveFileUnderParent}} to test copying under parent.


> FileSystemContractBaseTest#testMoveFileUnderParent duplicates 
> testRenameFileToSelf
> --
>
> Key: HADOOP-15386
> URL: https://issues.apache.org/jira/browse/HADOOP-15386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs
>Affects Versions: 3.0.1
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
> Attachments: HADOOP-15386.001.patch
>
>
> {{FileSystemContractBaseTest#testMoveFileUnderParent}} test copy-pastes 
> {{#testRenameFileToSelf}} test, i.e. it tests copying to self instead of 
> copying under parent.
> Attached patch fixes {{#testMoveFileUnderParent}} to test copying under 
> parent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15386) FileSystemContractBaseTest#testMoveFileUnderParent duplicates testRenameFileToSelf

2018-04-16 Thread Igor Dvorzhak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15386:
---
Description: 
{{FileSystemContractBaseTest#testMoveFileUnderParent}} test copy-pastes 
{{#testRenameFileToSelf}} test, i.e. it tests copying to self instead of 
copying under parent.

Attached patch fixes {{testMoveFileUnderParent}} to test copying under parent.

  was:
{{testMoveFileUnderParent}} test in {{FileSystemContractBaseTest}} class 
copy-pastes {{testRenameFileToSelf}} test, i.e. it tests copying to self 
instead of copying under parent.

Attached patch fixes {{testMoveFileUnderParent}} to test copying under parent.


> FileSystemContractBaseTest#testMoveFileUnderParent duplicates 
> testRenameFileToSelf
> --
>
> Key: HADOOP-15386
> URL: https://issues.apache.org/jira/browse/HADOOP-15386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs
>Affects Versions: 3.0.1
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
> Attachments: HADOOP-15386.001.patch
>
>
> {{FileSystemContractBaseTest#testMoveFileUnderParent}} test copy-pastes 
> {{#testRenameFileToSelf}} test, i.e. it tests copying to self instead of 
> copying under parent.
> Attached patch fixes {{testMoveFileUnderParent}} to test copying under parent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15386) FileSystemContractBaseTest#testMoveFileUnderParent duplicates testRenameFileToSelf

2018-04-16 Thread Igor Dvorzhak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15386:
---
Summary: FileSystemContractBaseTest#testMoveFileUnderParent duplicates 
testRenameFileToSelf  (was: testMoveFileUnderParent duplicates 
testRenameFileToSelf)

> FileSystemContractBaseTest#testMoveFileUnderParent duplicates 
> testRenameFileToSelf
> --
>
> Key: HADOOP-15386
> URL: https://issues.apache.org/jira/browse/HADOOP-15386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs
>Affects Versions: 3.0.1
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
> Attachments: HADOOP-15386.001.patch
>
>
> {{testMoveFileUnderParent}} test in {{FileSystemContractBaseTest}} class 
> copy-pastes {{testRenameFileToSelf}} test, i.e. it tests copying to self 
> instead of copying under parent.
> Attached patch fixes {{testMoveFileUnderParent}} to test copying under parent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15389) Hadoop contains both Jackson 2.9.4 and 2.7.8 jars

2018-04-16 Thread Dmitry Chuyko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Chuyko resolved HADOOP-15389.

Resolution: Invalid

Sorry, this cannot be reproduced anymore in a clean workspace.

> Hadoop contains both Jackson 2.9.4 and 2.7.8 jars
> -
>
> Key: HADOOP-15389
> URL: https://issues.apache.org/jira/browse/HADOOP-15389
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Dmitry Chuyko
>Priority: Blocker
>
> I built the hadoop-3.2.0-SNAPSHOT distribution from scratch. The resulting 
> package has the following in hadoop-3.2.0-SNAPSHOT/share/hadoop/hdfs/lib/:
> jackson-annotations-2.7.8.jar
> jackson-annotations-2.9.4.jar
> jackson-core-2.7.8.jar
> jackson-core-2.9.4.jar
> jackson-databind-2.7.8.jar
> jackson-databind-2.9.4.jar
> As a result, the DataNode does not start, with the following error:
> java.lang.NoSuchFieldError: ACCEPT_CASE_INSENSITIVE_PROPERTIES
>  at 
> com.fasterxml.jackson.databind.deser.BeanDeserializerBase.createContextual(BeanDeserializerBase.java:747)
>  at 
> com.fasterxml.jackson.databind.DeserializationContext.handleSecondaryContextualization(DeserializationContext.java:682)
>  at 
> com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:482)
>  at 
> com.fasterxml.jackson.databind.ObjectReader._prefetchRootDeserializer(ObjectReader.java:1938)
>  at com.fasterxml.jackson.databind.ObjectReader.<init>(ObjectReader.java:189)
>  at 
> com.fasterxml.jackson.databind.ObjectMapper._newReader(ObjectMapper.java:658)
>  at 
> com.fasterxml.jackson.databind.ObjectMapper.readerFor(ObjectMapper.java:3517)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.<init>(FsVolumeImpl.java:109)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImplBuilder.build(FsVolumeImplBuilder.java:76)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addVolume(FsDatasetImpl.java:426)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:316)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1719)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1665)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15386) testMoveFileUnderParent duplicates testRenameFileToSelf

2018-04-16 Thread Igor Dvorzhak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15386:
---
Description: 
{{testMoveFileUnderParent}} test in {{FileSystemContractBaseTest}} class 
copy-pastes {{testRenameFileToSelf}} test, i.e. it tests copying to self 
instead of copying under parent.

Attached patch fixes {{testMoveFileUnderParent}} to test copying under parent.

  was:
`testMoveFileUnderParent` test in `FileSystemContractBaseTest` class 
copy-pastes `testRenameFileToSelf` test, i.e. it tests copying to self instead 
of copying under parent.

Attached patch fixes `testMoveFileUnderParent` to test copying under parent.


> testMoveFileUnderParent duplicates testRenameFileToSelf
> ---
>
> Key: HADOOP-15386
> URL: https://issues.apache.org/jira/browse/HADOOP-15386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs
>Affects Versions: 3.0.1
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
> Attachments: HADOOP-15386.001.patch
>
>
> {{testMoveFileUnderParent}} test in {{FileSystemContractBaseTest}} class 
> copy-pastes {{testRenameFileToSelf}} test, i.e. it tests copying to self 
> instead of copying under parent.
> Attached patch fixes {{testMoveFileUnderParent}} to test copying under parent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15386) testMoveFileUnderParent duplicates testRenameFileToSelf

2018-04-16 Thread Igor Dvorzhak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15386:
---
Description: 
`testMoveFileUnderParent` test in `FileSystemContractBaseTest` class 
copy-pastes `testRenameFileToSelf` test, i.e. it tests copying to self instead 
of copying under parent.

Attached patch fixes `testMoveFileUnderParent` to test copying under parent.

  was:
`testMoveFileUnderParent` test in `FileSystemContractBaseTest` class 
copy-pastes `testRenameFileToSelf` test - it tests copying to self instead of 
copying under parent.

Attached patch fixes `testMoveFileUnderParent` to test copying under parent.


> testMoveFileUnderParent duplicates testRenameFileToSelf
> ---
>
> Key: HADOOP-15386
> URL: https://issues.apache.org/jira/browse/HADOOP-15386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs
>Affects Versions: 3.0.1
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Minor
> Attachments: HADOOP-15386.001.patch
>
>
> `testMoveFileUnderParent` test in `FileSystemContractBaseTest` class 
> copy-pastes `testRenameFileToSelf` test, i.e. it tests copying to self 
> instead of copying under parent.
> Attached patch fixes `testMoveFileUnderParent` to test copying under parent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15389) Hadoop contains both Jackson 2.9.4 and 2.7.8 jars

2018-04-16 Thread Dmitry Chuyko (JIRA)
Dmitry Chuyko created HADOOP-15389:
--

 Summary: Hadoop contains both Jackson 2.9.4 and 2.7.8 jars
 Key: HADOOP-15389
 URL: https://issues.apache.org/jira/browse/HADOOP-15389
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.0, 3.2.0
Reporter: Dmitry Chuyko


I built the hadoop-3.2.0-SNAPSHOT distribution from scratch. The resulting 
package has the following in hadoop-3.2.0-SNAPSHOT/share/hadoop/hdfs/lib/:

jackson-annotations-2.7.8.jar
jackson-annotations-2.9.4.jar
jackson-core-2.7.8.jar
jackson-core-2.9.4.jar
jackson-databind-2.7.8.jar
jackson-databind-2.9.4.jar

As a result, the DataNode does not start, with the following error:

java.lang.NoSuchFieldError: ACCEPT_CASE_INSENSITIVE_PROPERTIES
 at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.createContextual(BeanDeserializerBase.java:747)
 at com.fasterxml.jackson.databind.DeserializationContext.handleSecondaryContextualization(DeserializationContext.java:682)
 at com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:482)
 at com.fasterxml.jackson.databind.ObjectReader._prefetchRootDeserializer(ObjectReader.java:1938)
 at com.fasterxml.jackson.databind.ObjectReader.<init>(ObjectReader.java:189)
 at com.fasterxml.jackson.databind.ObjectMapper._newReader(ObjectMapper.java:658)
 at com.fasterxml.jackson.databind.ObjectMapper.readerFor(ObjectMapper.java:3517)
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.<init>(FsVolumeImpl.java:109)
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImplBuilder.build(FsVolumeImplBuilder.java:76)
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addVolume(FsDatasetImpl.java:426)
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:316)
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
 at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1719)
 at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1665)
 at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
 at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
 at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
 at java.lang.Thread.run(Thread.java:748)
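Mixed-version jars like these can be caught mechanically by grouping jar file names by artifact id and flagging artifacts present in more than one version. A minimal sketch (plain Python, not part of Hadoop's build tooling; the `artifact-version.jar` naming pattern is an assumption, though it holds for the Jackson jars above):

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches names like "jackson-core-2.9.4.jar": everything up to the
# last "-<digit...>" is the artifact id, the rest is the version.
JAR_RE = re.compile(r"^(?P<artifact>.+?)-(?P<version>\d[\w.]*)\.jar$")

def duplicate_artifacts(lib_dir):
    """Return {artifact: sorted versions} for artifacts with >1 version."""
    versions = defaultdict(set)
    for jar in Path(lib_dir).glob("*.jar"):
        m = JAR_RE.match(jar.name)
        if m:
            versions[m.group("artifact")].add(m.group("version"))
    return {a: sorted(v) for a, v in versions.items() if len(v) > 1}
```

Pointed at `share/hadoop/hdfs/lib/`, this would report jackson-annotations, jackson-core and jackson-databind each present in both 2.7.8 and 2.9.4.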






[jira] [Updated] (HADOOP-15388) LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc files

2018-04-16 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15388:
--
Status: Patch Available  (was: Open)

> LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc 
> files
> ---
>
> Key: HADOOP-15388
> URL: https://issues.apache.org/jira/browse/HADOOP-15388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15388.01.patch
>
>
> ChecksumFileSystem#rename(Path, Path, Options.Rename...) is missing, and 
> FilterFileSystem does not take care of crc files. This leaves abandoned crc 
> files behind after a rename.
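ChecksumFileSystem keeps the checksum for a file in a sibling file named `.<name>.crc`, so a rename that moves only the data file strands that sidecar. A minimal local sketch of the rename-both behaviour the patch is after (plain Python, not the Hadoop code; `rename_with_crc` is a hypothetical helper, though the `.<name>.crc` naming convention is ChecksumFileSystem's real one):

```python
import os

def checksum_path(path):
    # ChecksumFileSystem-style sidecar name: ".<name>.crc" next to the file.
    d, name = os.path.split(path)
    return os.path.join(d, "." + name + ".crc")

def rename_with_crc(src, dst):
    # Hypothetical helper: move the data file and, if present, its
    # checksum sidecar, so no abandoned .crc file is left behind.
    os.replace(src, dst)
    src_crc = checksum_path(src)
    if os.path.exists(src_crc):
        os.replace(src_crc, checksum_path(dst))
```

Renaming only the data file, as the unfixed code paths do, would leave the old `.crc` file orphaned next to a path that no longer exists.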






[jira] [Updated] (HADOOP-15388) LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc files

2018-04-16 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15388:
--
Attachment: HADOOP-15388.01.patch

> LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc 
> files
> ---
>
> Key: HADOOP-15388
> URL: https://issues.apache.org/jira/browse/HADOOP-15388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15388.01.patch
>
>
> ChecksumFileSystem#rename(Path, Path, Options.Rename...) is missing, and 
> FilterFileSystem does not take care of crc files. This leaves abandoned crc 
> files behind after a rename.






[jira] [Moved] (HADOOP-15388) LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc files

2018-04-16 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor moved HDFS-13457 to HADOOP-15388:
--

Target Version/s: 3.1.0  (was: 3.1.0)
 Key: HADOOP-15388  (was: HDFS-13457)
 Project: Hadoop Common  (was: Hadoop HDFS)

> LocalFilesystem#rename(Path, Path, Options.Rename...) does not handle crc 
> files
> ---
>
> Key: HADOOP-15388
> URL: https://issues.apache.org/jira/browse/HADOOP-15388
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>
> ChecksumFileSystem#rename(Path, Path, Options.Rename...) is missing, and 
> FilterFileSystem does not take care of crc files. This leaves abandoned crc 
> files behind after a rename.






[jira] [Commented] (HADOOP-15378) Hadoop client unable to relogin because a remote DataNode has an incorrect krb5.conf

2018-04-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439204#comment-16439204
 ] 

Steve Loughran commented on HADOOP-15378:
-

bq. unfortunately current CDH5 doesn't have KDiag (I thought of backporting it 
but I forgot).

There's a self-contained version of KDiag designed to run against older Hadoop 
versions: https://github.com/steveloughran/kdiag

Grab it, build it against CDH, and share it with the support team. They'll 
appreciate it.

> Hadoop client unable to relogin because a remote DataNode has an incorrect 
> krb5.conf
> 
>
> Key: HADOOP-15378
> URL: https://issues.apache.org/jira/browse/HADOOP-15378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
> Environment: CDH5.8.3, Kerberized, Impala
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> This is a very weird bug.
> We received a report where a Hadoop client (Impala Catalog server) failed to 
> relogin and crashed every several hours. Initial indication suggested the 
> symptom matched HADOOP-13433.
> But after we patched HADOOP-13433 (as well as HADOOP-15143), Impala Catalog 
> server still kept crashing.
>  
> {noformat}
> W0114 05:49:24.676743 41444 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) 
> cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException):
>  Failure to initialize security context
> W0114 05:49:24.680363 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host2.example@example.com), remove and destroy it.
> W0114 05:49:24.680501 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host3.example@example.com), remove and destroy it.
> W0114 05:49:24.680593 41444 UserGroupInformation.java:1153] Warning, no 
> kerberos ticket found while attempting to renew ticket{noformat}
> The error “Failure to initialize security context” is suspicious here: 
> Catalogd was unable to log in because of a Kerberos issue. The JDK expects 
> the first Kerberos ticket of a principal to be a TGT; however, after this 
> failed login the first ticket was apparently no longer a TGT. The 
> HADOOP-13433 patch removed the principal’s other tickets because it assumes 
> the first ticket is the TGT, which was untrue in this case, so in the end it 
> removed all tickets.
> And then
> {noformat}
> W0114 05:49:24.681946 41443 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)]
> {noformat}
> The error “Failed to find any Kerberos tgt” is typically an indication that 
> the user’s Kerberos ticket has expired. However, that’s definitely not the 
> case here, since the ticket was only a little over 8 hours old.
> After we patched HADOOP-13433, the error handling code exhibited NPE, as 
> reported in HADOOP-15143.
>  
> {code:java}
> I0114 05:50:26.758565 6384 RetryInvocationHandler.java:148] Exception while 
> invoking listCachePools of class ClientNamenodeProtocolTranslatorPB over 
> host4.example.com/10.0.121.66:8020 after 2 fail over attempts. Trying to fail 
> over immediately. Java exception follows: java.io.IOException: Failed on 
> local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "host1.example.com/10.0.121.45"; destination host 
> is: "host4.example.com":8020; at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1506) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1439) at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  at com.sun.proxy.$Proxy9.listCachePools(Unknown Source) at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listCachePools(ClientNamenodeProtocolTranslatorPB.java:1261)
>  at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  at com.sun.proxy.$Proxy10.listCachePools(Unknown Source) at 
> org.apache.hadoop.hdfs.protocol.CachePoolIterator.makeRequest(CachePoolIterator.java:55)
>  at 

[jira] [Updated] (HADOOP-15387) Produce a shaded hadoop-cloudstorage JAR for applications to use

2018-04-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15387:

Summary: Produce a shaded hadoop-cloudstorage JAR for applications to use  
(was: Produce a shaded hadoop-cloudstorage JAR for downstream use)

> Produce a shaded hadoop-cloudstorage JAR for applications to use
> 
>
> Key: HADOOP-15387
> URL: https://issues.apache.org/jira/browse/HADOOP-15387
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
>  * Hadoop's dependency choices don't control downstream decisions
>  * there is little/no risk of downstream JAR changes breaking the Hadoop 
> bits they depend on
> This JAR would pull in the shaded hadoop-client JAR and the aws-sdk-bundle 
> JAR, neither of which would be unshaded (so yes, upgrading AWS SDKs would be 
> a bit risky, but double-shading a pre-shaded 30 MB JAR is excessive on 
> multiple levels).
> Metrics of success: Spark, Tez, etc. can pick it up and use it






[jira] [Updated] (HADOOP-15387) Produce a shaded hadoop-cloudstorage JAR for applications to use

2018-04-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15387:

Description: 
Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
 * Hadoop's dependency choices don't control downstream decisions
 * there is little/no risk of downstream JAR changes breaking the Hadoop bits 
they depend on

This JAR would pull in the shaded hadoop-client JAR and the aws-sdk-bundle 
JAR, neither of which would be unshaded (so yes, upgrading AWS SDKs would be a 
bit risky, but double-shading a pre-shaded 30 MB JAR is excessive on multiple 
levels).

Metrics of success: Spark, Tez, Flink etc. can pick it up and use it, and all 
are happy

  was:
Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
 * Hadoop's dependency choices don't control downstream decisions
 * there is little/no risk of downstream JAR changes breaking the Hadoop bits 
they depend on

This JAR would pull in the shaded hadoop-client JAR and the aws-sdk-bundle 
JAR, neither of which would be unshaded (so yes, upgrading AWS SDKs would be a 
bit risky, but double-shading a pre-shaded 30 MB JAR is excessive on multiple 
levels).

Metrics of success: Spark, Tez, etc. can pick it up and use it


> Produce a shaded hadoop-cloudstorage JAR for applications to use
> 
>
> Key: HADOOP-15387
> URL: https://issues.apache.org/jira/browse/HADOOP-15387
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
>  * Hadoop's dependency choices don't control downstream decisions
>  * there is little/no risk of downstream JAR changes breaking the Hadoop 
> bits they depend on
> This JAR would pull in the shaded hadoop-client JAR and the aws-sdk-bundle 
> JAR, neither of which would be unshaded (so yes, upgrading AWS SDKs would be 
> a bit risky, but double-shading a pre-shaded 30 MB JAR is excessive on 
> multiple levels).
> Metrics of success: Spark, Tez, Flink etc. can pick it up and use it, and 
> all are happy






[jira] [Commented] (HADOOP-15387) Produce a shaded hadoop-cloudstorage JAR for applications to use

2018-04-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439200#comment-16439200
 ] 

Steve Loughran commented on HADOOP-15387:
-

FYI [~bikassaha] [~devaraj] [~vanzin] [~fabbri] [~ajisakaa]

> Produce a shaded hadoop-cloudstorage JAR for applications to use
> 
>
> Key: HADOOP-15387
> URL: https://issues.apache.org/jira/browse/HADOOP-15387
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
>  * Hadoop's dependency choices don't control downstream decisions
>  * there is little/no risk of downstream JAR changes breaking the Hadoop 
> bits they depend on
> This JAR would pull in the shaded hadoop-client JAR and the aws-sdk-bundle 
> JAR, neither of which would be unshaded (so yes, upgrading AWS SDKs would be 
> a bit risky, but double-shading a pre-shaded 30 MB JAR is excessive on 
> multiple levels).
> Metrics of success: Spark, Tez, Flink etc. can pick it up and use it, and 
> all are happy






[jira] [Created] (HADOOP-15387) Produce a shaded hadoop-cloudstorage JAR for downstream use

2018-04-16 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15387:
---

 Summary: Produce a shaded hadoop-cloudstorage JAR for downstream 
use
 Key: HADOOP-15387
 URL: https://issues.apache.org/jira/browse/HADOOP-15387
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
Affects Versions: 3.1.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
 * Hadoop's dependency choices don't control downstream decisions
 * there is little/no risk of downstream JAR changes breaking the Hadoop bits 
they depend on

This JAR would pull in the shaded hadoop-client JAR and the aws-sdk-bundle 
JAR, neither of which would be unshaded (so yes, upgrading AWS SDKs would be a 
bit risky, but double-shading a pre-shaded 30 MB JAR is excessive on multiple 
levels).

Metrics of success: Spark, Tez, etc. can pick it up and use it






[jira] [Commented] (HADOOP-15386) testMoveFileUnderParent duplicates testRenameFileToSelf

2018-04-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439062#comment-16439062
 ] 

genericqa commented on HADOOP-15386:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15386 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919160/HADOOP-15386.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ea4f86edb24e 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 896b473 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14490/testReport/ |
| Max. process+thread count | 1355 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14490/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> testMoveFileUnderParent duplicates testRenameFileToSelf
> ---
>
> Key: HADOOP-15386
>