[jira] [Commented] (HADOOP-15775) [JDK9] Add missing javax.activation-api dependency

2018-10-03 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637797#comment-16637797
 ] 

Akira Ajisaka commented on HADOOP-15775:


Thanks [~tasanuma0829] for trying the patch.

05 patch:
* Added the third-party dependency to hadoop-common, because the dependency is 
mainly used by HttpServer2.java (in the hadoop-common module).
* Excluded the dependency from the hadoop-common compile scope in all modules 
(except hadoop-minicluster) to avoid leaking the dependency transitively.
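
The two bullets above can be sketched as pom.xml fragments. This is illustrative only; the version number and exact placement are assumptions, see the attached 05 patch for the real change:

```xml
<!-- hadoop-common/pom.xml: add the activation API, which JDK 9+ no longer
     resolves by default (the java.activation module was deprecated for
     removal from the JDK). -->
<dependency>
  <groupId>javax.activation</groupId>
  <artifactId>javax.activation-api</artifactId>
  <version>1.2.0</version>
  <scope>compile</scope>
</dependency>

<!-- Downstream module pom.xml: depend on hadoop-common but exclude the
     activation API so it does not leak to consumers transitively
     (hadoop-minicluster would keep it). -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <exclusion>
      <groupId>javax.activation</groupId>
      <artifactId>javax.activation-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```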

> [JDK9] Add missing javax.activation-api dependency
> --
>
> Key: HADOOP-15775
> URL: https://issues.apache.org/jira/browse/HADOOP-15775
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15775.01.patch, HADOOP-15775.02.patch, 
> HADOOP-15775.03.patch, HADOOP-15775.04.patch, HADOOP-15775.05.patch
>
>
> Many unit tests fail due to the missing java.activation module. This failure 
> can be fixed by adding javax.activation-api as a third-party dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15775) [JDK9] Add missing javax.activation-api dependency

2018-10-03 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15775:
---
Attachment: HADOOP-15775.05.patch







[jira] [Updated] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-15817:
---
Affects Version/s: 3.1.1

> Reuse Object Mapper in KMSJSONReader
> 
>
> Key: HADOOP-15817
> URL: https://issues.apache.org/jira/browse/HADOOP-15817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.1
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
>  Labels: performance
> Fix For: 2.10.0, 2.9.2, 3.0.4, 3.3.0, 3.1.2, 2.8.6
>
> Attachments: HADOOP-15817.001.patch
>
>
> Paying an expensive cost to construct the object mapper deserializer cache on every request.
>  
> {code:title=KMS Server Stack Trace}
> "qtp1926764753-117" #117 prio=5 os_prio=0 tid=0x0321c000 nid=0x1f0bd 
> runnable [0x2b4caabf7000]
>java.lang.Thread.State: RUNNABLE
> at 
> java.lang.reflect.Executable.sharedGetParameterAnnotations(Executable.java:553)
> at 
> java.lang.reflect.Constructor.getParameterAnnotations(Constructor.java:523)
> at 
> org.codehaus.jackson.map.introspect.AnnotatedClass._constructConstructor(AnnotatedClass.java:784)
> at 
> org.codehaus.jackson.map.introspect.AnnotatedClass.resolveCreators(AnnotatedClass.java:327)
> at 
> org.codehaus.jackson.map.introspect.BasicClassIntrospector.classWithCreators(BasicClassIntrospector.java:187)
> at 
> org.codehaus.jackson.map.introspect.BasicClassIntrospector.collectProperties(BasicClassIntrospector.java:157)
> at 
> org.codehaus.jackson.map.introspect.BasicClassIntrospector.forCreation(BasicClassIntrospector.java:119)
> at 
> org.codehaus.jackson.map.introspect.BasicClassIntrospector.forCreation(BasicClassIntrospector.java:16)
> at 
> org.codehaus.jackson.map.DeserializationConfig.introspectForCreation(DeserializationConfig.java:877)
> at 
> org.codehaus.jackson.map.deser.BasicDeserializerFactory.createMapDeserializer(BasicDeserializerFactory.java:430)
> at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider._createDeserializer(StdDeserializerProvider.java:380)
> at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCache2(StdDeserializerProvider.java:310)
> at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCacheValueDeserializer(StdDeserializerProvider.java:290)
> - locked <0x000752fbbc28> (a java.util.HashMap)
> at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider.findValueDeserializer(StdDeserializerProvider.java:159)
> at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider.findTypedValueDeserializer(StdDeserializerProvider.java:180)
> at 
> org.codehaus.jackson.map.ObjectMapper._findRootDeserializer(ObjectMapper.java:2829)
> at 
> org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2728)
> at 
> org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1909)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:52)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:35)
> at 
> com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
> at 
> com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> 

[jira] [Commented] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637745#comment-16637745
 ] 

Hudson commented on HADOOP-15817:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15112 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15112/])
HADOOP-15817. Reuse Object Mapper in KMSJSONReader. Contributed by Jonathan 
Eagles. (yqlin: rev 81f635f47f0737eb551bef1aa55afdf7b268253d)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
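
The committed change replaces a per-request ObjectMapper in KMSJSONReader with a single shared instance, so Jackson's deserializer cache (built via the reflection calls visible in the stack trace above) is constructed once. A minimal, self-contained sketch of the pattern follows; the class and method names are illustrative stand-ins, not the actual Hadoop code:

```java
// Pattern behind the fix: build one thread-safe parser object and reuse it,
// instead of constructing a new one per request. CostlyParser stands in for
// Jackson's ObjectMapper, whose internal deserializer cache only pays off if
// the mapper outlives a single request.
public class ReuseDemo {
    static final class CostlyParser {
        static int constructions = 0;          // counts expensive builds
        CostlyParser() { constructions++; }    // expensive step in the real ObjectMapper
        String parse(String json) { return json.trim(); }
    }

    // Shared instance: the expensive construction happens exactly once.
    private static final CostlyParser PARSER = new CostlyParser();

    static String read(String body) {
        // Before the patch, the equivalent of "new CostlyParser()" ran here
        // on every call, rebuilding the deserializer cache each time.
        return PARSER.parse(body);
    }
}
```

ObjectMapper is documented as thread-safe once configured, which is what makes sharing a single instance across request threads safe here.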



[jira] [Updated] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-15817:
---
Fix Version/s: 3.2.0


[jira] [Comment Edited] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637743#comment-16637743
 ] 

Yiqun Lin edited comment on HADOOP-15817 at 10/4/18 3:06 AM:
-

I have committed this to trunk, branch-3.2, branch-3.1, branch-3.0, branch-2, 
branch-2.8 and branch-2.9.

Thanks [~jeagles] for the contribution and thanks [~ajisakaa] for the review.


was (Author: linyiqun):
I have committed this to trunk, branch-3.1, branch-3.0, branch-2, branch-2.8 
and branch-2.9.

Thanks [~jeagles] for the contribution and thanks [~ajisakaa] for the review.


[jira] [Updated] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-15817:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.6
   3.1.2
   3.3.0
   3.0.4
   2.9.2
   2.10.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-3.1, branch-3.0, branch-2, branch-2.8 
and branch-2.9.

Thanks [~jeagles] for the contribution and thanks [~ajisakaa] for the review.


[jira] [Commented] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637729#comment-16637729
 ] 

Yiqun Lin commented on HADOOP-15817:


LGTM, +1. Committing this.


[jira] [Commented] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637682#comment-16637682
 ] 

Akira Ajisaka commented on HADOOP-15817:


+1

> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>  
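The fix pattern implied by the issue title — build the expensive object once and share it — can be illustrated with a self-contained sketch. {{ExpensiveMapper}} here is a hypothetical stand-in for Jackson's ObjectMapper (which is thread-safe once configured), not the actual KMS code:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReuseDemo {
    static final AtomicInteger constructions = new AtomicInteger();

    /** Hypothetical stand-in for org.codehaus.jackson.map.ObjectMapper. */
    static class ExpensiveMapper {
        ExpensiveMapper() { constructions.incrementAndGet(); }
        String read(String json) { return json.trim(); }
    }

    // Before the patch: each request builds a fresh mapper, repopulating the
    // deserializer cache every time (the hot path in the stack trace above).
    static String readPerCall(String json) {
        return new ExpensiveMapper().read(json);
    }

    // After: one shared instance, built once and reused by every request.
    static final ExpensiveMapper SHARED = new ExpensiveMapper();

    static String readShared(String json) {
        return SHARED.read(json);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) readShared(" {} ");
        System.out.println("constructions after shared reads: "
                + constructions.get());   // prints 1
        for (int i = 0; i < 1000; i++) readPerCall(" {} ");
        System.out.println("constructions after per-call reads: "
                + constructions.get());   // prints 1001
    }
}
```

The counter makes the cost difference visible: the shared path constructs the mapper exactly once, no matter how many requests arrive.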

[jira] [Commented] (HADOOP-15775) [JDK9] Add missing javax.activation-api dependency

2018-10-03 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637674#comment-16637674
 ] 

Takanobu Asanuma commented on HADOOP-15775:
---

Thanks for working on this, [~ajisakaa], and thanks for the review, [~apurtell].

These tests also fail due to {{ClassNotFoundException: 
javax.activation.DataSource}} with JDK 11+28.
{noformat}
org.apache.hadoop.crypto.key.kms.server.TestKMSWithZK
org.apache.hadoop.crypto.key.kms.server.TestKMS
org.apache.hadoop.yarn.server.federation.policies.manager.TestPriorityBroadcastPolicyManager
org.apache.hadoop.yarn.server.federation.policies.manager.TestHomePolicyManager
org.apache.hadoop.yarn.server.federation.policies.manager.TestWeightedLocalityPolicyManager
org.apache.hadoop.yarn.server.federation.policies.router.TestLoadBasedRouterPolicy
org.apache.hadoop.yarn.server.federation.policies.router.TestWeightedRandomRouterPolicy
org.apache.hadoop.yarn.server.federation.policies.router.TestPriorityRouterPolicy
org.apache.hadoop.yarn.server.federation.policies.router.TestRejectRouterPolicy
org.apache.hadoop.yarn.server.federation.policies.router.TestUniformRandomRouterPolicy
org.apache.hadoop.yarn.server.federation.policies.router.TestHashBasedRouterPolicy
org.apache.hadoop.yarn.server.federation.policies.amrmproxy.TestHomeAMRMProxyPolicy
org.apache.hadoop.yarn.server.federation.policies.amrmproxy.TestLocalityMulticastAMRMProxyPolicy
org.apache.hadoop.yarn.server.federation.policies.amrmproxy.TestRejectAMRMProxyPolicy
org.apache.hadoop.yarn.server.federation.policies.amrmproxy.TestBroadcastAMRMProxyFederationPolicy
org.apache.hadoop.yarn.server.federation.policies.TestRouterPolicyFacade
org.apache.hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
org.apache.hadoop.yarn.server.timeline.webapp.TestTimelineWebServicesWithSSL
org.apache.hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1
{noformat}
Seems we also need to add the dependency here:
{noformat}
 hadoop-common-project/hadoop-kms/pom.xml
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
{noformat}
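A minimal sketch of the dependency block that would go into each of the POMs listed above, mirroring what the patch adds elsewhere. The version shown is an assumption for illustration; in Hadoop the version would normally be managed centrally in hadoop-project/pom.xml:

```xml
<!-- javax.activation-api supplies javax.activation.DataSource, which is no
     longer resolved by default on JDK 9+ and was removed entirely in JDK 11
     (JEP 320). Version is illustrative only. -->
<dependency>
  <groupId>javax.activation</groupId>
  <artifactId>javax.activation-api</artifactId>
  <version>1.2.0</version>
</dependency>
```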

> [JDK9] Add missing javax.activation-api dependency
> --
>
> Key: HADOOP-15775
> URL: https://issues.apache.org/jira/browse/HADOOP-15775
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15775.01.patch, HADOOP-15775.02.patch, 
> HADOOP-15775.03.patch, HADOOP-15775.04.patch
>
>
> Many unit tests fail due to missing java.activation module. This failure can 
> be fixed by adding javax.activation-api as third-party dependency.




-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15809) ABFS: better exception handling when making getAccessToken call

2018-10-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637661#comment-16637661
 ] 

Hadoop QA commented on HADOOP-15809:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15809 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942332/HADOOP-15809-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2dab8134c254 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1dc0adf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15286/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15286/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 

[jira] [Commented] (HADOOP-15708) Reading values from Configuration before adding deprecations make it impossible to read value with deprecated key

2018-10-03 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637659#comment-16637659
 ] 

Robert Kanter commented on HADOOP-15708:


Sorry, yeah; that was before the patch.  I was just investigating what the 
current behavior is.  As [~ste...@apache.org] said, we have to be very careful 
about changing this class.  The code seems okay to me, but let's see if 
anyone else has thoughts about the consequences of changing this behavior.

> Reading values from Configuration before adding deprecations make it 
> impossible to read value with deprecated key
> -
>
> Key: HADOOP-15708
> URL: https://issues.apache.org/jira/browse/HADOOP-15708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Minor
> Attachments: HADOOP-15708-testcase.patch, HADOOP-15708.001.patch, 
> HADOOP-15708.002.patch, HADOOP-15708.003.patch
>
>
> Hadoop Common contains a widely used Configuration class.
>  This class can handle deprecations of properties, e.g. if property 'A' gets 
> deprecated with an alternative property key 'B', users can access property 
> values with keys 'A' and 'B'.
>  Unfortunately, this does not work in one case.
>  When a config file is specified (for instance, XML) and a property is read 
> with the config.get() method, the config is loaded from the file at this 
> time. 
> If the deprecation mapping is not yet specified by the time any config value 
> is retrieved and the XML config refers to a deprecated key, then even after 
> the deprecation mapping is specified, the config value can be retrieved with 
> neither the deprecated nor the new key.
>  The attached patch contains a testcase that reproduces this wrong behavior.
> Here are the steps outlined what the testcase does:
>  1. Creates an XML config file with a deprecated property
>  2. Adds the config to the Configuration object
>  3. Retrieves the config with its deprecated key (it does not really matter 
> which property the user gets, could be any)
>  4. Specifies the deprecation rules including the one defined in the config
>  5. Prints and asserts the property retrieved from the config with both the 
> deprecated and the new property keys.
> For reference, here is the log of one execution that actually shows what the 
> issue is:
> {noformat}
> Loaded items: 1
> Looked up property value with name hadoop.zk.address: null
> Looked up property value with name yarn.resourcemanager.zk-address: 
> dummyZkAddress
> Contents of config file: [<configuration>, 
> <property><name>yarn.resourcemanager.zk-address</name><value>dummyZkAddress</value></property>,
>  </configuration>]
> Looked up property value with name hadoop.zk.address: null
> 2018-08-31 10:10:06,484 INFO  Configuration.deprecation 
> (Configuration.java:logDeprecation(1397)) - yarn.resourcemanager.zk-address 
> is deprecated. Instead, use hadoop.zk.address
> Looked up property value with name hadoop.zk.address: null
> Looked up property value with name hadoop.zk.address: null
> java.lang.AssertionError: 
> Expected :dummyZkAddress
> Actual   :null
> {noformat}
> *As is visible from the output and the code, the issue is that whether the 
> config is retrieved with the deprecated or the new key, Configuration always 
> wants to serve the value under the new key.*
>  *If the mapping is not specified before any retrieval has happened, the value 
> is only stored under the deprecated key but not the new key.*
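The ordering pitfall described above can be reproduced with a minimal, self-contained model. {{MiniConf}} is a hypothetical stand-in for Hadoop's Configuration that, like the real class, resolves deprecations only when the file is first loaded:

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecationOrderDemo {
    /** Hypothetical mini-Configuration: deprecations apply at load time. */
    static class MiniConf {
        private final Map<String, String> file = new HashMap<>();         // raw config file
        private final Map<String, String> deprecations = new HashMap<>(); // old -> new key
        private Map<String, String> cache;                                // resolved view

        void addFileProperty(String key, String value) { file.put(key, value); }
        void addDeprecation(String oldKey, String newKey) { deprecations.put(oldKey, newKey); }

        String get(String key) {
            if (cache == null) {  // first read triggers the one-time load
                cache = new HashMap<>();
                for (Map.Entry<String, String> e : file.entrySet()) {
                    String resolved = deprecations.getOrDefault(e.getKey(), e.getKey());
                    cache.put(resolved, e.getValue());
                }
            }
            return cache.get(key);
        }
    }

    public static void main(String[] args) {
        // Reading anything before registering the deprecation loses the value:
        MiniConf bad = new MiniConf();
        bad.addFileProperty("yarn.resourcemanager.zk-address", "dummyZkAddress");
        bad.get("anything");  // load happens here, with no mapping registered
        bad.addDeprecation("yarn.resourcemanager.zk-address", "hadoop.zk.address");
        System.out.println(bad.get("hadoop.zk.address"));   // prints null — the bug

        // Registering the deprecation first resolves correctly:
        MiniConf good = new MiniConf();
        good.addFileProperty("yarn.resourcemanager.zk-address", "dummyZkAddress");
        good.addDeprecation("yarn.resourcemanager.zk-address", "hadoop.zk.address");
        System.out.println(good.get("hadoop.zk.address"));  // prints dummyZkAddress
    }
}
```

The two cases differ only in whether addDeprecation() runs before the first get(), which is exactly the ordering dependency the test case in this issue demonstrates.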






[jira] [Commented] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637632#comment-16637632
 ] 

Hadoop QA commented on HADOOP-15817:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
6s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15817 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942315/HADOOP-15817.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f249e638023d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1dc0adf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15285/testReport/ |
| Max. process+thread count | 308 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15285/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Reuse Object Mapper in KMSJSONReader

[jira] [Updated] (HADOOP-15809) ABFS: better exception handling when making getAccessToken call

2018-10-03 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15809:
-
Status: Patch Available  (was: Open)

> ABFS: better exception handling when making getAccessToken call
> ---
>
> Key: HADOOP-15809
> URL: https://issues.apache.org/jira/browse/HADOOP-15809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15809-001.patch
>
>
> Currently in *getTokenSingleCall()*: if it gets an HTTP failure response, it 
> tries to consume the inputStream in httpUrlConnection, which will *always* lead 
> to an *IOException*, and this exception never gets checked in 
> *AzureADAuthenticator*. 
>  As a result the httpStatus code is never checked in the retry policy of 
> AzureADAuthenticator. That IOException will be caught by AbfsRestOperation, 
> which will keep on retrying.






[jira] [Commented] (HADOOP-15809) ABFS: better exception handling when making getAccessToken call

2018-10-03 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637611#comment-16637611
 ] 

Da Zhou commented on HADOOP-15809:
--

Uploading 001 patch:
 - In *getTokenSingleCall()*, when a failure response is received, consume the 
*errorStream*.
 - Once an *HttpException* is thrown while executing an *AbfsRestOperation*, 
*stop retrying*. The reason is that an HttpException can only be thrown from 
getTokenSingleCall() when the retry policy in AzureADAuthenticator determines 
there is no need to retry.
 - Wrap the *HttpException* in an *AbfsRestOperationException*.

Tests passed:
 Namespace enabled account, using Oauth:
 Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
 Tests run: 307, Failures: 0, Errors: 0, Skipped: 35
 Tests run: 165, Failures: 0, Errors: 0, Skipped: 21

Namespace not enabled account, using Shared Key:
 Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
 Tests run: 307, Failures: 0, Errors: 0, Skipped: 199
 Tests run: 165, Failures: 0, Errors: 0, Skipped: 15
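The first bullet — reading the error stream instead of the input stream on a failure response — can be sketched with plain JDK classes. The /token endpoint and response body below are made up for illustration; this is not the ABFS code itself:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ErrorStreamDemo {
    /** Read a response body without losing the failure case to IOException. */
    static String readResponse(HttpURLConnection conn) throws IOException {
        int status = conn.getResponseCode();
        // For non-2xx statuses getInputStream() throws IOException; the body
        // (if any) is on getErrorStream() instead.
        InputStream in = (status >= 200 && status < 300)
                ? conn.getInputStream() : conn.getErrorStream();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        for (int n; (n = in.read(chunk)) != -1; ) buf.write(chunk, 0, n);
        in.close();
        // Status is preserved, so a retry policy can act on 401/403 etc.
        return status + ":" + buf.toString("UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Local server that always answers 401, standing in for the AAD endpoint.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/token", ex -> {
            byte[] body = "unauthorized".getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(401, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.start();
        try {
            URL url = new URL("http://localhost:"
                    + server.getAddress().getPort() + "/token");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            System.out.println(readResponse(conn));   // prints 401:unauthorized
        } finally {
            server.stop(0);
        }
    }
}
```

With the status code surfaced this way, a caller can decide that a 401/403 is non-retriable instead of retrying blindly on a generic IOException.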

> ABFS: better exception handling when making getAccessToken call
> ---
>
> Key: HADOOP-15809
> URL: https://issues.apache.org/jira/browse/HADOOP-15809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15809-001.patch
>
>
> Currently in *getTokenSingleCall()*: if it gets an HTTP failure response, it 
> tries to consume the inputStream in httpUrlConnection, which will *always* lead 
> to an *IOException*, and this exception never gets checked in 
> *AzureADAuthenticator*. 
>  As a result the httpStatus code is never checked in the retry policy of 
> AzureADAuthenticator. That IOException will be caught by AbfsRestOperation, 
> which will keep on retrying.






[jira] [Updated] (HADOOP-15809) ABFS: better exception handling when making getAccessToken call

2018-10-03 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15809:
-
Description: 
Currently in *getTokenSingleCall()*: if it gets an HTTP failure response, it 
tries to consume the inputStream in httpUrlConnection, which will *always* lead 
to an *IOException*, and this exception never gets checked in 
*AzureADAuthenticator*. 
 As a result the httpStatus code is never checked in the retry policy of 
AzureADAuthenticator. That IOException will be caught by AbfsRestOperation, 
which will keep on retrying.

  was:
Currently in getTokenSingleCall(): if it get a  HTTP failure response, it tries 
to consume inputStream in httpUrlConnection, which will *always* lead to an 
IOException and this exception never get checked in AzureADAuthenticator.  
As a result the httpStatus code is never checked in the retry policy of 
AzureADAuthenticator. And that IOException will be caught by AbfsRestOperation, 
which will keeps on retrying.


> ABFS: better exception handling when making getAccessToken call
> ---
>
> Key: HADOOP-15809
> URL: https://issues.apache.org/jira/browse/HADOOP-15809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15809-001.patch
>
>
> Currently in *getTokenSingleCall()*: if it gets an HTTP failure response, it 
> tries to consume the inputStream in httpUrlConnection, which will *always* lead 
> to an *IOException*, and this exception never gets checked in 
> *AzureADAuthenticator*. 
>  As a result the httpStatus code is never checked in the retry policy of 
> AzureADAuthenticator. That IOException will be caught by AbfsRestOperation, 
> which will keep on retrying.






[jira] [Updated] (HADOOP-15809) ABFS: better exception handling when making getAccessToken call

2018-10-03 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15809:
-
Description: 
Currently in getTokenSingleCall(): if it get a  HTTP failure response, it tries 
to consume inputStream in httpUrlConnection, which will *always* lead to an 
IOException and this exception never get checked in AzureADAuthenticator.  
As a result the httpStatus code is never checked in the retry policy of 
AzureADAuthenticator. And that IOException will be caught by AbfsRestOperation, 
which will keeps on retrying.

  was:Currently getAccessToken throws only IOException and it is never checked 
for cases like 401, 403, which lead to unnecessary retry, this should be fixed.


> ABFS: better exception handling when making getAccessToken call
> ---
>
> Key: HADOOP-15809
> URL: https://issues.apache.org/jira/browse/HADOOP-15809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15809-001.patch
>
>
> Currently in getTokenSingleCall(): if it get a  HTTP failure response, it 
> tries to consume inputStream in httpUrlConnection, which will *always* lead 
> to an IOException and this exception never get checked in 
> AzureADAuthenticator.  
> As a result the httpStatus code is never checked in the retry policy of 
> AzureADAuthenticator. And that IOException will be caught by 
> AbfsRestOperation, which will keeps on retrying.






[jira] [Updated] (HADOOP-15809) ABFS: better exception handling when making getAccessToken call

2018-10-03 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15809:
-
Attachment: HADOOP-15809-001.patch

> ABFS: better exception handling when making getAccessToken call
> ---
>
> Key: HADOOP-15809
> URL: https://issues.apache.org/jira/browse/HADOOP-15809
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15809-001.patch
>
>
> Currently getAccessToken throws only IOException and it is never checked for 
> cases like 401, 403, which lead to unnecessary retry, this should be fixed.






[jira] [Updated] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-15817:
-
Attachment: HADOOP-15817.001.patch

> Reuse Object Mapper in KMSJSONReader
> 
>
> Key: HADOOP-15817
> URL: https://issues.apache.org/jira/browse/HADOOP-15817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
>  Labels: performance
> Attachments: HADOOP-15817.001.patch
>
>
> Paying an expensive cost to construct object mapper deserializer cache.
>  
> {code:title=KMS Server Stack Trace}
> "qtp1926764753-117" #117 prio=5 os_prio=0 tid=0x0321c000 nid=0x1f0bd 
> runnable [0x2b4caabf7000]
>java.lang.Thread.State: RUNNABLE
> at 
> java.lang.reflect.Executable.sharedGetParameterAnnotations(Executable.java:553)
> at 
> java.lang.reflect.Constructor.getParameterAnnotations(Constructor.java:523)
> at 
> org.codehaus.jackson.map.introspect.AnnotatedClass._constructConstructor(AnnotatedClass.java:784)
> at 
> org.codehaus.jackson.map.introspect.AnnotatedClass.resolveCreators(AnnotatedClass.java:327)
> at 
> org.codehaus.jackson.map.introspect.BasicClassIntrospector.classWithCreators(BasicClassIntrospector.java:187)
> at 
> org.codehaus.jackson.map.introspect.BasicClassIntrospector.collectProperties(BasicClassIntrospector.java:157)
> at 
> org.codehaus.jackson.map.introspect.BasicClassIntrospector.forCreation(BasicClassIntrospector.java:119)
> at 
> org.codehaus.jackson.map.introspect.BasicClassIntrospector.forCreation(BasicClassIntrospector.java:16)
> at 
> org.codehaus.jackson.map.DeserializationConfig.introspectForCreation(DeserializationConfig.java:877)
> at 

[jira] [Updated] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-15817:
-
Status: Patch Available  (was: Open)

> Reuse Object Mapper in KMSJSONReader
> 
>
> Key: HADOOP-15817
> URL: https://issues.apache.org/jira/browse/HADOOP-15817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
>  Labels: performance
> Attachments: HADOOP-15817.001.patch
>
>
> Paying an expensive cost to construct object mapper deserializer cache.
>  
> {code:title=KMS Server Stack Trace}
> "qtp1926764753-117" #117 prio=5 os_prio=0 tid=0x0321c000 nid=0x1f0bd runnable [0x2b4caabf7000]
>    java.lang.Thread.State: RUNNABLE
> at java.lang.reflect.Executable.sharedGetParameterAnnotations(Executable.java:553)
> at java.lang.reflect.Constructor.getParameterAnnotations(Constructor.java:523)
> at org.codehaus.jackson.map.introspect.AnnotatedClass._constructConstructor(AnnotatedClass.java:784)
> at org.codehaus.jackson.map.introspect.AnnotatedClass.resolveCreators(AnnotatedClass.java:327)
> at org.codehaus.jackson.map.introspect.BasicClassIntrospector.classWithCreators(BasicClassIntrospector.java:187)
> at org.codehaus.jackson.map.introspect.BasicClassIntrospector.collectProperties(BasicClassIntrospector.java:157)
> at org.codehaus.jackson.map.introspect.BasicClassIntrospector.forCreation(BasicClassIntrospector.java:119)
> at org.codehaus.jackson.map.introspect.BasicClassIntrospector.forCreation(BasicClassIntrospector.java:16)
> at org.codehaus.jackson.map.DeserializationConfig.introspectForCreation(DeserializationConfig.java:877)
> at org.codehaus.jackson.map.deser.BasicDeserializerFactory.createMapDeserializer(BasicDeserializerFactory.java:430)
> at org.codehaus.jackson.map.deser.StdDeserializerProvider._createDeserializer(StdDeserializerProvider.java:380)
> at org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCache2(StdDeserializerProvider.java:310)
> at org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCacheValueDeserializer(StdDeserializerProvider.java:290)
> - locked <0x000752fbbc28> (a java.util.HashMap)
> at org.codehaus.jackson.map.deser.StdDeserializerProvider.findValueDeserializer(StdDeserializerProvider.java:159)
> at org.codehaus.jackson.map.deser.StdDeserializerProvider.findTypedValueDeserializer(StdDeserializerProvider.java:180)
> at org.codehaus.jackson.map.ObjectMapper._findRootDeserializer(ObjectMapper.java:2829)
> at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2728)
> at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1909)
> at org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:52)
> at org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:35)
> at com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474)
> at com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
> at com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
> at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
> at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
> at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> {code}

[jira] [Commented] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Jonathan Eagles (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637387#comment-16637387
 ] 

Jonathan Eagles commented on HADOOP-15817:
--

Similar to HADOOP-15550, but including the fix for the reader as well.
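The change is small: construct the (thread-safe for reads) ObjectMapper once and share it across requests, instead of building a fresh one on every readFrom call, which is what triggers the deserializer-cache construction in the trace. A minimal sketch of the pattern, using a hypothetical CostlyMapper as a stand-in for Jackson's ObjectMapper so the cost is observable:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for org.codehaus.jackson.map.ObjectMapper: the real
// class builds and caches deserializers internally, which is the expensive part.
class CostlyMapper {
    static final AtomicInteger CONSTRUCTIONS = new AtomicInteger();
    CostlyMapper() { CONSTRUCTIONS.incrementAndGet(); }
    Object readValue(String json) { return json; }
}

public class KMSJSONReaderSketch {
    // The fix: one shared instance per JVM instead of one per request.
    private static final CostlyMapper MAPPER = new CostlyMapper();

    static Object readFrom(String entity) {
        return MAPPER.readValue(entity); // no per-request construction
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            readFrom("{\"name\":\"key1\"}");
        }
        System.out.println("constructions=" + CostlyMapper.CONSTRUCTIONS.get());
    }
}
```

With the shared instance, a thousand simulated requests pay the construction cost exactly once; the per-request variant would pay it a thousand times.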

> Reuse Object Mapper in KMSJSONReader
> 
>
> Key: HADOOP-15817
> URL: https://issues.apache.org/jira/browse/HADOOP-15817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
>  Labels: performance
>
> Paying an expensive cost to construct object mapper deserializer cache.
>  

[jira] [Assigned] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles reassigned HADOOP-15817:


Assignee: Jonathan Eagles

> Reuse Object Mapper in KMSJSONReader
> 
>
> Key: HADOOP-15817
> URL: https://issues.apache.org/jira/browse/HADOOP-15817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
>  Labels: performance
>
> Paying an expensive cost to construct object mapper deserializer cache.
>  

[jira] [Updated] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-15817:
-
Labels: performance  (was: )

> Reuse Object Mapper in KMSJSONReader
> 
>
> Key: HADOOP-15817
> URL: https://issues.apache.org/jira/browse/HADOOP-15817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Jonathan Eagles
>Priority: Major
>  Labels: performance
>
> Paying an expensive cost to construct object mapper deserializer cache.
>  

[jira] [Created] (HADOOP-15817) Reuse Object Mapper in KMSJSONReader

2018-10-03 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HADOOP-15817:


 Summary: Reuse Object Mapper in KMSJSONReader
 Key: HADOOP-15817
 URL: https://issues.apache.org/jira/browse/HADOOP-15817
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Reporter: Jonathan Eagles


Paying an expensive cost to construct object mapper deserializer cache.
 

[jira] [Commented] (HADOOP-15808) Harden Token service loader use

2018-10-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637263#comment-16637263
 ] 

Hadoop QA commented on HADOOP-15808:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 107 unchanged - 3 fixed = 108 total (was 110) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942276/HADOOP-15808-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c2df4ab07b5e 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7051bd7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15284/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15284/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15284/testReport/ |
| Max. process+thread count | 1747 (vs. ulimit of 1) |
| modules 

[jira] [Commented] (HADOOP-15814) Maven 3.3.3 unable to parse pom file

2018-10-03 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637195#comment-16637195
 ] 

Wei-Chiu Chuang commented on HADOOP-15814:
--

Thank you [~tasanuma0829]!

> Maven 3.3.3 unable to parse pom file
> 
>
> Key: HADOOP-15814
> URL: https://issues.apache.org/jira/browse/HADOOP-15814
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HADOOP-15814.001.patch
>
>
> Found via HDFS-13952.
> Reproducible on Maven 3.3.3, but not a problem for Maven 3.5.0 and above.
> I had to make the following change in order to compile. Looks like a problem 
> after the ABFS merge.
> {code}
> diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
> index cd38376..4c2c267 100644
> --- a/hadoop-project/pom.xml
> +++ b/hadoop-project/pom.xml
> @@ -1656,7 +1656,9 @@
>            <artifactId>maven-javadoc-plugin</artifactId>
>            <version>${maven-javadoc-plugin.version}</version>
>            <configuration>
> -            <additionalOptions>-Xmaxwarns 1</additionalOptions>
> +            <additionalOptions>
> +              <additionalOption>-Xmaxwarns 1</additionalOption>
> +            </additionalOptions>
>            </configuration>
>          </plugin>
>        </plugins>
> {code}
> Otherwise it gives me this error:
> {quote}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar for parameter 
> additionalOptions: Cannot assign configuration entry 'additionalOptions' with 
> value '-Xmaxwarns 1' of type java.lang.String to property of type 
> java.lang.String[] -> [Help 1]
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #423: YARN-6416. SIGNAL_CMD argument number is wrong

2018-10-03 Thread vbmudalige
GitHub user vbmudalige opened a pull request:

https://github.com/apache/hadoop/pull/423

YARN-6416. SIGNAL_CMD argument number is wrong



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vbmudalige/hadoop YARN-6416

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/423.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #423


commit b2532663babcf4ef4560fb8067b922e61638679e
Author: Vidura Mudalige 
Date:   2018-10-03T15:13:44Z

YARN-6416. SIGNAL_CMD argument number is wrong




---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637046#comment-16637046
 ] 

ASF GitHub Bot commented on HADOOP-15815:
-

GitHub user borisvu opened a pull request:

https://github.com/apache/hadoop/pull/422

Updating insecure version of Jetty to the lattest

Fixes https://issues.apache.org/jira/browse/HADOOP-15815

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/borisvu/hadoop HADOOP-15815

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/422.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #422


commit fdc49996daee83c01361d24e3f16885a42c1f527
Author: Boris Vulikh 
Date:   2018-10-03T13:44:30Z

Updating insecure version of Jetty to the latest




> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1
>Reporter: Boris Vulikh
>Priority: Major
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[GitHub] hadoop pull request #422: Updating insecure version of Jetty to the latest

2018-10-03 Thread borisvu
GitHub user borisvu opened a pull request:

https://github.com/apache/hadoop/pull/422

Updating insecure version of Jetty to the latest

Fixes https://issues.apache.org/jira/browse/HADOOP-15815

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/borisvu/hadoop HADOOP-15815

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/422.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #422


commit fdc49996daee83c01361d24e3f16885a42c1f527
Author: Boris Vulikh 
Date:   2018-10-03T13:44:30Z

Updating insecure version of Jetty to the latest




---




[jira] [Commented] (HADOOP-15791) Remove Ozone related sources from the 3.2 branch

2018-10-03 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637034#comment-16637034
 ] 

Sunil Govindan commented on HADOOP-15791:
-

Hi [~elek].

Thanks for the patch. Jenkins reported an ASF license error 
(hadoop-hdds/common/target). However, I could not find this in branch-3.2. 
Could you please confirm? Thank you.

> Remove Ozone related sources from the 3.2 branch
> 
>
> Key: HADOOP-15791
> URL: https://issues.apache.org/jira/browse/HADOOP-15791
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15791-branch-3.2.002.patch, HADOOP-15791.001.patch
>
>
> As it is discussed at HDDS-341 and written in the original proposal of Ozone 
> merge, we can remove all the ozone/hdds projects from the 3.2 release branch.
> {quote}
>  * On trunk (as opposed to release branches) HDSL will be a separate module 
> in Hadoop's source tree. This will enable the HDSL to work on their trunk and 
> the Hadoop trunk without making releases for every change.
>   * Hadoop's trunk will only build HDSL if a non-default profile is enabled.
>   * When Hadoop creates a release branch, the RM will delete the HDSL module 
> from the branch.
> {quote}






[jira] [Commented] (HADOOP-15811) Optimizations for Java's TLS performance

2018-10-03 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637020#comment-16637020
 ] 

Daryn Sharp commented on HADOOP-15811:
--

Should also note that Oracle recommends using /dev/urandom with their products:

[Avoiding JVM Delays Caused by Random Number 
Generation|https://docs.oracle.com/cd/E13209_01/wlcp/wlss40/configwlss/jvmrand.html]

It is rather sad that the intrinsics were disabled due to a failed unit test on 
Solaris when compiled with Sun's compiler...
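For reference, the settings listed in this issue are JVM launch options rather than API calls, so they cannot be demonstrated directly in code. The minimal sketch below (not from any patch here; the class and method names are illustrative) shows the one piece that is visible at runtime, the configured entropy source, and that SecureRandom works regardless of which source is configured:

```java
import java.security.SecureRandom;

public class EntropyCheck {
    // The recommended settings are launch flags, e.g.
    //   -Djava.security.egd=file:/dev/urandom
    //   -XX:+UseMontgomerySquareIntrinsic -XX:+UseMontgomeryMultiplyIntrinsic
    //   -XX:+UseSquareToLenIntrinsic -XX:+UseMultiplyToLenIntrinsic
    // Only the entropy source shows up as a plain system property.
    static String entropySource() {
        return System.getProperty("java.security.egd", "(jdk default)");
    }

    public static void main(String[] args) {
        System.out.println("egd=" + entropySource());
        // SecureRandom works either way; the egd flag only affects whether
        // reads can block waiting for entropy on /dev/random.
        byte[] buf = new byte[8];
        new SecureRandom().nextBytes(buf);
        System.out.println(buf.length); // 8
    }
}
```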


 

> Optimizations for Java's TLS performance
> 
>
> Key: HADOOP-15811
> URL: https://issues.apache.org/jira/browse/HADOOP-15811
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 1.0.0
>Reporter: Daryn Sharp
>Priority: Major
>
> Java defaults to using /dev/random and disables intrinsic methods used in hot 
> code paths.  Both cause highly synchronized impls to be used that 
> significantly degrade performance.
> * -Djava.security.egd=file:/dev/urandom
> * -XX:+UseMontgomerySquareIntrinsic
> * -XX:+UseMontgomeryMultiplyIntrinsic
> * -XX:+UseSquareToLenIntrinsic
> * -XX:+UseMultiplyToLenIntrinsic
> These settings significantly boost KMS server performance.  Under load, 
> threads are not jammed in the SSLEngine.






[jira] [Commented] (HADOOP-15808) Harden Token service loader use

2018-10-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637017#comment-16637017
 ] 

Steve Loughran commented on HADOOP-15808:
-

HADOOP-15808: patch 002: remove state checks for no class found on decode; 
leave callers to deal with it. 

That's not ideal, as most of the code doesn't bother to handle decoded token == 
null, but it's clear that some bits of code do, and are happy for things to 
return null. Someone would need to go through every single use of 
decodeIdentifier and review/patch it, which is beyond the scope of this patch.

* Improve error text on TestSaslRPC failures (used in tracking down problem)
* fix checkstyle.
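The "catch failures to instantiate a service provider impl and skip it" approach the issue describes could be sketched roughly as below. This is an illustrative stand-in, not the attached patch; the class name, the method, and the exact set of caught errors are assumptions:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.ServiceConfigurationError;
import java.util.ServiceLoader;

// Hypothetical sketch: iterate a ServiceLoader but skip providers that fail
// to load, instead of letting one bad registration break all token support.
public class ResilientLoader {
    public static <T> List<T> loadAll(Class<T> service) {
        List<T> found = new ArrayList<>();
        Iterator<T> it = ServiceLoader.load(service).iterator();
        while (true) {
            try {
                if (!it.hasNext()) {
                    break;
                }
                found.add(it.next());
            } catch (ServiceConfigurationError | LinkageError e) {
                // A provider whose class (or a dependency, e.g. aws-sdk)
                // is missing from the classpath is skipped, not fatal.
            }
        }
        return found;
    }

    public static void main(String[] args) {
        // Runnable has no registered providers in a bare JVM,
        // so the loader simply finds nothing.
        System.out.println(loadAll(Runnable.class).size()); // 0
    }
}
```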


> Harden Token service loader use
> ---
>
> Key: HADOOP-15808
> URL: https://issues.apache.org/jira/browse/HADOOP-15808
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.9.1, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15808-001.patch, HADOOP-15808-002.patch
>
>
> The Hadoop token service loading (identifiers, renewers...) works provided 
> there's no problems loading any registered implementation. If there's a 
> classloading or classcasting problem, the exception raised will stop all 
> token support working; possibly the application not starting.
> This matters for S3A/HADOOP-14556 as things may not load if aws-sdk isn't on 
> the classpath. It probably lurks in the wasb/abfs support too, but things 
> have worked there because the installations with DT support there have always 
> had correctly set up classpaths.
> Fix: do what we did for the FS service loader. Catch failures to instantiate 
> a service provider impl and skip it






[jira] [Updated] (HADOOP-15808) Harden Token service loader use

2018-10-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15808:

Status: Patch Available  (was: Open)

> Harden Token service loader use
> ---
>
> Key: HADOOP-15808
> URL: https://issues.apache.org/jira/browse/HADOOP-15808
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.9.1, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15808-001.patch, HADOOP-15808-002.patch
>
>
> The Hadoop token service loading (identifiers, renewers...) works provided 
> there's no problems loading any registered implementation. If there's a 
> classloading or classcasting problem, the exception raised will stop all 
> token support working; possibly the application not starting.
> This matters for S3A/HADOOP-14556 as things may not load if aws-sdk isn't on 
> the classpath. It probably lurks in the wasb/abfs support too, but things 
> have worked there because the installations with DT support there have always 
> had correctly set up classpaths.
> Fix: do what we did for the FS service loader. Catch failures to instantiate 
> a service provider impl and skip it






[jira] [Updated] (HADOOP-15808) Harden Token service loader use

2018-10-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15808:

Attachment: HADOOP-15808-002.patch

> Harden Token service loader use
> ---
>
> Key: HADOOP-15808
> URL: https://issues.apache.org/jira/browse/HADOOP-15808
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.9.1, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15808-001.patch, HADOOP-15808-002.patch
>
>
> The Hadoop token service loading (identifiers, renewers...) works provided 
> there's no problems loading any registered implementation. If there's a 
> classloading or classcasting problem, the exception raised will stop all 
> token support working; possibly the application not starting.
> This matters for S3A/HADOOP-14556 as things may not load if aws-sdk isn't on 
> the classpath. It probably lurks in the wasb/abfs support too, but things 
> have worked there because the installations with DT support there have always 
> had correctly set up classpaths.
> Fix: do what we did for the FS service loader. Catch failures to instantiate 
> a service provider impl and skip it






[jira] [Commented] (HADOOP-15792) typo in AzureBlobFileSystem.getIsNamespaceEnabeld

2018-10-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637013#comment-16637013
 ] 

Steve Loughran commented on HADOOP-15792:
-

Cherry-picked into the 3.2 branch after I added the (higher priority) 
HADOOP-15795 patch, so that the two branches are consistent. (Tests rerun 
against Azure Amsterdam; the usual timeouts of the largest scale tests, but 
otherwise all well.)

> typo in AzureBlobFileSystem.getIsNamespaceEnabeld
> -
>
> Key: HADOOP-15792
> URL: https://issues.apache.org/jira/browse/HADOOP-15792
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15792-002.patch, HADOOP-15792.001.patch
>
>
> There's a typo in the visible-for-test method 
> {{AzureBlobFileSystem.getIsNamespaceEnabeld}}
> Trivial to fix, just postponing until after the 3.2.x branch






[jira] [Updated] (HADOOP-15792) typo in AzureBlobFileSystem.getIsNamespaceEnabeld

2018-10-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15792:

Fix Version/s: (was: 3.3.0)
   3.2.0

> typo in AzureBlobFileSystem.getIsNamespaceEnabeld
> -
>
> Key: HADOOP-15792
> URL: https://issues.apache.org/jira/browse/HADOOP-15792
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15792-002.patch, HADOOP-15792.001.patch
>
>
> There's a typo in the visible-for-test method 
> {{AzureBlobFileSystem.getIsNamespaceEnabeld}}
> Trivial to fix, just postponing until after the 3.2.x branch






[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-03 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637009#comment-16637009
 ] 

Kihwal Lee commented on HADOOP-15815:
-

We've been internally using 9.3.24.v20180605 and have not seen any issues. I 
think we can safely update it in all 3.x lines.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1
>Reporter: Boris Vulikh
>Priority: Major
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Updated] (HADOOP-15808) Harden Token service loader use

2018-10-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15808:

Status: Open  (was: Patch Available)

> Harden Token service loader use
> ---
>
> Key: HADOOP-15808
> URL: https://issues.apache.org/jira/browse/HADOOP-15808
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.9.1, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15808-001.patch
>
>
> The Hadoop token service loading (identifiers, renewers...) works provided 
> there's no problems loading any registered implementation. If there's a 
> classloading or classcasting problem, the exception raised will stop all 
> token support working; possibly the application not starting.
> This matters for S3A/HADOOP-14556 as things may not load if aws-sdk isn't on 
> the classpath. It probably lurks in the wasb/abfs support too, but things 
> have worked there because the installations with DT support there have always 
> had correctly set up classpaths.
> Fix: do what we did for the FS service loader. Catch failures to instantiate 
> a service provider impl and skip it






[jira] [Commented] (HADOOP-15808) Harden Token service loader use

2018-10-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636928#comment-16636928
 ] 

Steve Loughran commented on HADOOP-15808:
-


The checkstyle warning is minor; the test failures are not.
{code}
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
{code}

I'm going to conclude that making the existing token-check code test for 
null-ness breaks those few bits of code which actually expect a null, and that 
a stricter method is going to have to go in, even if it's just some utility 
wrapper we can put around 95% of the uses of decodeIdentifier in the Hadoop 
code itself.

> Harden Token service loader use
> ---
>
> Key: HADOOP-15808
> URL: https://issues.apache.org/jira/browse/HADOOP-15808
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.9.1, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15808-001.patch
>
>
> The Hadoop token service loading (identifiers, renewers...) works provided 
> there's no problems loading any registered implementation. If there's a 
> classloading or classcasting problem, the exception raised will stop all 
> token support working; possibly the application not starting.
> This matters for S3A/HADOOP-14556 as things may not load if aws-sdk isn't on 
> the classpath. It 

[jira] [Comment Edited] (HADOOP-15813) Enable more reliable SSL connection reuse

2018-10-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636927#comment-16636927
 ] 

Steve Loughran edited comment on HADOOP-15813 at 10/3/18 1:14 PM:
--

FWIW, the abfs connector in the hadoop-azure lib switched to a higher 
performance SSL library than the JVM's own; 



was (Author: ste...@apache.org):
FWIW, the abfs connector in the hadoop-azure lib switched to a higher 
performance SSL library than the JVM's own; 

The checkstyle warning is minor; the test failures are not.
{code}
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
{code}

I'm going to conclude that making the existing token-check code test for 
null-ness breaks those few bits of code which actually expect a null, and that 
a stricter method is going to have to go in, even if it's just some utility 
wrapper we can put around 95% of the uses of decodeIdentifier in the Hadoop 
code itself.

> Enable more reliable SSL connection reuse
> -
>
> Key: HADOOP-15813
> URL: https://issues.apache.org/jira/browse/HADOOP-15813
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15813.patch
>
>
> The java keep-alive cache relies on instance equivalence of the SSL socket 
> 

[jira] [Commented] (HADOOP-15813) Enable more reliable SSL connection reuse

2018-10-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636927#comment-16636927
 ] 

Steve Loughran commented on HADOOP-15813:
-

FWIW, the abfs connector in the hadoop-azure lib switched to a higher 
performance SSL library than the JVM's own; 

The checkstyle warning is minor; the test failures are not.
{code}
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testKerberosServer:692->assertAuthEquals:929
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testNoClientFallbackToSimple:575->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServerWithTokens:622->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testSimpleServer:563->assertAuthEquals:923 
expected:<[SIMPLE]> but was:<[java.lang.IllegalStateException: Unknown/Unloaded 
token identifier for token kind ]>
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
[ERROR]   TestSaslRPC.testTokenOnlyServer:661->assertAuthEquals:929
{code}

I'm going to conclude that making the existing token-check code test for 
null-ness breaks those few bits of code which actually expect a null, and that 
a stricter method is going to have to go in, even if it's just some utility 
wrapper we can put around 95% of the uses of decodeIdentifier in the Hadoop 
code itself.

> Enable more reliable SSL connection reuse
> -
>
> Key: HADOOP-15813
> URL: https://issues.apache.org/jira/browse/HADOOP-15813
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Major
> Attachments: HADOOP-15813.patch
>
>
> The java keep-alive cache relies on instance equivalence of the SSL socket 
> factory.  In many java versions, SSLContext#getSocketFactory always returns a 
> new instance which completely breaks the cache.  Clients flooding a service 
> with lingering per-request connections that 
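A workaround implied by the description is to create the SSLSocketFactory once and reuse that single instance, so the identity-based keep-alive matching can succeed. The sketch below is illustrative only (the class name and double-checked-locking shape are assumptions, not the attached patch):

```java
import java.security.NoSuchAlgorithmException;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

// Hypothetical sketch: one process-wide factory instance, so a keep-alive
// cache keyed on factory identity can match connections across requests,
// instead of a fresh factory (and thus a cache miss) per request.
public final class SharedSslFactory {
    private static volatile SSLSocketFactory instance;

    private SharedSslFactory() { }

    public static SSLSocketFactory get() {
        SSLSocketFactory f = instance;
        if (f == null) {
            synchronized (SharedSslFactory.class) {
                if (instance == null) {
                    try {
                        instance = SSLContext.getDefault().getSocketFactory();
                    } catch (NoSuchAlgorithmException e) {
                        throw new IllegalStateException(e);
                    }
                }
                f = instance;
            }
        }
        return f;
    }

    public static void main(String[] args) {
        // Same cached instance on every call, unlike calling
        // SSLContext#getSocketFactory repeatedly on some JVMs.
        System.out.println(get() == get()); // true
    }
}
```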

[jira] [Commented] (HADOOP-15795) Make HTTPS the default protocol for ABFS

2018-10-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636888#comment-16636888
 ] 

Hudson commented on HADOOP-15795:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15109 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15109/])
HADOOP-15795. Make HTTPS the default protocol for ABFS. Contributed by (stevel: 
rev 7051bd78b17b2666c2fa0f61823920285a060a76)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestClientUrlScheme.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestOauthOverAbfsScheme.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
* (delete) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestOauthFailOverHttp.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/SecureAzureBlobFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/AbfsFileSystemContract.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java


> Make HTTPS the default protocol for ABFS
> 
>
> Key: HADOOP-15795
> URL: https://issues.apache.org/jira/browse/HADOOP-15795
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15795-001.patch, HADOOP-15795-002.patch, 
> HADOOP-15795-003.patch, HADOOP-15795-004.patch, HADOOP-15795-005.patch
>
>
>  HTTPS should be used as the default in ABFS, but we should also provide a 
> configuration key for the user to disable it in non-secure mode.
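
The described behavior can be sketched roughly as below; the class and the exact configuration key name are assumptions for illustration, not necessarily the actual ABFS code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the described behavior: HTTPS by default, with an
// opt-out key for non-secure mode. The key name below is an assumption for
// illustration, not confirmed to be the real ABFS configuration key.
final class AbfsSchemeChooser {
  static final String ALWAYS_USE_HTTPS_KEY = "fs.azure.always.use.https";

  static String scheme(Map<String, String> conf) {
    // Default to "true" when the key is absent, so HTTPS wins by default.
    boolean useHttps =
        Boolean.parseBoolean(conf.getOrDefault(ALWAYS_USE_HTTPS_KEY, "true"));
    return useHttps ? "https" : "http";
  }
}
```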



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15795) Make HTTPS the default protocol for ABFS

2018-10-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15795:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

+1!

committed to branch-3.2+

> Make HTTPS the default protocol for ABFS
> 
>
> Key: HADOOP-15795
> URL: https://issues.apache.org/jira/browse/HADOOP-15795
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15795-001.patch, HADOOP-15795-002.patch, 
> HADOOP-15795-003.patch, HADOOP-15795-004.patch, HADOOP-15795-005.patch
>
>
>  HTTPS should be used as the default in ABFS, but we should also provide a 
> configuration key for the user to disable it in non-secure mode.






[jira] [Updated] (HADOOP-15795) Make HTTPS the default protocol for ABFS

2018-10-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15795:

Summary: Make HTTPS the default protocol for ABFS  (was: Making HTTPS as 
default for ABFS)

> Make HTTPS the default protocol for ABFS
> 
>
> Key: HADOOP-15795
> URL: https://issues.apache.org/jira/browse/HADOOP-15795
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15795-001.patch, HADOOP-15795-002.patch, 
> HADOOP-15795-003.patch, HADOOP-15795-004.patch, HADOOP-15795-005.patch
>
>
>  HTTPS should be used as the default in ABFS, but we should also provide a 
> configuration key for the user to disable it in non-secure mode.






[jira] [Commented] (HADOOP-15814) Maven 3.3.3 unable to parse pom file

2018-10-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636585#comment-16636585
 ] 

Hudson commented on HADOOP-15814:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15105 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15105/])
HADOOP-15814. Maven 3.3.3 unable to parse pom file. Contributed by (tasanuma: 
rev 2626f46691e1e1ad09967d0931a79b95e308c8b8)
* (edit) hadoop-project/pom.xml


> Maven 3.3.3 unable to parse pom file
> 
>
> Key: HADOOP-15814
> URL: https://issues.apache.org/jira/browse/HADOOP-15814
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HADOOP-15814.001.patch
>
>
> Found via HDFS-13952.
> Reproducible on Maven 3.3.3, but not a problem for Maven 3.5.0 and above.
> I had to make the following change in order to compile. Looks like a problem 
> after the ABFS merge.
> {code}
> diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
> index cd38376..4c2c267 100644
> --- a/hadoop-project/pom.xml
> +++ b/hadoop-project/pom.xml
> @@ -1656,7 +1656,9 @@
>            <artifactId>maven-javadoc-plugin</artifactId>
>            <version>${maven-javadoc-plugin.version}</version>
>            <configuration>
> -            <additionalOptions>-Xmaxwarns 1</additionalOptions>
> +            <additionalOptions>
> +              <additionalOption>-Xmaxwarns 1</additionalOption>
> +            </additionalOptions>
>            </configuration>
>          </plugin>
>          <plugin>
> {code}
> Otherwise it gives me this error:
> {quote}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar for parameter 
> additionalOptions: Cannot assign configuration entry 'additionalOptions' with 
> value '-Xmaxwarns 1' of type java.lang.String to property of type 
> java.lang.String[] -> [Help 1]
> {quote}






[jira] [Commented] (HADOOP-15621) S3Guard: Implement time-based (TTL) expiry for Authoritative Directory Listing

2018-10-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636566#comment-16636566
 ] 

Hudson commented on HADOOP-15621:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15104 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15104/])
HADOOP-15621 2/2  S3Guard: Implement time-based (TTL) expiry for (fabbri: rev 
4f752d442b437b3d297232cfdc8b04aa55c53c5e)
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardTtl.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/ExpirableMetadata.java


> S3Guard: Implement time-based (TTL) expiry for Authoritative Directory Listing
> --
>
> Key: HADOOP-15621
> URL: https://issues.apache.org/jira/browse/HADOOP-15621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15621.001.patch, HADOOP-15621.002.patch
>
>
> Similar to HADOOP-13649, I think we should add a TTL (time to live) feature 
> to the Dynamo metadata store (MS) for S3Guard.
> This is a similar concept to an "online algorithm" version of the CLI prune() 
> function, which is the "offline algorithm".
> Why: 
>  1. Self healing (soft state): since we do not implement transactions around 
> modification of the two systems (s3 and metadata store), certain failures can 
> lead to inconsistency between S3 and the metadata store (MS) state. Having a 
> time to live (TTL) on each entry in S3Guard means that any inconsistencies 
> will be time bound. Thus "wait and restart your job" becomes a valid, if 
> ugly, way to get around any issues with FS client failure leaving things in a 
> bad state.
>  2. We could make manual invocation of `hadoop s3guard prune ...` 
> unnecessary, depending on the implementation.
>  3. Makes it possible to fix the problem that dynamo MS prune() doesn't prune 
> directories due to the lack of true modification time.
> How:
>  I think we need a new column in the dynamo table "entry last written time". 
> This is updated each time the entry is written to dynamo.
>  After that we can either
>  1. Have the client simply ignore / elide any entries that are older than the 
> configured TTL.
>  2. Have the client delete entries older than the TTL.
> The issue with #2 is it will increase latency if done inline in the context 
> of an FS operation. We could mitigate this some by using an async helper 
> thread, or probabilistically doing it "some times" to amortize the expense of 
> deleting stale entries (allowing some batching as well).
> Caveats:
>  - Clock synchronization as usual is a concern. Many clusters already keep 
> clocks close enough via NTP. We should at least document the requirement 
> along with the configuration knob that enables the feature.
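
Option 1 above (eliding entries older than the TTL) might look roughly like this sketch; the class, entry shape, and field names are hypothetical, not the S3Guard implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of option 1 (client-side elision): filter out metadata
// entries whose hypothetical "entry last written time" value is older than
// the configured TTL, so stale inconsistencies are time-bound.
final class TtlElision {
  static final class Entry {
    final String path;
    final long lastWrittenMillis; // the proposed new Dynamo column

    Entry(String path, long lastWrittenMillis) {
      this.path = path;
      this.lastWrittenMillis = lastWrittenMillis;
    }
  }

  static List<Entry> elideExpired(List<Entry> entries, long ttlMillis,
      long nowMillis) {
    List<Entry> fresh = new ArrayList<>();
    for (Entry e : entries) {
      if (nowMillis - e.lastWrittenMillis <= ttlMillis) {
        fresh.add(e); // entry is within the TTL window; trust it
      }
    }
    return fresh;
  }
}
```

Option 2 (deleting rather than ignoring stale entries) would add a remote call per expired entry, which is the latency concern noted above.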






[jira] [Updated] (HADOOP-15814) Maven 3.3.3 unable to parse pom file

2018-10-03 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15814:
--
   Resolution: Fixed
Fix Version/s: 3.3.0
   3.2.0
   Status: Resolved  (was: Patch Available)

> Maven 3.3.3 unable to parse pom file
> 
>
> Key: HADOOP-15814
> URL: https://issues.apache.org/jira/browse/HADOOP-15814
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HADOOP-15814.001.patch
>
>
> Found via HDFS-13952.
> Reproducible on Maven 3.3.3, but not a problem for Maven 3.5.0 and above.
> I had to make the following change in order to compile. Looks like a problem 
> after the ABFS merge.
> {code}
> diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
> index cd38376..4c2c267 100644
> --- a/hadoop-project/pom.xml
> +++ b/hadoop-project/pom.xml
> @@ -1656,7 +1656,9 @@
>            <artifactId>maven-javadoc-plugin</artifactId>
>            <version>${maven-javadoc-plugin.version}</version>
>            <configuration>
> -            <additionalOptions>-Xmaxwarns 1</additionalOptions>
> +            <additionalOptions>
> +              <additionalOption>-Xmaxwarns 1</additionalOption>
> +            </additionalOptions>
>            </configuration>
>          </plugin>
>          <plugin>
> {code}
> Otherwise it gives me this error:
> {quote}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar for parameter 
> additionalOptions: Cannot assign configuration entry 'additionalOptions' with 
> value '-Xmaxwarns 1' of type java.lang.String to property of type 
> java.lang.String[] -> [Help 1]
> {quote}






[jira] [Created] (HADOOP-15816) Upgrade Apache Zookeeper version due to security concerns

2018-10-03 Thread Boris Vulikh (JIRA)
Boris Vulikh created HADOOP-15816:
-

 Summary: Upgrade Apache Zookeeper version due to security concerns
 Key: HADOOP-15816
 URL: https://issues.apache.org/jira/browse/HADOOP-15816
 Project: Hadoop Common
  Issue Type: Task
Reporter: Boris Vulikh
 Fix For: 3.1.1


* [CVE-2018-8012|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-8012]
 * 
[CVE-2017-5637|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-5637]

We should upgrade the dependency to version 3.4.11 or the latest, if possible.






[jira] [Updated] (HADOOP-15816) Upgrade Apache Zookeeper version due to security concerns

2018-10-03 Thread Boris Vulikh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Vulikh updated HADOOP-15816:
--
Affects Version/s: 3.1.1
Fix Version/s: (was: 3.1.1)

> Upgrade Apache Zookeeper version due to security concerns
> -
>
> Key: HADOOP-15816
> URL: https://issues.apache.org/jira/browse/HADOOP-15816
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1
>Reporter: Boris Vulikh
>Priority: Major
>
> * 
> [CVE-2018-8012|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-8012]
>  * 
> [CVE-2017-5637|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-5637]
> We should upgrade the dependency to version 3.4.11 or the latest, if possible.






[jira] [Commented] (HADOOP-15814) Maven 3.3.3 unable to parse pom file

2018-10-03 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636558#comment-16636558
 ] 

Takanobu Asanuma commented on HADOOP-15814:
---

Committed to trunk and branch-3.2. Thanks for the contribution, [~jojochuang]!

> Maven 3.3.3 unable to parse pom file
> 
>
> Key: HADOOP-15814
> URL: https://issues.apache.org/jira/browse/HADOOP-15814
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-15814.001.patch
>
>
> Found via HDFS-13952.
> Reproducible on Maven 3.3.3, but not a problem for Maven 3.5.0 and above.
> I had to make the following change in order to compile. Looks like a problem 
> after the ABFS merge.
> {code}
> diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
> index cd38376..4c2c267 100644
> --- a/hadoop-project/pom.xml
> +++ b/hadoop-project/pom.xml
> @@ -1656,7 +1656,9 @@
>            <artifactId>maven-javadoc-plugin</artifactId>
>            <version>${maven-javadoc-plugin.version}</version>
>            <configuration>
> -            <additionalOptions>-Xmaxwarns 1</additionalOptions>
> +            <additionalOptions>
> +              <additionalOption>-Xmaxwarns 1</additionalOption>
> +            </additionalOptions>
>            </configuration>
>          </plugin>
>          <plugin>
> {code}
> Otherwise it gives me this error:
> {quote}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar for parameter 
> additionalOptions: Cannot assign configuration entry 'additionalOptions' with 
> value '-Xmaxwarns 1' of type java.lang.String to property of type 
> java.lang.String[] -> [Help 1]
> {quote}






[jira] [Created] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-03 Thread Boris Vulikh (JIRA)
Boris Vulikh created HADOOP-15815:
-

 Summary: Upgrade Eclipse Jetty version due to security concerns
 Key: HADOOP-15815
 URL: https://issues.apache.org/jira/browse/HADOOP-15815
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.1.1
Reporter: Boris Vulikh


* [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
 * 
[CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
 * 
[CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
 * 
[CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]

We should upgrade the dependency to version 9.3.24 or the latest, if possible.






[jira] [Commented] (HADOOP-15814) Maven 3.3.3 unable to parse pom file

2018-10-03 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636553#comment-16636553
 ] 

Takanobu Asanuma commented on HADOOP-15814:
---

Thanks for catching this issue, [~jojochuang]! The patch corrects the 
format. +1. Will commit it later.

> Maven 3.3.3 unable to parse pom file
> 
>
> Key: HADOOP-15814
> URL: https://issues.apache.org/jira/browse/HADOOP-15814
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-15814.001.patch
>
>
> Found via HDFS-13952.
> Reproducible on Maven 3.3.3, but not a problem for Maven 3.5.0 and above.
> I had to make the following change in order to compile. Looks like a problem 
> after the ABFS merge.
> {code}
> diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
> index cd38376..4c2c267 100644
> --- a/hadoop-project/pom.xml
> +++ b/hadoop-project/pom.xml
> @@ -1656,7 +1656,9 @@
>            <artifactId>maven-javadoc-plugin</artifactId>
>            <version>${maven-javadoc-plugin.version}</version>
>            <configuration>
> -            <additionalOptions>-Xmaxwarns 1</additionalOptions>
> +            <additionalOptions>
> +              <additionalOption>-Xmaxwarns 1</additionalOption>
> +            </additionalOptions>
>            </configuration>
>          </plugin>
>          <plugin>
> {code}
> Otherwise it gives me this error:
> {quote}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar for parameter 
> additionalOptions: Cannot assign configuration entry 'additionalOptions' with 
> value '-Xmaxwarns 1' of type java.lang.String to property of type 
> java.lang.String[] -> [Help 1]
> {quote}






[jira] [Commented] (HADOOP-15621) S3Guard: Implement time-based (TTL) expiry for Authoritative Directory Listing

2018-10-03 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636549#comment-16636549
 ] 

Aaron Fabbri commented on HADOOP-15621:
---

Apologies, my mistake. Should be fixed now.

> S3Guard: Implement time-based (TTL) expiry for Authoritative Directory Listing
> --
>
> Key: HADOOP-15621
> URL: https://issues.apache.org/jira/browse/HADOOP-15621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15621.001.patch, HADOOP-15621.002.patch
>
>
> Similar to HADOOP-13649, I think we should add a TTL (time to live) feature 
> to the Dynamo metadata store (MS) for S3Guard.
> This is a similar concept to an "online algorithm" version of the CLI prune() 
> function, which is the "offline algorithm".
> Why: 
>  1. Self healing (soft state): since we do not implement transactions around 
> modification of the two systems (s3 and metadata store), certain failures can 
> lead to inconsistency between S3 and the metadata store (MS) state. Having a 
> time to live (TTL) on each entry in S3Guard means that any inconsistencies 
> will be time bound. Thus "wait and restart your job" becomes a valid, if 
> ugly, way to get around any issues with FS client failure leaving things in a 
> bad state.
>  2. We could make manual invocation of `hadoop s3guard prune ...` 
> unnecessary, depending on the implementation.
>  3. Makes it possible to fix the problem that dynamo MS prune() doesn't prune 
> directories due to the lack of true modification time.
> How:
>  I think we need a new column in the dynamo table "entry last written time". 
> This is updated each time the entry is written to dynamo.
>  After that we can either
>  1. Have the client simply ignore / elide any entries that are older than the 
> configured TTL.
>  2. Have the client delete entries older than the TTL.
> The issue with #2 is it will increase latency if done inline in the context 
> of an FS operation. We could mitigate this some by using an async helper 
> thread, or probabilistically doing it "some times" to amortize the expense of 
> deleting stale entries (allowing some batching as well).
> Caveats:
>  - Clock synchronization as usual is a concern. Many clusters already keep 
> clocks close enough via NTP. We should at least document the requirement 
> along with the configuration knob that enables the feature.






[GitHub] hadoop pull request #421: YARN-8788. mvn package -Pyarn-ui fails on JDK9

2018-10-03 Thread vbmudalige
Github user vbmudalige commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/421#discussion_r222198499
  
--- Diff: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml ---
@@ -202,6 +202,13 @@
         <groupId>ro.isdc.wro4j</groupId>
         <artifactId>wro4j-maven-plugin</artifactId>
         <version>1.7.9</version>
+        <dependencies>
+          <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-core</artifactId>
+            <version>2.18.0</version>
+          </dependency>
+        </dependencies>
--- End diff --

@aajisaka, I made the changes. Please review.


---




[jira] [Commented] (HADOOP-15795) Making HTTPS as default for ABFS

2018-10-03 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636507#comment-16636507
 ] 

Thomas Marquardt commented on HADOOP-15795:
---

+1

> Making HTTPS as default for ABFS
> 
>
> Key: HADOOP-15795
> URL: https://issues.apache.org/jira/browse/HADOOP-15795
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15795-001.patch, HADOOP-15795-002.patch, 
> HADOOP-15795-003.patch, HADOOP-15795-004.patch, HADOOP-15795-005.patch
>
>
>  HTTPS should be used as the default in ABFS, but we should also provide a 
> configuration key for the user to disable it in non-secure mode.






[jira] [Commented] (HADOOP-15621) S3Guard: Implement time-based (TTL) expiry for Authoritative Directory Listing

2018-10-03 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636502#comment-16636502
 ] 

Wilfred Spiegelenburg commented on HADOOP-15621:


[~gabor.bota] and [~fabbri]: this checkin has broken the build.

Based on the commit message two new files have been missed on checkin:
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/ExpirableMetadata.java
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardTtl.java

Can you please check? I cannot reopen this JIRA, as I do not seem to have 
permission in the hadoop-common project.

> S3Guard: Implement time-based (TTL) expiry for Authoritative Directory Listing
> --
>
> Key: HADOOP-15621
> URL: https://issues.apache.org/jira/browse/HADOOP-15621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15621.001.patch, HADOOP-15621.002.patch
>
>
> Similar to HADOOP-13649, I think we should add a TTL (time to live) feature 
> to the Dynamo metadata store (MS) for S3Guard.
> This is a similar concept to an "online algorithm" version of the CLI prune() 
> function, which is the "offline algorithm".
> Why: 
>  1. Self healing (soft state): since we do not implement transactions around 
> modification of the two systems (s3 and metadata store), certain failures can 
> lead to inconsistency between S3 and the metadata store (MS) state. Having a 
> time to live (TTL) on each entry in S3Guard means that any inconsistencies 
> will be time bound. Thus "wait and restart your job" becomes a valid, if 
> ugly, way to get around any issues with FS client failure leaving things in a 
> bad state.
>  2. We could make manual invocation of `hadoop s3guard prune ...` 
> unnecessary, depending on the implementation.
>  3. Makes it possible to fix the problem that dynamo MS prune() doesn't prune 
> directories due to the lack of true modification time.
> How:
>  I think we need a new column in the dynamo table "entry last written time". 
> This is updated each time the entry is written to dynamo.
>  After that we can either
>  1. Have the client simply ignore / elide any entries that are older than the 
> configured TTL.
>  2. Have the client delete entries older than the TTL.
> The issue with #2 is it will increase latency if done inline in the context 
> of an FS operation. We could mitigate this some by using an async helper 
> thread, or probabilistically doing it "some times" to amortize the expense of 
> deleting stale entries (allowing some batching as well).
> Caveats:
>  - Clock synchronization as usual is a concern. Many clusters already keep 
> clocks close enough via NTP. We should at least document the requirement 
> along with the configuration knob that enables the feature.


