[jira] [Commented] (HDDS-1430) NPE if secure ozone if KMS uri is not defined.

2019-04-29 Thread Hudson (JIRA)


[ https://issues.apache.org/jira/browse/HDDS-1430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16829736#comment-16829736 ]

Hudson commented on HDDS-1430:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16477 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16477/])
HDDS-1430. NPE if secure ozone if KMS uri is not defined. Contributed by (github: rev 95790bb7e5f59a53cd54bc4c7c7fd93d17173e55)
* (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/OzoneKMSUtil.java
* (edit) hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
* (add) hadoop-ozone/client/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneKMSUtil.java


> NPE if secure ozone if KMS uri is not defined.
> --
>
> Key: HDDS-1430
> URL: https://issues.apache.org/jira/browse/HDDS-1430
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> OzoneKMSUtil.getKeyProvider throws a NullPointerException when the KMS URI is not defined.
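For context, the key-provider URI that OzoneKMSUtil resolves comes from the standard Hadoop client property in core-site.xml; when that property is absent there is no URI to build a provider from. A hypothetical configuration (host and port are placeholders, not values from this issue):

```xml
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://http@kms-host:9600/kms</value>
</property>
```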






[jira] [Commented] (HDDS-1430) NPE if secure ozone if KMS uri is not defined.

2019-04-11 Thread Ajay Kumar (JIRA)


[ https://issues.apache.org/jira/browse/HDDS-1430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815855#comment-16815855 ]

Ajay Kumar commented on HDDS-1430:
--

{code}
Exception in thread "main" java.lang.NullPointerException
	at org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
	at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
	at org.apache.hadoop.util.KMSUtil.createKeyProviderFromUri(KMSUtil.java:83)
	at org.apache.hadoop.ozone.client.rpc.OzoneKMSUtil.getKeyProvider(OzoneKMSUtil.java:131)
	at org.apache.hadoop.ozone.client.rpc.RpcClient.getKeyProvider(RpcClient.java:979)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
	at com.sun.proxy.$Proxy17.getKeyProvider(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
	at com.sun.proxy.$Proxy17.getKeyProvider(Unknown Source)
	at org.apache.hadoop.ozone.client.ObjectStore.getKeyProvider(ObjectStore.java:266)
	at org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.getKeyProvider(BasicOzoneClientAdapterImpl.java:281)
	at org.apache.hadoop.fs.ozone.OzoneFileSystem.getKeyProvider(OzoneFileSystem.java:51)
	at org.apache.hadoop.fs.ozone.OzoneFileSystem.getAdditionalTokenIssuers(OzoneFileSystem.java:62)
	at org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:104)
	at org.apache.hadoop.security.token.DelegationTokenIssuer.addDelegationTokens(DelegationTokenIssuer.java:76)
	at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider$$anonfun$org$apache$spark$deploy$security$HadoopFSDelegationTokenProvider$$fetchDelegationTokens$1.apply(HadoopFSDelegationTokenProvider.scala:98)
	at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider$$anonfun$org$apache$spark$deploy$security$HadoopFSDelegationTokenProvider$$fetchDelegationTokens$1.apply(HadoopFSDelegationTokenProvider.scala:96)
	at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
	at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.org$apache$spark$deploy$security$HadoopFSDelegationTokenProvider$$fetchDelegationTokens(HadoopFSDelegationTokenProvider.scala:96)
	at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.obtainDelegationTokens(HadoopFSDelegationTokenProvider.scala:49)
	at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$obtainDelegationTokens$2.apply(HadoopDelegationTokenManager.scala:132)
	at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$obtainDelegationTokens$2.apply(HadoopDelegationTokenManager.scala:130)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
	at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
	at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
	at org.apache.spark.deploy.security.HadoopDelegationTokenManager.obtainDelegationTokens(HadoopDelegationTokenManager.scala:130)
	at org.apache.spark.deploy.yarn.security.YARNHadoopDelegationTokenManager.obtainDelegationTokens(YARNHadoopDelegationTokenManager.scala:59)
	at org.apache.spark.deploy.yarn.Client.setupSecurityToken(Client.scala:309)
	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:1013)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:178)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:186)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:501)
{code}
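The trace shows getKeyProvider being reached with no provider URI configured, so the lookup dereferences a null URI. A minimal, self-contained sketch of the defensive pattern such a fix would apply — guard the missing URI before constructing a provider. All names here are hypothetical stand-ins, not the actual OzoneKMSUtil code, and the real patch may prefer throwing an IOException over returning null:

```java
import java.net.URI;

public class KeyProviderGuard {

    // Hypothetical stand-in for KMSUtil.createKeyProviderFromUri;
    // the real method would contact the KMS at the given URI.
    static Object createKeyProviderFromUri(URI uri) {
        return new Object();
    }

    // Guarded lookup: when no KMS URI is configured, report "no provider"
    // (null) instead of dereferencing the missing value and throwing an NPE.
    // Callers must therefore tolerate a null provider.
    static Object getKeyProvider(String configuredPath) {
        if (configuredPath == null || configuredPath.isEmpty()) {
            return null; // no KMS URI defined -> no provider, no NPE
        }
        return createKeyProviderFromUri(URI.create(configuredPath));
    }

    public static void main(String[] args) {
        // Unconfigured cluster: lookup degrades gracefully.
        assert getKeyProvider(null) == null;
        assert getKeyProvider("") == null;
        // Configured cluster (placeholder URI): provider is created.
        assert getKeyProvider("kms://http@kms-host:9600/kms") != null;
        System.out.println("ok");
    }
}
```

The same contract is what the delegation-token path above relies on: getAdditionalTokenIssuers can skip KMS token collection when the provider is absent rather than crash the Spark client.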