[jira] [Commented] (HDDS-1423) Spark job fails to create ozone rpc client

2019-04-18 Thread Ajay Kumar (JIRA)


[ https://issues.apache.org/jira/browse/HDDS-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821581#comment-16821581 ]

Ajay Kumar commented on HDDS-1423:
--

This issue was due to intermixing of different versions of Hadoop jars.
Closing it!
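
For anyone hitting the same symptom: mixed Hadoop versions on the Spark driver classpath usually show up as classes being loaded from unexpected jars. A minimal diagnostic sketch (not part of this ticket; the class name ClasspathCheck and the class list are illustrative, drawn from the stack trace further down the thread) that prints the jar each class was actually loaded from:

{code:java}
// Hypothetical helper: print which jar each Hadoop/Ozone class is loaded
// from, to spot mixed Hadoop versions. Run it with the same classpath the
// Spark driver uses.
public class ClasspathCheck {
  public static void main(String[] args) throws Exception {
    String[] classes = {
        "org.apache.hadoop.fs.FileSystem",
        "org.apache.hadoop.ipc.Client",
        "org.apache.hadoop.ozone.client.OzoneClientFactory"
    };
    for (String name : classes) {
      Class<?> c = Class.forName(name);
      java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
      System.out.println(name + " -> "
          + (src == null ? "<bootstrap/unknown>" : src.getLocation()));
    }
  }
}
{code}

If these resolve to jars from different Hadoop releases, the classpath is mixed.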

> Spark job fails to create ozone rpc client
> --
>
> Key: HDDS-1423
> URL: https://issues.apache.org/jira/browse/HDDS-1423
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>
> spark job fails to run.






[jira] [Commented] (HDDS-1423) Spark job fails to create ozone rpc client

2019-04-11 Thread Elek, Marton (JIRA)


[ https://issues.apache.org/jira/browse/HDDS-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815119#comment-16815119 ]

Elek, Marton commented on HDDS-1423:


Seems interesting. What are the details? Which Spark and Hadoop versions?
YARN or standalone executor? Which Spark job?
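
The stack trace below answers part of this (SparkPi submitted through the YARN client scheduler). Independent of the Spark setup, one way to narrow it down is to create the Ozone RPC client outside Spark; if that also fails, the problem is OM connectivity or configuration rather than anything Spark-specific. A rough sketch, assuming the hypothetical class name OmSmokeTest and the OM endpoint om:9862 seen in the trace:

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;

// Hypothetical standalone check: create the Ozone RPC client directly,
// without Spark in the picture.
public class OmSmokeTest {
  public static void main(String[] args) throws Exception {
    OzoneConfiguration conf = new OzoneConfiguration();
    // OM endpoint as it appears in the stack trace; adjust for the cluster
    // actually under test.
    conf.set("ozone.om.address", "om:9862");
    try (OzoneClient client = OzoneClientFactory.getRpcClient(conf)) {
      System.out.println("RPC client created: " + client.getObjectStore());
    }
  }
}
{code}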







[jira] [Commented] (HDDS-1423) Spark job fails to create ozone rpc client

2019-04-10 Thread Ajay Kumar (JIRA)


[ https://issues.apache.org/jira/browse/HDDS-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815067#comment-16815067 ]

Ajay Kumar commented on HDDS-1423:
--

{code}
2019-04-10 23:11:46 ERROR OMFailoverProxyProvider:211 - Failed to connect to OM. Attempted 10 retries and 10 failovers
2019-04-10 23:11:46 ERROR OzoneClientFactory:294 - Couldn't create protocol class org.apache.hadoop.ozone.client.rpc.RpcClient exception:
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
    at org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
    at org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:131)
    at org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:95)
    at org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:80)
    at org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:35)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.lambda$createAdapter$1(OzoneClientAdapterFactory.java:66)
    at org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.createAdapter(OzoneClientAdapterFactory.java:116)
    at org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.createAdapter(OzoneClientAdapterFactory.java:62)
    at org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:92)
    at org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:146)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
    at org.apache.spark.deploy.yarn.Client$$anonfun$5.apply(Client.scala:121)
    at org.apache.spark.deploy.yarn.Client$$anonfun$5.apply(Client.scala:121)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.deploy.yarn.Client.<init>(Client.scala:121)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: DestHost:destPort om:9862 , LocalHost:localPort 68987e5f884a/172.20.0.10:0. Failed on local exception: