NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions writing to Hive

2017-02-14 Thread nimrodo
Hi,

I'm trying to write a DataFrame to a Hive partitioned table. This works fine
from spark-shell; however, when I use spark-submit I get the following
exception:

Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean, boolean)
    at java.lang.Class.getMethod(Class.java:1665)
    at org.apache.spark.sql.hive.client.Shim.findMethod(HiveShim.scala:114)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitionsMethod$lzycompute(HiveShim.scala:404)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitionsMethod(HiveShim.scala:403)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitions(HiveShim.scala:455)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(ClientWrapper.scala:562)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply(ClientWrapper.scala:562)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply(ClientWrapper.scala:562)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:281)
    at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:228)
    at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:227)
    at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:270)
    at org.apache.spark.sql.hive.client.ClientWrapper.loadDynamicPartitions(ClientWrapper.scala:561)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:225)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:276)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
    at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:189)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:239)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:221)
    at com.pelephone.TrueCallLoader$.main(TrueCallLoader.scala:175)
    at com.pelephone.TrueCallLoader.main(TrueCallLoader.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
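
For reference, the write itself is just the standard dynamic-partition saveAsTable pattern; a minimal sketch of what the job does (the table, column, and source names below are placeholders, not the actual code):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.hive.HiveContext

object PartitionedWriteSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("PartitionedWriteSketch"))
    val hiveContext = new HiveContext(sc)

    // Dynamic partition inserts need these two Hive settings.
    hiveContext.setConf("hive.exec.dynamic.partition", "true")
    hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

    val df = hiveContext.table("some_source_table") // placeholder source

    // A partitioned saveAsTable is what ends up in
    // ClientWrapper.loadDynamicPartitions in the trace above.
    df.write
      .format("orc")
      .mode(SaveMode.Append)
      .partitionBy("event_date") // placeholder partition column
      .saveAsTable("mydb.my_partitioned_table")
  }
}

Since the same code runs fine from spark-shell, I assume the difference is in the classpath or conf that spark-submit picks up, e.g. which Hive client jars the driver ends up with (spark.sql.hive.metastore.version and spark.sql.hive.metastore.jars), but I may be wrong about that.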

Can you help me find the problem?

Nimrod






Re: writing to hive

2015-10-14 Thread Ted Yu
Can you show your query?

Thanks

> On Oct 13, 2015, at 12:29 AM, Hafiz Mujadid  wrote:
> 
> hi!
> 
> I am following this
> <http://hortonworks.com/hadoop-tutorial/using-hive-with-orc-from-apache-spark/>
> tutorial to read and write from Hive, but I am facing the following exception
> when I run the code.
> [...]




writing to hive

2015-10-13 Thread Hafiz Mujadid
hi!

I am following this
<http://hortonworks.com/hadoop-tutorial/using-hive-with-orc-from-apache-spark/>
tutorial to read and write from Hive, but I am facing the following exception
when I run the code.
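
The code is basically the tutorial's HiveContext/ORC pattern; a minimal sketch of what I am running (the input path and table name are placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.hive.HiveContext

object HiveOrcSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HiveOrcSketch"))

    // The VerifyError below is thrown while this constructor initializes
    // the function registry, before any read or write actually runs.
    val hiveContext = new HiveContext(sc)

    // Write: persist a DataFrame as an ORC-backed Hive table.
    val people = hiveContext.read.json("hdfs://host:9000/data/people.json") // placeholder input
    people.write.format("orc").mode(SaveMode.Overwrite).saveAsTable("people_orc")

    // Read back through Hive.
    hiveContext.sql("SELECT COUNT(*) FROM people_orc").show()
  }
}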

15/10/12 14:57:36 INFO storage.BlockManagerMaster: Registered BlockManager
15/10/12 14:57:38 INFO scheduler.EventLoggingListener: Logging events to hdfs://host:9000/spark/logs/local-1444676256555
Exception in thread "main" java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/spark/sql/catalyst/expressions/Pmod.inputType()Lorg/apache/spark/sql/types/AbstractDataType; @3: areturn
  Reason:
    Type 'org/apache/spark/sql/types/NumericType$' (current frame, stack[0]) is not assignable to 'org/apache/spark/sql/types/AbstractDataType' (from method signature)
  Current Frame:
    bci: @3
    flags: { }
    locals: { 'org/apache/spark/sql/catalyst/expressions/Pmod' }
    stack: { 'org/apache/spark/sql/types/NumericType$' }
  Bytecode:
    000: b200 63b0

    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2595)
    at java.lang.Class.getConstructor0(Class.java:2895)
    at java.lang.Class.getDeclaredConstructor(Class.java:2066)
    at org.apache.spark.sql.catalyst.analysis.FunctionRegistry$$anonfun$4.apply(FunctionRegistry.scala:267)
    at org.apache.spark.sql.catalyst.analysis.FunctionRegistry$$anonfun$4.apply(FunctionRegistry.scala:267)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.sql.catalyst.analysis.FunctionRegistry$.expression(FunctionRegistry.scala:267)
    at org.apache.spark.sql.catalyst.analysis.FunctionRegistry$.<init>(FunctionRegistry.scala:148)
    at org.apache.spark.sql.catalyst.analysis.FunctionRegistry$.<clinit>(FunctionRegistry.scala)
    at org.apache.spark.sql.hive.HiveContext.functionRegistry$lzycompute(HiveContext.scala:414)
    at org.apache.spark.sql.hive.HiveContext.functionRegistry(HiveContext.scala:413)
    at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:39)
    at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:203)
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:72)


Is there any suggestion on how to read from and write to Hive?

thanks


