[ https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295212#comment-14295212 ]

Tim Robertson commented on HIVE-7387:
-------------------------------------

This also affects anyone trying to use a custom UDF from the Hive CLI when the 
UDF depends on methods introduced in later Guava versions.
Suggest reopening this as a valid issue.
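
A minimal sketch of such a UDF, assuming it was compiled against Guava 14 (the 
package and class names here are hypothetical, not from the original report):
{code}
package com.example.hive.udf; // hypothetical package

import com.google.common.hash.Hashing;
import org.apache.hadoop.hive.ql.exec.UDF;

// Compiled against Guava 14, this UDF calls HashFunction.hashInt, which was
// only added in Guava 12. When the Hive CLI puts Hadoop's Guava 11 first on
// the classpath, the call fails at runtime with the same NoSuchMethodError
// shown in the stacktrace below.
public final class Murmur3HashUDF extends UDF {
  public int evaluate(int value) {
    return Hashing.murmur3_32().hashInt(value).asInt();
  }
}
{code}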

> Guava version conflict between hadoop and spark [Spark-Branch]
> --------------------------------------------------------------
>
>                 Key: HIVE-7387
>                 URL: https://issues.apache.org/jira/browse/HIVE-7387
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Chengxiang Li
>            Assignee: Chengxiang Li
>         Attachments: HIVE-7387-spark.patch
>
>
> The Guava conflict happens in the Hive driver compile stage. As the following 
> exception stacktrace shows, the conflict occurs while initializing a Spark RDD 
> in SparkClient: the Hive driver has both Guava 11 (from the Hadoop classpath) 
> and the Spark assembly jar, which bundles Guava 14 classes, on its classpath. 
> Spark invokes HashFunction.hashInt, a method that does not exist in Guava 11, 
> so evidently the Guava 11 version of HashFunction is loaded into the JVM, 
> which leads to a NoSuchMethodError while initializing the Spark RDD.
> {code}
> java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
>       at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
>       at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
>       at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
>       at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
>       at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>       at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
>       at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
>       at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
>       at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
>       at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
>       at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
>       at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
>       at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
>       at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
>       at org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
>       at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
>       at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
>       at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
>       at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
>       at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112)
>       at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527)
>       at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307)
>       at org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204)
>       at org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167)
>       at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32)
>       at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159)
>       at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>       at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
> {code}
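>
> A quick way to confirm which Guava version actually wins on the driver 
> classpath is to probe the loaded HashFunction class. The following is a 
> hypothetical diagnostic sketch, not part of the attached patch:
> {code}
> import com.google.common.hash.HashFunction;
>
> public final class GuavaProbe {
>   public static void main(String[] args) {
>     Class<?> hf = HashFunction.class;
>     boolean hasHashInt;
>     try {
>       // hashInt(int) exists in Guava 12+ but not in Guava 11.
>       hf.getMethod("hashInt", int.class);
>       hasHashInt = true;
>     } catch (NoSuchMethodException e) {
>       hasHashInt = false;
>     }
>     System.out.println("hashInt(int) present: " + hasHashInt);
>     // Shows which jar on the classpath the class was actually loaded from.
>     System.out.println("loaded from: "
>         + hf.getProtectionDomain().getCodeSource().getLocation());
>   }
> }
> {code}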
> NO PRECOMMIT TESTS. This is for the Spark branch only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
