[jira] [Updated] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]

2014-07-10 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7387:
--

Description: 
hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
classpath, queries fail on either the spark engine or the mr engine.

NO PRECOMMIT TESTS. This is for spark branch only.
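
For illustration, a minimal sketch of the failure mode (the class name here is 
hypothetical, not from this issue): code compiled against guava-14.0.1 can call 
HashFunction.hashInt(int), but that method is absent from guava-11.0.2, so if 
the older jar wins on the classpath the call fails at runtime rather than at 
compile time.

{code}
import com.google.common.hash.HashCode;
import com.google.common.hash.HashFunction;
import com.google.common.hash.Hashing;

// Compiles cleanly against guava-14.0.1. If guava-11.0.2 is resolved first
// at runtime, the hashInt call throws java.lang.NoSuchMethodError, since
// guava 11's HashFunction has no hashInt method -- the same failure Spark's
// OpenHashSet hits on this issue.
public class GuavaConflictDemo {
    public static void main(String[] args) {
        HashFunction murmur = Hashing.murmur3_32();
        HashCode code = murmur.hashInt(42);
        System.out.println(code);
    }
}
{code}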

  was:hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
classpath, queries fail on either the spark engine or the mr engine.


> Guava version conflict between hadoop and spark [Spark-Branch]
> --
>
> Key: HIVE-7387
> URL: https://issues.apache.org/jira/browse/HIVE-7387
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chengxiang Li
>
> hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
> depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
> guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
> classpath, queries fail on either the spark engine or the mr engine.
> NO PRECOMMIT TESTS. This is for spark branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]

2014-07-10 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7387:
--

Summary: Guava version conflict between hadoop and spark [Spark-Branch]  
(was: Guava version conflict between hadoop and spark)

> Guava version conflict between hadoop and spark [Spark-Branch]
> --
>
> Key: HIVE-7387
> URL: https://issues.apache.org/jira/browse/HIVE-7387
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chengxiang Li
>
> hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
> depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
> guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
> classpath, queries fail on either the spark engine or the mr engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]

2014-07-10 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-7387:


Description: 
hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
classpath, queries fail on either the spark engine or the mr engine.

java.lang.NoSuchMethodError: 
com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
at 
org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
at 
org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
at 
org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
at 
org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
at 
org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
at 
org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
at 
org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
at 
org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
at 
org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
at 
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112)
at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527)
at 
org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)

NO PRECOMMIT TESTS. This is for spark branch only.

  was:
hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
classpath, queries fail on either the spark engine or the mr engine.

NO PRECOMMIT TESTS. This is for spark branch only.


> Guava version conflict between hadoop and spark [Spark-Branch]
> --
>
> Key: HIVE-7387
> URL: https://issues.apache.org/jira/browse/HIVE-7387
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chengxiang Li
>
> hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
> depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
> guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
> classpath, queries fail on either the spark engine or the mr engine.
> java.lang.NoSuchMethodError: 
> com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
>   at 
> org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
>   at 
> org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
>   at 
> org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
>   at 
> org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>   at 
> org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
>   at 
> org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
>   at 
> org.apache.spark.util.SizeEstimator$.org$apac

[jira] [Updated] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]

2014-07-15 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated HIVE-7387:
--

Description: 
hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
classpath, queries fail on either the spark engine or the mr engine.

{code}
java.lang.NoSuchMethodError: 
com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
at 
org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
at 
org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
at 
org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
at 
org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
at 
org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
at 
org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
at 
org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
at 
org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
at 
org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
at 
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112)
at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527)
at 
org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
{code}

NO PRECOMMIT TESTS. This is for spark branch only.

  was:
hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
classpath, queries fail on either the spark engine or the mr engine.

java.lang.NoSuchMethodError: 
com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
at 
org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
at 
org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
at 
org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
at 
org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
at 
org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
at 
org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
at 
org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
at 
org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(

[jira] [Updated] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]

2014-07-17 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-7387:


Description: 
The guava conflict happens in the hive driver compile stage. As shown in the 
following exception stacktrace, the conflict occurs while initiating a spark 
RDD in SparkClient: the hive driver has both guava 11 from the hadoop 
classpath and the spark assembly jar, which contains guava 14 classes, on its 
classpath. Spark invokes HashFunction.hashInt, a method that does not exist 
in guava 11, so evidently the guava 11 version of HashFunction is loaded into 
the JVM, which leads to a NoSuchMethodError while initiating the spark RDD.

{code}
java.lang.NoSuchMethodError: 
com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
at 
org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
at 
org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
at 
org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
at 
org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
at 
org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
at 
org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
at 
org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
at 
org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
at 
org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
at 
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112)
at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527)
at 
org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167)
at 
org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
{code}
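
Because the application classloader takes a class from whichever jar appears 
first on the classpath, a quick way to confirm which Guava actually supplied 
HashFunction is to print its code source (a diagnostic sketch only; the class 
name is illustrative, not part of this issue):

{code}
import com.google.common.hash.HashFunction;

// Prints the jar the running JVM resolved HashFunction from. With hadoop's
// guava-11.0.2 ahead of the spark assembly on the classpath, this should
// print the guava 11 jar even though Spark was compiled against guava 14.
public class WhichGuava {
    public static void main(String[] args) {
        System.out.println(HashFunction.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation());
    }
}
{code}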

NO PRECOMMIT TESTS. This is for spark branch only.

  was:
hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while spark 
depends on guava-14.0.1.jar. guava-11.0.2 has an API conflict with 
guava-14.0.1; since the Hive CLI currently loads both dependencies into its 
classpath, queries fail on either the spark engine or the mr engine.

{code}
java.lang.NoSuchMethodError: 
com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
at 
org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
at 
org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
at 
org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
at 
org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
at 
org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
at 
org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
at org.apache.spark.storage.BlockManager.put(BlockManager.scala:

[jira] [Updated] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]

2014-07-23 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao updated HIVE-7387:
---

Attachment: HIVE-7387-spark.patch

This patch, provided by [~srowen], may solve the Guava conflict. I tested it 
with Spark v1.0.1.
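
Independent of the patch, a hypothetical fail-fast check (illustrative only; 
this is not what the attached patch does) could probe reflectively for 
HashFunction.hashInt at startup, so a bad classpath fails with a clear message 
instead of deep inside RDD creation:

{code}
import com.google.common.hash.HashFunction;

// Hypothetical startup guard: verify the Guava on the classpath is new
// enough for Spark by checking for HashFunction.hashInt (absent from
// guava 11) before any Spark work is submitted.
public class GuavaVersionCheck {
    public static void main(String[] args) {
        try {
            HashFunction.class.getMethod("hashInt", int.class);
            System.out.println("Guava on the classpath is Spark-compatible");
        } catch (NoSuchMethodException e) {
            throw new IllegalStateException(
                "Guava predates hashInt (likely guava-11.0.2 from hadoop); "
                    + "Spark needs guava-14.0.1 first on the classpath", e);
        }
    }
}
{code}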


> Guava version conflict between hadoop and spark [Spark-Branch]
> --
>
> Key: HIVE-7387
> URL: https://issues.apache.org/jira/browse/HIVE-7387
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
> Attachments: HIVE-7387-spark.patch
>
>
> The guava conflict happens in the hive driver compile stage. As shown in the 
> following exception stacktrace, the conflict occurs while initiating a spark 
> RDD in SparkClient: the hive driver has both guava 11 from the hadoop 
> classpath and the spark assembly jar, which contains guava 14 classes, on 
> its classpath. Spark invokes HashFunction.hashInt, a method that does not 
> exist in guava 11, so evidently the guava 11 version of HashFunction is 
> loaded into the JVM, which leads to a NoSuchMethodError while initiating 
> the spark RDD.
> {code}
> java.lang.NoSuchMethodError: 
> com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
>   at 
> org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
>   at 
> org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
>   at 
> org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
>   at 
> org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>   at 
> org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
>   at 
> org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
>   at 
> org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
>   at 
> org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
>   at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
>   at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
>   at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
>   at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
>   at 
> org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
>   at 
> org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
>   at 
> org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
>   at 
> org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
>   at 
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
>   at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
> at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112)
>   at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527)
>   at 
> org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
> {code}
> NO PRECOMMIT TESTS. This is for spark branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)