[ https://issues.apache.org/jira/browse/HIVE-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15028065#comment-15028065 ]

TerrenceYTQ commented on HIVE-9970:
-----------------------------------

I am hitting the same problem, "ERROR util.Utils: uncaught error in thread 
SparkListenerBus, stopping SparkContext". Has anyone solved it yet?
---My environment: Spark 1.5.2, Hive 1.2.1, built by myself with:
   mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Dscala-2.10 -DskipTests 
clean package -e

-------Error log:
 15/11/26 09:39:40 INFO Configuration.deprecation: mapred.task.is.map is 
deprecated. Instead, use mapreduce.task.ismap
2015-11-26 09:39:40,245 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:40 INFO exec.Utilities: 
Processing alias dc_mf_device_one_check
2015-11-26 09:39:40,245 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:40 INFO exec.Utilities: Adding 
input file 
hdfs://cluster1/user/hive/warehouse/vendorzhhs.db/dc_mf_device_one_check
2015-11-26 09:39:40,735 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:40 INFO log.PerfLogger: 
<PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2015-11-26 09:39:40,735 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:40 INFO exec.Utilities: 
Serializing MapWork via kryo
2015-11-26 09:39:40,902 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:40 INFO log.PerfLogger: 
</PERFLOG method=serializePlan start=1448501980734 end=1448501980902 
duration=168 from=org.apache.hadoop.hive.ql.exec.Utilities>
2015-11-26 09:39:41,248 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:41 INFO storage.MemoryStore: 
ensureFreeSpace(599952) called with curMem=0, maxMem=555755765
2015-11-26 09:39:41,250 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:41 INFO storage.MemoryStore: 
Block broadcast_0 stored as values in memory (estimated size 585.9 KB, free 
529.4 MB)
2015-11-26 09:39:41,429 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:41 INFO storage.MemoryStore: 
ensureFreeSpace(43801) called with curMem=599952, maxMem=555755765
2015-11-26 09:39:41,429 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:41 INFO storage.MemoryStore: 
Block broadcast_0_piece0 stored as bytes in memory (estimated size 42.8 KB, 
free 529.4 MB)
2015-11-26 09:39:41,433 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:41 INFO 
storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 
192.168.0.69:39388 (size: 42.8 KB, free: 530.0 MB)
2015-11-26 09:39:41,437 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:41 INFO spark.SparkContext: 
Created broadcast 0 from hadoopRDD at SparkPlanGenerator.java:188
2015-11-26 09:39:41,441 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:41 ERROR util.Utils: uncaught 
error in thread SparkListenerBus, stopping SparkContext
2015-11-26 09:39:41,441 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - java.lang.AbstractMethodError
2015-11-26 09:39:41,441 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) -        at 
org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:62)
2015-11-26 09:39:41,442 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) -        at 
org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
2015-11-26 09:39:41,442 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) -        at 
org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
2015-11-26 09:39:41,442 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) -        at 
org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:56)
2015-11-26 09:39:41,442 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) -        at 
org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
2015-11-26 09:39:41,442 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) -        at 
org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:79)
2015-11-26 09:39:41,442 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) -        at 
org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1136)
2015-11-26 09:39:41,442 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) -        at 
org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
2015-11-26 09:39:41,467 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(569)) - 15/11/26 09:39:41 INFO 
handler.ContextHandler: stopped 
o.s.j.s.ServletContextHandler{/metrics/json,null}
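
For reference, an AbstractMethodError in SparkListenerBus.onPostEvent is the 
classic symptom of a binary mismatch between the Spark version Hive was 
compiled against and the Spark version actually running. As far as I know, 
Hive 1.2.1 builds against the Spark 1.3.x listener API, and the SparkListener 
interface gained new callbacks by Spark 1.5.x, so Hive's listener class has no 
bodies for them at runtime. Below is a minimal Java sketch of the mechanism 
under that assumption; the class names are made up for illustration and are 
not the real Spark/Hive classes.

// Minimal sketch of how an AbstractMethodError arises. Names here
// (Listener, Impl, Main) are illustrative, not the real Spark/Hive classes.
// The error is a binary-level mismatch, so reproducing it takes two
// separate compile steps.

// ---- Step 1: compile the three files below together and run Main. ----

// Listener.java -- the old API, as the listener implementation saw it:
public interface Listener {
    void onJobStart();
}

// Impl.java -- the listener, compiled against the old API:
public class Impl implements Listener {
    @Override
    public void onJobStart() { System.out.println("job start"); }
}

// Main.java -- stands in for the listener bus:
public class Main {
    public static void main(String[] args) {
        Listener l = new Impl();
        l.onJobStart();   // step 1: works, prints "job start"
    }
}

// ---- Step 2: the runtime upgrades the interface underneath Impl. ----
// Replace Listener.java with a version that adds a callback:
//
//     public interface Listener {
//         void onJobStart();
//         void onBlockUpdated();   // new in the upgraded API
//     }
//
// Change Main to also call l.onBlockUpdated(), then recompile ONLY
// Listener.java and Main.java, keeping the stale Impl.class from step 1
// (e.g. move Impl.java aside first so javac does not rebuild it).
// Running Main now throws java.lang.AbstractMethodError at the
// onBlockUpdated() call -- the same failure mode as in the log above.

If that is indeed the cause here, pairing Hive 1.2.1 with a Spark release it 
was built against (the 1.3.x line, I believe) instead of 1.5.2, or rebuilding 
Hive against the newer Spark, should make the error go away.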

> Hive on spark
> -------------
>
>                 Key: HIVE-9970
>                 URL: https://issues.apache.org/jira/browse/HIVE-9970
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Amithsha
>            Assignee: Tarush Grover
>
> Hi all,
> I recently configured Spark 1.2.0; my environment is Hadoop 2.6.0 and
> Hive 1.1.0. I have tried Hive on Spark, and while executing an INSERT
> INTO I get the following error.
> Query ID = hadoop2_20150313162828_8764adad-a8e4-49da-9ef5-35e4ebd6bc63
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=<number>
> Failed to execute spark task, with exception
> 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create
> spark client.)'
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.spark.SparkTask
> I have added the spark-assembly jar to Hive's lib directory, and also
> registered it in the Hive console with the add jar command, followed by
> these steps:
> set spark.home=/opt/spark-1.2.1/;
> add jar 
> /opt/spark-1.2.1/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar;
> set hive.execution.engine=spark;
> set spark.master=spark://xxxxxxx:7077;
> set spark.eventLog.enabled=true;
> set spark.executor.memory=512m;
> set spark.serializer=org.apache.spark.serializer.KryoSerializer;
> Can anyone suggest a solution?
> Thanks & Regards
> Amithsha



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
