[ https://issues.apache.org/jira/browse/HIVE-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743041#comment-14743041 ]

lgh commented on HIVE-7980:
---------------------------

Hello everyone,
      I followed this 
guide (https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started).
      I also hit this problem on Hadoop 2.6 + Spark 1.3 + Hive 1.2.1; see the 
error log below.
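For context, the session-level setup that the Getting Started guide describes looks roughly like this (a hedged sketch; the spark.master and memory values are illustrative placeholders, not taken from my cluster):

```sql
-- Sketch of the Hive-on-Spark session settings per the Getting Started guide.
-- Values below are illustrative placeholders, not from this environment.
set hive.execution.engine=spark;
set spark.master=yarn-client;
set spark.executor.memory=1g;
```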
  

15/09/14 14:20:25 INFO exec.Utilities: No plan file found: hdfs://bird-cluster/tmp/hive/hadoop/e4c32482-eb6e-445a-bfbe-71b3bb6cafcc/hive_2015-09-14_14-20-11_084_5577536928543129148-1/-mr-10003/7bf965f8-17cb-465f-86e1-3280fccb0f5f/map.xml
15/09/14 14:20:25 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:437)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:430)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:587)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
15/09/14 14:20:25 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1
15/09/14 14:20:25 INFO executor.Executor: Running task 0.1 in stage 0.0 (TID 1)
15/09/14 14:20:25 INFO rdd.HadoopRDD: Input split: Paths:/user/hive/warehouse/temp.db/test/test:0+12 InputFormatClass: org.apache.hadoop.mapred.TextInputFormat
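To make the failure mode concrete: the INFO line "No plan file found" suggests the plan lookup returned null, and the NullPointerException at HiveInputFormat.init would then be an unguarded dereference of that null result. A minimal sketch of that pattern, with hypothetical names standing in for Hive's internals (not Hive's actual code):

```java
// Hypothetical illustration of the null-plan failure mode: a lookup
// that returns null when the serialized plan file (map.xml) is absent,
// followed by an unguarded dereference that throws NullPointerException.
import java.util.HashMap;
import java.util.Map;

public class PlanLookupSketch {
    private static final Map<String, String> planCache = new HashMap<>();

    // Stand-in for the plan lookup: null when no plan file exists at the path.
    static String getPlan(String path) {
        String plan = planCache.get(path);
        if (plan == null) {
            System.out.println("No plan file found: " + path);
        }
        return plan;
    }

    // Stand-in for the init step: dereferences the plan without a null check.
    static int initFromPlan(String path) {
        return getPlan(path).length(); // throws NPE when the plan is absent
    }

    public static void main(String[] args) {
        try {
            initFromPlan("hdfs://example-cluster/tmp/hive/.../map.xml");
        } catch (NullPointerException e) {
            System.out.println("caught java.lang.NullPointerException");
        }
    }
}
```

In the real stack, the executor deserializes map.xml from HDFS before computing the HadoopRDD partition, so a missing or unreadable plan file at that path surfaces only at task time, exactly as in the log above.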


> Hive on spark issue..
> ---------------------
>
>                 Key: HIVE-7980
>                 URL: https://issues.apache.org/jira/browse/HIVE-7980
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, Spark
>    Affects Versions: spark-branch
>         Environment: Test Environment is..
> . hive 0.14.0(spark branch version)
> . spark 
> (http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-assembly-1.1.0-SNAPSHOT-hadoop2.3.0.jar)
> . hadoop 2.4.0 (yarn)
>            Reporter: alton.jung
>            Assignee: Chao Sun
>             Fix For: spark-branch
>
>
> I followed this 
> guide (https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started) 
> and compiled Hive from the spark branch. In the next step I hit the below 
> error.
> (*I ran the Hive query on Beeline, using a simple query with "order 
> by" to trigger the parallel work:
>                    ex) select * from test where id = 1 order by id;
> )
> [Error list is]
> 2014-09-04 02:58:08,796 ERROR spark.SparkClient (SparkClient.java:execute(158)) - Error generating Spark Plan
> java.lang.NullPointerException
>       at org.apache.spark.SparkContext.defaultParallelism(SparkContext.scala:1262)
>       at org.apache.spark.SparkContext.defaultMinPartitions(SparkContext.scala:1269)
>       at org.apache.spark.SparkContext.hadoopRDD$default$5(SparkContext.scala:537)
>       at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:318)
>       at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateRDD(SparkPlanGenerator.java:160)
>       at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generate(SparkPlanGenerator.java:88)
>       at org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:156)
>       at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.submit(SparkSessionImpl.java:52)
>       at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:77)
>       at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
>       at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>       at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
> 2014-09-04 02:58:11,108 ERROR ql.Driver (SessionState.java:printError(696)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
> 2014-09-04 02:58:11,182 INFO  log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=Driver.execute start=1409824527954 end=1409824691182 duration=163228 from=org.apache.hadoop.hive.ql.Driver>
> 2014-09-04 02:58:11,223 INFO  log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
> 2014-09-04 02:58:11,224 INFO  log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=releaseLocks start=1409824691223 end=1409824691224 duration=1 from=org.apache.hadoop.hive.ql.Driver>
> 2014-09-04 02:58:11,306 ERROR operation.Operation (SQLOperation.java:run(199)) - Error running hive query: 
> org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
>       at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:284)
>       at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:146)
>       at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:69)
>       at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:196)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
>       at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:508)
>       at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:208)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>       at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:722)
> 2014-09-04 02:58:11,634 INFO  exec.ListSinkOperator (Operator.java:close(580)) - 47 finished. closing... 
> 2014-09-04 02:58:11,683 INFO  exec.ListSinkOperator (Operator.java:close(598)) - 47 Close done
> 2014-09-04 02:58:12,190 INFO  log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
> 2014-09-04 02:58:12,234 INFO  log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=releaseLocks start=1409824692190 end=1409824692191 duration=1 from=org.apache.hadoop.hive.ql.Driver>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
