[ https://issues.apache.org/jira/browse/HIVE-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246629#comment-14246629 ]
Chengxiang Li commented on HIVE-9078:
-------------------------------------

I made a mistake previously: bucket_map_join_spark4.q failed due to the following error:
{noformat}
2014-12-15 04:48:56,241 ERROR [Executor task launch worker-0]: executor.Executor (Logging.scala:logError(96)) - Exception in task 0.3 in stage 7.0 (TID 15)
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:160)
    at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
    at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:28)
    at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:96)
    at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$2.apply(AsyncRDDActions.scala:115)
    at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$2.apply(AsyncRDDActions.scala:115)
    at org.apache.spark.SparkContext$$anonfun$30.apply(SparkContext.scala:1390)
    at org.apache.spark.SparkContext$$anonfun$30.apply(SparkContext.scala:1390)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:114)
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
    at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
    at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
    at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
    at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
    at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
    at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:149)
    ... 16 more
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:104)
    ... 25 more
{noformat}
The previous golden file contained an unexpected "Status: Failed" line, which hid the failed query; I've created HIVE-9101 to track that.
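The NPE itself is thrown at HashTableLoader.java:104 and wrapped into a HiveException at line 114, so some reference that load() dereferences is null for this bucket map join. I haven't confirmed which one; purely as an illustration (hypothetical class and method names, not the actual HashTableLoader code), a guard of the following shape would at least turn the bare NPE into a descriptive error:
{code:java}
// Illustration only -- hypothetical helper, not the actual HashTableLoader
// code. The idea: validate the references load() dereferences so a missing
// small-table input fails with a descriptive HiveException instead of the
// bare NullPointerException seen at HashTableLoader.java:104.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.metadata.HiveException;

public class SmallTablePathGuard {

  /** Resolve the small-table file under baseDir, failing fast with context. */
  public static Path resolveSmallTableFile(Path baseDir, String fileName)
      throws HiveException {
    if (baseDir == null) {
      throw new HiveException(
          "Small-table base directory is not set for this bucket map join");
    }
    if (fileName == null) {
      throw new HiveException(
          "No small-table file is mapped to the current big-table input file");
    }
    return new Path(baseDir, fileName);
  }
}
{code}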
> Hive should not submit second SparkTask while previous one has failed. [Spark Branch]
> --------------------------------------------------------------------------------------
>
>                 Key: HIVE-9078
>                 URL: https://issues.apache.org/jira/browse/HIVE-9078
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Chengxiang Li
>            Assignee: Chengxiang Li
>              Labels: Spark-M4
>         Attachments: HIVE-9078.1-spark.patch, HIVE-9078.2-spark.patch, HIVE-9078.3-spark.patch
>
>
> {noformat}
> hive> select n_name, c_name from nation, customer where nation.n_nationkey = customer.c_nationkey limit 10;
> Query ID = root_20141211135050_51e5ae15-49a3-4a46-826f-e27ee314ccb2
> Total jobs = 2
> Launching Job 1 out of 2
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Status: Failed
> Launching Job 2 out of 2
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Status: Failed
> OK
> Time taken: 68.53 seconds
> {noformat}
> There are two issues in the CLI output above:
> # For a query that is translated into multiple SparkTasks, Hive should fail right away when a SparkTask fails; the following SparkTasks should not be submitted any more.
> # Failure information should be printed on the Hive console when the query fails.
> The correct CLI output for a failed query:
> {noformat}
> hive> select n_name, c_name from nation, customer where nation.n_nationkey = customer.c_nationkey limit 10;
> Query ID = root_20141211142929_ddb7f205-8422-44b4-96bd-96a1c9291895
> Total jobs = 2
> Launching Job 1 out of 2
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Status: Failed
> FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
> {noformat}
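The fix the quoted description asks for is essentially a sequential submission loop that stops at the first failure and reports it. A minimal sketch of that control flow, using hypothetical Task and SequentialTaskRunner names rather than Hive's actual Driver/SparkTask classes:
{code:java}
// Minimal sketch of the requested behavior, not Hive's actual driver code:
// launch tasks in order, stop at the first failure, and print the failure
// on the console instead of submitting the next job.
import java.util.List;

public class SequentialTaskRunner {

  /** Hypothetical stand-in for an executable task such as SparkTask. */
  public interface Task {
    String getName();
    int execute();  // 0 on success, a non-zero return code on failure
  }

  /** Returns 0 if every task succeeds, else the first failing task's code. */
  public static int runAll(List<Task> tasks) {
    int total = tasks.size();
    for (int i = 0; i < total; i++) {
      Task task = tasks.get(i);
      System.out.println("Launching Job " + (i + 1) + " out of " + total);
      int rc = task.execute();
      if (rc != 0) {
        // Fail right away: report the error and skip the remaining tasks.
        System.out.println("FAILED: Execution Error, return code " + rc
            + " from " + task.getName());
        return rc;
      }
    }
    return 0;
  }
}
{code}
Returning the failing task's return code matches the "return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask" line shown in the corrected CLI output above.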