[ https://issues.apache.org/jira/browse/HIVE-10073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382253#comment-14382253 ]

Jimmy Xiang commented on HIVE-10073:
------------------------------------

[~xuefuz], I think it's an issue on the Hive side. In SparkRecordHandler, we use 
the job conf passed in from Hive, so it should be Hive's responsibility to make 
sure it contains all the needed information.
[~chengxiang li], though I called checkOutputSpecs for both MapWork and 
ReduceWork, I agree with you that it is better to call it in 
SparkPlanGenerator::generate(BaseWork work). Let me upload a new patch.
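
A rough sketch of where that single check could sit (illustrative only, not the actual patch; cloneJobConf(), checkOutputSpecs(), and doGenerate() are assumed names used just to show the shape):
{code:java}
// Illustrative sketch only -- not the real SparkPlanGenerator code.
// generate(BaseWork) is the method named above; cloneJobConf(), checkOutputSpecs()
// and doGenerate() are assumed helpers showing where a single per-work check would go.
private SparkTran generate(BaseWork work) throws Exception {
  JobConf jobConf = cloneJobConf(work);  // this conf is later handed to SparkRecordHandler,
                                         // so it must already carry the output table settings
  checkOutputSpecs(work, jobConf);       // one check point covers MapWork and ReduceWork alike
  return doGenerate(work, jobConf);      // placeholder for the existing translation logic
}
{code}
The point is simply that any BaseWork flowing through generate() gets validated before its conf reaches the Spark side, rather than duplicating the call per work type.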

> Runtime exception when querying HBase with Spark [Spark Branch]
> ---------------------------------------------------------------
>
>                 Key: HIVE-10073
>                 URL: https://issues.apache.org/jira/browse/HIVE-10073
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Jimmy Xiang
>            Assignee: Jimmy Xiang
>             Fix For: spark-branch
>
>         Attachments: HIVE-10073.1-spark.patch
>
>
> When querying HBase with Spark, we got 
> {noformat}
> Caused by: java.lang.IllegalArgumentException: Must specify table name
>     at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:188)
>     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>     at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:276)
>     at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:266)
>     at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(FileSinkOperator.java:331)
> {noformat}
> But it works fine for MapReduce.
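
For reference, the check that fails at the top of that trace is HBase's TableOutputFormat.setConf(), which requires the output table name (TableOutputFormat.OUTPUT_TABLE, i.e. "hbase.mapred.outputtable") to be present in the Configuration it receives. A minimal standalone reproduction of the failure, assuming the HBase client jars are on the classpath:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;

public class TableNameCheck {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // The job conf handed to the Spark-side record handler apparently lacks this property;
    // without hbase.mapred.outputtable, setConf() throws exactly the exception in the trace.
    TableOutputFormat<Object> outputFormat = new TableOutputFormat<Object>();
    try {
      outputFormat.setConf(conf);
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());  // prints "Must specify table name"
    }
  }
}
{code}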



