Chunjun Xiao created SPARK-2706:
-------------------------------------
Summary: Enable Spark to support Hive 0.13
Key: SPARK-2706
URL: https://issues.apache.org/jira/browse/SPARK-2706
Project: Spark
Issue Type: Dependency upgrade
Components: SQL
Affects Versions: 1.0.1
Reporter: Chunjun Xiao
It seems Spark does not work with Hive 0.13 yet. When I compiled Spark against Hive 0.13.1, I got the compile errors shown below; they appear to come from API changes between Hive 0.12 and 0.13 (a sketch of the affected call sites follows the log). So, when can Spark be updated to support Hive 0.13?
Compile errors:
{quote}
[ERROR]
/ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala:180:
type mismatch;
found : String
required: Array[String]
[ERROR] val proc: CommandProcessor =
CommandProcessorFactory.get(tokens(0), hiveconf)
[ERROR] ^
[ERROR]
/ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala:264:
overloaded method constructor TableDesc with alternatives:
(x$1: Class[_ <: org.apache.hadoop.mapred.InputFormat[_, _]],x$2:
Class[_],x$3: java.util.Properties)org.apache.hadoop.hive.ql.plan.TableDesc
and
()org.apache.hadoop.hive.ql.plan.TableDesc
cannot be applied to (Class[org.apache.hadoop.hive.serde2.Deserializer],
Class[(some other)?0(in value tableDesc)(in value tableDesc)], Class[?0(in
value tableDesc)(in value tableDesc)], java.util.Properties)
[ERROR] val tableDesc = new TableDesc(
[ERROR] ^
[ERROR]
/ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:140:
value getPartitionPath is not a member of
org.apache.hadoop.hive.ql.metadata.Partition
[ERROR] val partPath = partition.getPartitionPath
[ERROR] ^
[ERROR]
/ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScan.scala:132:
value appendReadColumnNames is not a member of object
org.apache.hadoop.hive.serde2.ColumnProjectionUtils
[ERROR] ColumnProjectionUtils.appendReadColumnNames(hiveConf,
attributes.map(_.name))
[ERROR] ^
[ERROR]
/ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala:79:
org.apache.hadoop.hive.common.type.HiveDecimal does not have a constructor
[ERROR] new HiveDecimal(bd.underlying())
[ERROR] ^
[ERROR]
/ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala:132:
type mismatch;
found : org.apache.hadoop.fs.Path
required: String
[ERROR]
SparkHiveHadoopWriter.createPathFromString(fileSinkConf.getDirName, conf))
[ERROR] ^
[ERROR]
/ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala:179:
value getExternalTmpFileURI is not a member of
org.apache.hadoop.hive.ql.Context
[ERROR] val tmpLocation = hiveContext.getExternalTmpFileURI(tableLocation)
[ERROR] ^
[ERROR]
/ws/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUdfs.scala:209:
org.apache.hadoop.hive.common.type.HiveDecimal does not have a constructor
[ERROR] case bd: BigDecimal => new HiveDecimal(bd.underlying())
[ERROR] ^
[ERROR] 8 errors found
[DEBUG] Compilation failed (CompilerInterface)
[INFO]
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................ SUCCESS [2.579s]
[INFO] Spark Project Core .............................. SUCCESS [2:39.805s]
[INFO] Spark Project Bagel ............................. SUCCESS [21.148s]
[INFO] Spark Project GraphX ............................ SUCCESS [59.950s]
[INFO] Spark Project ML Library ........................ SUCCESS [1:08.771s]
[INFO] Spark Project Streaming ......................... SUCCESS [1:17.759s]
[INFO] Spark Project Tools ............................. SUCCESS [15.405s]
[INFO] Spark Project Catalyst .......................... SUCCESS [1:17.405s]
[INFO] Spark Project SQL ............................... SUCCESS [1:11.094s]
[INFO] Spark Project Hive .............................. FAILURE [11.121s]
[INFO] Spark Project REPL .............................. SKIPPED
[INFO] Spark Project YARN Parent POM ................... SKIPPED
[INFO] Spark Project YARN Stable API ................... SKIPPED
[INFO] Spark Project Assembly .......................... SKIPPED
[INFO] Spark Project External Twitter .................. SKIPPED
[INFO] Spark Project External Kafka .................... SKIPPED
[INFO] Spark Project External Flume .................... SKIPPED
[INFO] Spark Project External ZeroMQ ................... SKIPPED
[INFO] Spark Project External MQTT ..................... SKIPPED
[INFO] Spark Project Examples .......................... SKIPPED
{quote}
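For reference, these errors all appear to come from API changes between Hive 0.12 (the version Spark currently builds against) and Hive 0.13, rather than anything Spark-specific. Below is a minimal, untested sketch of what the affected call sites would need under the 0.13.1 signatures. The {{Hive13Calls}} object and its parameter names are hypothetical, for illustration only; the Hive method names are the 0.13 replacements as I understand them.

{code:scala}
import java.util.Properties
import scala.collection.JavaConverters._

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hive.common.`type`.HiveDecimal
import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.ql.metadata.Partition
import org.apache.hadoop.hive.ql.plan.TableDesc
import org.apache.hadoop.hive.ql.processors.{CommandProcessor, CommandProcessorFactory}
import org.apache.hadoop.hive.serde2.ColumnProjectionUtils

// Hypothetical helper collecting the Hive 0.13-style call sites (untested sketch).
object Hive13Calls {

  // HiveContext.scala:180 -- CommandProcessorFactory.get now takes the whole
  // token array instead of just the first token:
  def processor(tokens: Array[String], hiveconf: HiveConf): CommandProcessor =
    CommandProcessorFactory.get(tokens, hiveconf)

  // HiveMetastoreCatalog.scala:264 -- TableDesc dropped the Deserializer class
  // from its constructor, leaving (inputFormat, outputFormat, properties), as
  // the "alternatives" listed in the error message show:
  def tableDesc(
      inputFormat: Class[_ <: org.apache.hadoop.mapred.InputFormat[_, _]],
      outputFormat: Class[_],
      properties: Properties): TableDesc =
    new TableDesc(inputFormat, outputFormat, properties)

  // TableReader.scala:140 -- getPartitionPath was removed; in 0.13,
  // Partition.getDataLocation returns the partition Path directly:
  def partitionPath(partition: Partition): Path =
    partition.getDataLocation

  // HiveTableScan.scala:132 -- appendReadColumnNames is no longer public;
  // appendReadColumns takes the column ids and names together:
  def appendReadColumns(conf: Configuration, ids: Seq[Integer], names: Seq[String]): Unit =
    ColumnProjectionUtils.appendReadColumns(conf, ids.asJava, names.asJava)

  // InsertIntoHiveTable.scala:79 / hiveUdfs.scala:209 -- HiveDecimal's public
  // constructor was removed in favor of a factory method:
  def toHiveDecimal(bd: BigDecimal): HiveDecimal =
    HiveDecimal.create(bd.underlying())

  // InsertIntoHiveTable.scala:132 -- FileSinkDesc.getDirName now returns a
  // Path instead of a String, so the call site needs a conversion, e.g.
  // fileSinkConf.getDirName.toString.

  // InsertIntoHiveTable.scala:179 -- getExternalTmpFileURI seems to have been
  // replaced by Context.getExternalTmpPath; I haven't verified the exact
  // 0.13.1 signature.
}
{code}

Since the 0.12 and 0.13 signatures are mutually incompatible, supporting both versions would probably require a small version-specific shim (or reflection) around these call sites rather than straight replacements.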