Hi,
I am using Spark SQL on the 1.1.0 branch.
The following code leads to a scala.MatchError at
org.apache.spark.sql.catalyst.expressions.Cast.cast$lzycompute(Cast.scala:247):
val scm = StructType(inputRDD.schema.fields.init :+
  StructField("list",
    ArrayType(
      StructType(
All values in Hive are always nullable, though you should still not be
seeing this error.
It should be addressed by this patch:
https://github.com/apache/spark/pull/3150
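For readers unfamiliar with the error itself: a scala.MatchError is thrown when a pattern match is applied to a value that none of its cases cover, which is the failure mode at the Cast.scala:247 type match above. The following is a minimal, self-contained sketch (plain Scala, not Spark code; the DataType hierarchy here is a simplified stand-in for Catalyst's) illustrating how an unhandled case produces the error:

```scala
// Simplified stand-in for Catalyst's DataType hierarchy (not the real API).
sealed trait DataType
case object IntType extends DataType
case object StringType extends DataType
case class ArrayType(elementType: DataType, containsNull: Boolean) extends DataType

// An incomplete pattern match, analogous to the one in Cast.scala:247.
def castLabel(dt: DataType): String = dt match {
  case IntType    => "int"
  case StringType => "string"
  // ArrayType is not handled, so matching on it throws scala.MatchError.
}

castLabel(IntType)                               // "int"
// castLabel(ArrayType(IntType, containsNull = true))  // throws scala.MatchError
```

The linked patch makes the cast logic handle the nullability produced by Hive (where all values are nullable), so the match no longer falls through.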
On Fri, Dec 5, 2014 at 2:36 AM, Hao Ren inv...@gmail.com wrote: