[ https://issues.apache.org/jira/browse/SPARK-3033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106557#comment-14106557 ]

pengyanhong edited comment on SPARK-3033 at 8/22/14 6:54 AM:
-------------------------------------------------------------

I modified the file
{quote}
sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUdfs.scala
{quote}
changing the {{eval}} method in the class {{HiveGenericUdf}} as below:
{code}
    while (i < children.length) {
      val idx = i
      deferedObjects(i).asInstanceOf[DeferredObjectAdapter].set(() => {
        children(idx).eval(input)
      })
      // If the evaluated value is a java.math.BigDecimal, convert it to a
      // HiveDecimal so Hive's object inspectors can cast it downstream.
      if (deferedObjects(i).get().isInstanceOf[java.math.BigDecimal]) {
        val decimal = deferedObjects(i).get().asInstanceOf[java.math.BigDecimal]
        val data = new org.apache.hadoop.hive.common.`type`.HiveDecimal(decimal)
        deferedObjects(i).asInstanceOf[DeferredObjectAdapter].set(() => {
          data.asInstanceOf[EvaluatedType]
        })
      }
      i += 1
    }
{code}
I also changed the {{wrap}} method in the trait {{HiveInspectors}}, adding this case:{code}
case b: org.apache.hadoop.hive.common.`type`.HiveDecimal => b
{code}

With these two changes, the issue is fixed.
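The effect of the two changes can be sketched in plain Scala: values that are already in Hive's decimal type pass through, while raw {{java.math.BigDecimal}} values get wrapped before Hive's object inspectors see them. {{HiveDecimalLike}} below is a hypothetical stand-in for {{org.apache.hadoop.hive.common.`type`.HiveDecimal}}, used only so the sketch runs without Hive on the classpath:

```scala
// Hypothetical stand-in for org.apache.hadoop.hive.common.`type`.HiveDecimal,
// here only to keep the sketch self-contained.
case class HiveDecimalLike(value: java.math.BigDecimal)

object WrapSketch {
  // Mirrors the pattern of the fix: an already-wrapped decimal passes
  // through unchanged; a raw java.math.BigDecimal is wrapped so that
  // downstream code expecting the Hive decimal type does not throw a
  // ClassCastException; everything else is left alone.
  def wrap(a: Any): Any = a match {
    case b: HiveDecimalLike      => b
    case d: java.math.BigDecimal => HiveDecimalLike(d)
    case other                   => other
  }

  def main(args: Array[String]): Unit = {
    val raw = new java.math.BigDecimal("123.45")
    println(wrap(raw))                  // wrapped into HiveDecimalLike
    println(wrap(HiveDecimalLike(raw))) // already wrapped: unchanged
  }
}
```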





> [Hive] java.math.BigDecimal cannot be cast to org.apache.hadoop.hive.common.type.HiveDecimal
> --------------------------------------------------------------------------------------------
>
>                 Key: SPARK-3033
>                 URL: https://issues.apache.org/jira/browse/SPARK-3033
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 1.0.2
>            Reporter: pengyanhong
>            Priority: Blocker
>
> Running a complex HiveQL query in yarn-cluster mode fails with the error below:
> {quote}
> 14/08/14 15:05:24 WARN org.apache.spark.Logging$class.logWarning(Logging.scala:70): Loss was due to java.lang.ClassCastException
> java.lang.ClassCastException: java.math.BigDecimal cannot be cast to org.apache.hadoop.hive.common.type.HiveDecimal
>       at org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveJavaObject(JavaHiveDecimalObjectInspector.java:51)
>       at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getHiveDecimal(PrimitiveObjectInspectorUtils.java:1022)
>       at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorConverter$HiveDecimalConverter.convert(PrimitiveObjectInspectorConverter.java:306)
>       at org.apache.hadoop.hive.ql.udf.generic.GenericUDFUtils$ReturnObjectInspectorResolver.convertIfNecessary(GenericUDFUtils.java:179)
>       at org.apache.hadoop.hive.ql.udf.generic.GenericUDFIf.evaluate(GenericUDFIf.java:82)
>       at org.apache.spark.sql.hive.HiveGenericUdf.eval(hiveUdfs.scala:276)
>       at org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:84)
>       at org.apache.spark.sql.catalyst.expressions.MutableProjection.apply(Projection.scala:62)
>       at org.apache.spark.sql.catalyst.expressions.MutableProjection.apply(Projection.scala:51)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>       at org.apache.spark.sql.execution.BroadcastNestedLoopJoin$$anonfun$4.apply(joins.scala:309)
>       at org.apache.spark.sql.execution.BroadcastNestedLoopJoin$$anonfun$4.apply(joins.scala:303)
>       at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:571)
>       at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:571)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>       at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>       at org.apache.spark.scheduler.Task.run(Task.scala:51)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>       at java.lang.Thread.run(Thread.java:662)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)
