Thank you! It works in Spark 1.4.

On Sun, Jun 14, 2015 at 3:51 PM Michael Armbrust <mich...@databricks.com>
wrote:

> Sounds like SPARK-5456 <https://issues.apache.org/jira/browse/SPARK-5456>,
> which is fixed in Spark 1.4.
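>
> Until you can upgrade, one possible workaround (an untested sketch; the
> cast syntax varies by database) is to push a cast down to the database by
> passing a subquery as the dbtable option, so the JDBC source never hands
> Spark a decimal in the first place:
>
>     // Untested sketch: let the database cast the decimal column before
>     // Spark sees it; adjust the cast type/syntax for your database.
>     val account = sqlContext.load("jdbc", Map(
>       "url" -> url, // your JDBC connection URL
>       "dbtable" ->
>         "(select cast(cust_id as double precision) cust_id, customer_type from account) t"))
>     account.registerTempTable("account")
>
> You would need to do the same for customer_activity's cust_id and amount
> columns.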
>
> On Sun, Jun 14, 2015 at 11:57 AM, Sathish Kumaran Vairavelu <
> vsathishkuma...@gmail.com> wrote:
>
>> Hello Everyone,
>>
>> I pulled two tables from a JDBC source and then joined them on the
>> cust_id *decimal* column, with the simple join below. The same join works
>> perfectly in the database, but fails in Spark SQL. I am importing the two
>> tables as data frames, registering each with registerTempTable, and
>> running SQL on top of them. Please let me know what could be causing the
>> error.
>>
>> select b.customer_type, sum(a.amount) total_amount
>> from customer_activity a, account b
>> where a.cust_id = b.cust_id
>> group by b.customer_type
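>>
>> For reference, the tables are loaded and registered roughly like this
>> (the connection URL below is a placeholder for my real one):
>>
>>     // Spark 1.3-style JDBC load; URL is a placeholder
>>     val url = "jdbc:postgresql://host:5432/mydb"
>>
>>     val customerActivity = sqlContext.load("jdbc",
>>       Map("url" -> url, "dbtable" -> "customer_activity"))
>>     customerActivity.registerTempTable("customer_activity")
>>
>>     val account = sqlContext.load("jdbc",
>>       Map("url" -> url, "dbtable" -> "account"))
>>     account.registerTempTable("account")
>>
>>     // The join above then fails at execution time:
>>     sqlContext.sql("""
>>       select b.customer_type, sum(a.amount) total_amount
>>       from customer_activity a, account b
>>       where a.cust_id = b.cust_id
>>       group by b.customer_type""").show()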
>>
>> java.lang.ClassCastException: java.math.BigDecimal cannot be cast to org.apache.spark.sql.types.Decimal
>>         at org.apache.spark.sql.types.Decimal$DecimalIsFractional$.plus(Decimal.scala:330)
>>         at org.apache.spark.sql.catalyst.expressions.Add.eval(arithmetic.scala:127)
>>         at org.apache.spark.sql.catalyst.expressions.Coalesce.eval(nullFunctions.scala:50)
>>         at org.apache.spark.sql.catalyst.expressions.MutableLiteral.update(literals.scala:83)
>>         at org.apache.spark.sql.catalyst.expressions.SumFunction.update(aggregates.scala:571)
>>         at org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$7.apply(Aggregate.scala:163)
>>         at org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$7.apply(Aggregate.scala:147)
>>         at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
>>         at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
>>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>>         at org.apache.spark.scheduler.Task.run(Task.scala:64)
>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:745)
>>
>
>
