[ https://issues.apache.org/jira/browse/SPARK-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davies Liu reassigned SPARK-9832:
---------------------------------

    Assignee: Davies Liu

> TPCDS Q98 Fails
> ---------------
>
>                 Key: SPARK-9832
>                 URL: https://issues.apache.org/jira/browse/SPARK-9832
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Michael Armbrust
>            Assignee: Davies Liu
>            Priority: Blocker
>
> {code}
> select
>   i_item_desc,
>   i_category,
>   i_class,
>   i_current_price,
>   sum(ss_ext_sales_price) as itemrevenue
>   -- sum(ss_ext_sales_price) * 100 / sum(sum(ss_ext_sales_price)) over (partition by i_class) as revenueratio
> from
>   store_sales
>   join item on (store_sales.ss_item_sk = item.i_item_sk)
>   join date_dim on (store_sales.ss_sold_date_sk = date_dim.d_date_sk)
> where
>   i_category in('Jewelry', 'Sports', 'Books')
>   -- and d_date between cast('2001-01-12' as date) and (cast('2001-01-12' as date) + 30)
>   -- and d_date between '2001-01-12' and '2001-02-11'
>   -- and ss_date between '2001-01-12' and '2001-02-11'
>   -- and ss_sold_date_sk between 2451922 and 2451952  -- partition key filter
>   and ss_sold_date_sk between 2451911 and 2451941  -- partition key filter (1 calendar month)
>   and d_date between '2001-01-01' and '2001-01-31'
> group by
>   i_item_id,
>   i_item_desc,
>   i_category,
>   i_class,
>   i_current_price
> order by
>   i_category,
>   i_class,
>   i_item_id,
>   i_item_desc
>   -- revenueratio
> limit 1000
> {code}
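> Running the query fails with: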
> {code}
> Job aborted due to stage failure: Task 11 in stage 62.0 failed 4 times, most recent failure: Lost task 11.3 in stage 62.0 (TID 5289, 10.0.227.73): java.lang.IllegalArgumentException: Unscaled value too large for precision
>       at org.apache.spark.sql.types.Decimal.set(Decimal.scala:76)
>       at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:338)
>       at org.apache.spark.sql.types.Decimal.apply(Decimal.scala)
>       at org.apache.spark.sql.catalyst.expressions.UnsafeRow.getDecimal(UnsafeRow.java:386)
>       at org.apache.spark.sql.catalyst.expressions.JoinedRow.getDecimal(JoinedRow.scala:97)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
>       at org.apache.spark.sql.execution.joins.HashJoin$$anon$1.next(HashJoin.scala:101)
>       at org.apache.spark.sql.execution.joins.HashJoin$$anon$1.next(HashJoin.scala:74)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>       at org.apache.spark.sql.execution.joins.HashJoin$$anon$1.fetchNext(HashJoin.scala:115)
>       at org.apache.spark.sql.execution.joins.HashJoin$$anon$1.hasNext(HashJoin.scala:93)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>       at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:353)
>       at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:587)
>       at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$1.apply(TungstenAggregate.scala:72)
>       at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$1.apply(TungstenAggregate.scala:64)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>       at org.apache.spark.scheduler.Task.run(Task.scala:88)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> {code}
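> The top of the trace points at the precision check in Decimal.set (Decimal.scala:76), reached when UnsafeRow.getDecimal rebuilds a decimal from a stored unscaled value whose digits exceed the declared precision. A minimal sketch of the same overflow, assuming the org.apache.spark.sql.types.Decimal API; the value below is hypothetical, not taken from the failing run:
> {code}
> import org.apache.spark.sql.types.Decimal
>
> // 12345678901L has 11 significant digits, so it cannot be represented
> // with a declared precision of 10. Decimal.set rejects it with
> // java.lang.IllegalArgumentException: Unscaled value too large for precision
> val tooWide = Decimal(12345678901L, 10, 2)
> {code}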


