[ 
https://issues.apache.org/jira/browse/HIVE-19388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16461704#comment-16461704
 ] 

Vihang Karajgaonkar commented on HIVE-19388:
--------------------------------------------

The test {{spark_vectorized_dynamic_partition_pruning.q}} includes a query 
which triggers this code path. The following query from the .q file creates 
an empty hash table (the HashTable file doesn't exist):

{noformat}
select count(*) from srcpart join srcpart_date on (srcpart.ds = 
srcpart_date.ds) join srcpart_hour on (srcpart.hr = srcpart_hour.hr) 
where srcpart_date.`date` = '2008-04-08' and srcpart.hr = 13
{noformat}

The error can be seen in the Spark executor logs. Interestingly, on the master 
branch the query proceeds to the next task even when this task errors out. 
There may be another bug lying there somewhere, unless I am mistaken.

> ClassCastException during VectorMapJoinCommonOperator initialization
> --------------------------------------------------------------------
>
>                 Key: HIVE-19388
>                 URL: https://issues.apache.org/jira/browse/HIVE-19388
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 2.1.1, 2.2.0, 3.0.0, 2.3.2, 3.1.0
>            Reporter: Vihang Karajgaonkar
>            Assignee: Vihang Karajgaonkar
>            Priority: Major
>         Attachments: HIVE-19388.01.patch
>
>
> I see the following exception when a mapjoin operator is being initialized 
> on Hive-on-Spark with vectorization turned on.
> This happens when the hash table is empty. The 
> {{MapJoinTableContainerSerDe#getDefaultEmptyContainer}} method returns a 
> {{HashMapWrapper}}, while the VectorMapJoinOperator expects a 
> {{MapJoinBytesTableContainer}} when {{hive.mapjoin.optimized.hashtable}} is 
> set to true (a minimal illustration follows the stack trace below).
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.exec.persistence.HashMapWrapper cannot be cast to 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerDirectAccess
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.optimized.VectorMapJoinOptimizedHashTable.<init>(VectorMapJoinOptimizedHashTable.java:92)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.optimized.VectorMapJoinOptimizedHashMap.<init>(VectorMapJoinOptimizedHashMap.java:127)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.optimized.VectorMapJoinOptimizedStringHashMap.<init>(VectorMapJoinOptimizedStringHashMap.java:60)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.optimized.VectorMapJoinOptimizedCreateHashTable.createHashTable(VectorMapJoinOptimizedCreateHashTable.java:80)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinCommonOperator.setUpHashTable(VectorMapJoinCommonOperator.java:485)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinCommonOperator.completeInitializationOp(VectorMapJoinCommonOperator.java:461)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.Operator.completeInitialization(Operator.java:471)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:401) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:574) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:526) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:387) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.init(SparkMapRecordHandler.java:109)
>  ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
>  ... 16 more
> {noformat}
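
To make the mismatch concrete, here is a minimal, self-contained Java sketch. The 
types below ({{TableContainer}}, {{DirectAccess}}, {{PlainWrapper}}, {{BytesContainer}}) 
are hypothetical stand-ins for Hive's {{MapJoinTableContainer}}, 
{{MapJoinTableContainerDirectAccess}}, {{HashMapWrapper}} and 
{{MapJoinBytesTableContainer}}; it only illustrates the idea that the empty-container 
fallback has to hand the vectorized optimized path an implementation it can cast, 
which is roughly what a fix around {{MapJoinTableContainerSerDe#getDefaultEmptyContainer}} 
would need to ensure when {{hive.mapjoin.optimized.hashtable}} is true. This is a sketch 
of the idea, not necessarily what the attached patch does.

{noformat}
// Hypothetical stand-ins for Hive's container hierarchy, for illustration only.
interface TableContainer {}                      // plays the role of MapJoinTableContainer
interface DirectAccess extends TableContainer {} // plays the role of MapJoinTableContainerDirectAccess

class PlainWrapper implements TableContainer {}  // plays the role of HashMapWrapper
class BytesContainer implements DirectAccess {}  // plays the role of MapJoinBytesTableContainer

public class EmptyContainerDemo {

    // Plays the role of MapJoinTableContainerSerDe#getDefaultEmptyContainer:
    // when the optimized hash table is enabled, the empty fallback must be a
    // direct-access implementation, not the plain wrapper.
    static TableContainer defaultEmptyContainer(boolean optimizedHashTableEnabled) {
        return optimizedHashTableEnabled ? new BytesContainer() : new PlainWrapper();
    }

    // Plays the role of the cast done when VectorMapJoinOptimizedHashTable is constructed.
    static DirectAccess loadIntoOptimizedHashTable(TableContainer container) {
        return (DirectAccess) container; // throws ClassCastException for PlainWrapper
    }

    public static void main(String[] args) {
        // Current behavior: a plain wrapper handed to the optimized path fails the cast.
        try {
            loadIntoOptimizedHashTable(new PlainWrapper());
        } catch (ClassCastException e) {
            System.out.println("plain wrapper fails: " + e.getMessage());
        }

        // With an optimized-aware empty container the cast succeeds.
        DirectAccess ok = loadIntoOptimizedHashTable(defaultEmptyContainer(true));
        System.out.println("optimized empty container ok: " + ok.getClass().getSimpleName());
    }
}
{noformat}

Running this prints a ClassCastException message for the plain wrapper, mirroring the 
stack trace above, and a successful cast once the empty container is chosen based on 
the optimized-hashtable setting.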


