[ https://issues.apache.org/jira/browse/HIVE-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15253916#comment-15253916 ]

Chaoyu Tang commented on HIVE-13570:
------------------------------------

[~ychena] I wonder if you could add more tests covering combinations of 
hive.ppd.remove.duplicatefilters, hive.cbo.enable, and hive.optimize.ppd, if 
possible. Did the issue only happen when CBO is off? What about when CBO is 
enabled but hive.ppd.remove.duplicatefilters is disabled? Would turning 
hive.optimize.ppd off work around the issue?
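
For reference, a rough sketch of the flag combinations that would cover these questions. Only the flags named above are used; the actual queries and q-file names would of course be up to the patch:

{noformat}
-- Combination 1: the failing case from the description (CBO off, duplicate-filter removal on)
set hive.cbo.enable=false;
set hive.ppd.remove.duplicatefilters=true;

-- Combination 2: CBO on, duplicate-filter removal on
set hive.cbo.enable=true;
set hive.ppd.remove.duplicatefilters=true;

-- Combination 3: CBO on, duplicate-filter removal off
set hive.cbo.enable=true;
set hive.ppd.remove.duplicatefilters=false;

-- Combination 4: CBO off, duplicate-filter removal off
set hive.cbo.enable=false;
set hive.ppd.remove.duplicatefilters=false;

-- Additionally, check whether disabling predicate pushdown works around the issue:
set hive.optimize.ppd=false;
{noformat}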

> Some query with Union all fails when CBO is off
> -----------------------------------------------
>
>                 Key: HIVE-13570
>                 URL: https://issues.apache.org/jira/browse/HIVE-13570
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>            Reporter: Yongzhi Chen
>            Assignee: Yongzhi Chen
>         Attachments: HIVE-13570.1.PATCH
>
>
> Some queries with union all throw an IndexOutOfBoundsException
> when:
> set hive.cbo.enable=false;
> set hive.ppd.remove.duplicatefilters=true;
> The stack trace is:
> {noformat}
> java.lang.IndexOutOfBoundsException: Index: 67, Size: 67
>         at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>         at java.util.ArrayList.get(ArrayList.java:411)
>         at org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcCtx.genColLists(ColumnPrunerProcCtx.java:161)
>         at org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcCtx.handleFilterUnionChildren(ColumnPrunerProcCtx.java:273)
>         at org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory$ColumnPrunerFilterProc.process(ColumnPrunerProcFactory.java:108)
>         at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
>         at org.apache.hadoop.hive.ql.optimizer.ColumnPruner$ColumnPrunerWalker.walk(ColumnPruner.java:172)
>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
>         at org.apache.hadoop.hive.ql.optimizer.ColumnPruner.transform(ColumnPruner.java:135)
>         at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:198)
>         at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10327)
>         at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:192)
>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:432)
>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:305)
>         at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1119)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1167)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1055)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1045)
>         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305)
>         at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:403)
>         at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:419)
>         at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:708)
>         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
> {noformat}



