[ https://issues.apache.org/jira/browse/HIVE-27831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17781316#comment-17781316 ]

Stamatis Zampetakis commented on HIVE-27831:
--------------------------------------------

Running the precommit tests with the CBO fallback disabled leads to ~100 failures 
that can be grouped into the following categories. For each category, we include 
the file name (fname) of one representative failing test case along with the 
SQL query and the exception.
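For reference, the behavior under test roughly corresponds to the following session-level setting (a sketch; the actual precommit runs may configure it differently, e.g. via hive-site.xml):
{code:sql}
-- Disable the fallback so CBO errors surface directly instead of the query
-- being silently re-planned by the legacy optimizer (illustrative session-level
-- setting only; how the precommit runs set it is not shown here).
set hive.cbo.fallback.strategy=NEVER;
{code}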
h3. Union type not supported

fname=annotate_stats_select.q
{code:sql}
explain select CREATE_UNION(0, "hello") from alltypes_orc
{code}
{noformat}
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Union 
type is not supported
{noformat}
h3. TABLESAMPLE not supported

fname=archive_excludeHadoop20.q
{code:sql}
SELECT key FROM harbucket TABLESAMPLE(BUCKET 1 OUT OF 10) SORT BY key
{code}
{noformat}
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Table 
Sample specified for harbucket. Currently we don't support Table Sample clauses 
in CBO, turn off cbo for queries on tableSamples.
{noformat}
h3. Ambiguous column references

fname=ambiguous_col.q
{code:sql}
explain select * from (select a.key, a.* from (select * from src) a join 
(select * from src1) b on (a.key = b.key)) t 
{code}
{noformat}
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Cannot 
add column to RR: a.key => _col1: string due to duplication, see previous 
warnings
{noformat}
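The duplication comes from projecting a.key explicitly next to a.*, which already contains key. A possible rewrite that avoids the duplicate column is sketched below (not validated against the q-file):
{code:sql}
-- a.* already includes a.key, so the explicit a.key projection duplicates it;
-- dropping the explicit column (or listing the columns explicitly) avoids the error
explain select * from (select a.* from (select * from src) a join
(select * from src1) b on (a.key = b.key)) t
{code}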
h3. Filter expression with non-boolean return type

fname=annotate_stats_filter.q
{code:sql}
explain select * from loc_orc where 'foo' 
{code}
{noformat}
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Filter 
expression with non-boolean return type.
{noformat}
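The filter here is the bare string literal 'foo', which CBO refuses to treat as a predicate. An explicitly boolean filter passes this check (a sketch; not necessarily equivalent to whatever implicit conversion the legacy path applied):
{code:sql}
-- explicit boolean predicate instead of a bare string literal
explain select * from loc_orc where 'foo' is not null
{code}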
h3. SELECT alias in HAVING clause not supported

fname=limit_pushdown_negative.q
{code:sql}
explain select value, sum(key) as sum from src group by value having sum > 100 
limit 20
{code}
{noformat}
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: 
Encountered Select alias 'sum' in having clause 'sum > 100' This non standard 
behavior is not supported with cbo on. Turn off cbo for these queries.
{noformat}
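The standard-compliant form repeats the aggregate expression in the HAVING clause instead of referencing the SELECT alias; a sketch of the rewrite:
{code:sql}
-- reference the aggregate itself rather than the alias 'sum'
explain select value, sum(key) as sum from src group by value
having sum(key) > 100 limit 20
{code}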
h3. Unexpected rexnode

fname=nested_column_pruning.q
{code:sql}
EXPLAIN
SELECT count(s1.f6), s5.f16.f18.f19
FROM nested_tbl_1_n1
GROUP BY s5.f16.f18.f19 
{code}
{noformat}
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: 
Unexpected rexnode : org.apache.calcite.rex.RexFieldAccess
{noformat}
fname=udaf_ngrams.q
{code:sql}
SELECT ngrams(sentences(lower(contents)), 1, 100, 1000).estfrequency FROM kafka 
{code}
{noformat}
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: 
Unexpected rexnode : org.apache.calcite.rex.RexInputRef
{noformat}
h3. UNIQUE JOIN not supported

fname=explainuser_2.q
{code:sql}
EXPLAIN FROM UNIQUEJOIN PRESERVE src a_n19 (a_n19.key), PRESERVE src1 b_n15 
(b_n15.key), PRESERVE srcpart c_n4 (c_n4.key) SELECT a_n19.key, b_n15.key, 
c_n4.key 
{code}
{noformat}
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: UNIQUE 
JOIN is currently not supported in CBO, turn off cbo to use UNIQUE JOIN.
{noformat}
h3. DirectSQL exception during partition pruning

fname=materialized_view_authorization_sqlstd.q
{code:sql}
explain select * from db1.testmvtable where year=2020 
{code}
{noformat}
java.lang.RuntimeException: org.apache.hadoop.hive.ql.parse.SemanticException: 
MetaException(message:See previous errors; Error executing SQL query "select 
"PARTITIONS"."PART_ID" from "PARTITIONS"  inner join "TBLS" on 
"PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID"     and "TBLS"."TBL_NAME" = ?   inner 
join "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID"      and "DBS"."NAME" = ? inner 
join "PARTITION_KEY_VALS" "FILTER0" on "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 where "DBS"."CTLG_NAME" 
= ?  and (((case when "FILTER0"."PART_KEY_VAL" <> ? and "TBLS"."TBL_NAME" = ? 
and "DBS"."NAME" = ? and "DBS"."CTLG_NAME" = ? and "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 then 
cast("FILTER0"."PART_KEY_VAL" as decimal(21,0)) else null end) = ?))".Failed to 
execute [select "PARTITIONS"."PART_ID" from "PARTITIONS"  inner join "TBLS" on 
"PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID"     and "TBLS"."TBL_NAME" = ?   inner 
join "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID"      and "DBS"."NAME" = ? inner 
join "PARTITION_KEY_VALS" "FILTER0" on "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 where "DBS"."CTLG_NAME" 
= ?  and (((case when "FILTER0"."PART_KEY_VAL" <> ? and "TBLS"."TBL_NAME" = ? 
and "DBS"."NAME" = ? and "DBS"."CTLG_NAME" = ? and "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 then 
cast("FILTER0"."PART_KEY_VAL" as decimal(21,0)) else null end) = ?))] with 
parameters [testmvtable, db1, hive, __HIVE_DEFAULT_PARTITION__, testmvtable, 
db1, hive, 2020])
 at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:182)
 at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126)
 at org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1321)
 at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570)
 at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:13079)
 at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:465)
 at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
 at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180)
 at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
 at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224)
 at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471)
 at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436)
 at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:430)
 at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:121)
 at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:227)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:257)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:201)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:127)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:425)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:356)
 at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:733)
 at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:703)
 at org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:115)
 at org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
 at org.apache.hadoop.hive.cli.split14.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:62)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:135)
 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
 at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
 at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
 at org.junit.runners.Suite.runChild(Suite.java:128)
 at org.junit.runners.Suite.runChild(Suite.java:27)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
 at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
 at org.apache.hadoop.hive.cli.control.CliAdapter$1$1.evaluate(CliAdapter.java:95)
 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: 
MetaException(message:See previous errors; Error executing SQL query "select 
"PARTITIONS"."PART_ID" from "PARTITIONS"  inner join "TBLS" on 
"PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID"     and "TBLS"."TBL_NAME" = ?   inner 
join "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID"      and "DBS"."NAME" = ? inner 
join "PARTITION_KEY_VALS" "FILTER0" on "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 where "DBS"."CTLG_NAME" 
= ?  and (((case when "FILTER0"."PART_KEY_VAL" <> ? and "TBLS"."TBL_NAME" = ? 
and "DBS"."NAME" = ? and "DBS"."CTLG_NAME" = ? and "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 then 
cast("FILTER0"."PART_KEY_VAL" as decimal(21,0)) else null end) = ?))".Failed to 
execute [select "PARTITIONS"."PART_ID" from "PARTITIONS"  inner join "TBLS" on 
"PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID"     and "TBLS"."TBL_NAME" = ?   inner 
join "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID"      and "DBS"."NAME" = ? inner 
join "PARTITION_KEY_VALS" "FILTER0" on "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 where "DBS"."CTLG_NAME" 
= ?  and (((case when "FILTER0"."PART_KEY_VAL" <> ? and "TBLS"."TBL_NAME" = ? 
and "DBS"."NAME" = ? and "DBS"."CTLG_NAME" = ? and "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 then 
cast("FILTER0"."PART_KEY_VAL" as decimal(21,0)) else null end) = ?))] with 
parameters [testmvtable, db1, hive, __HIVE_DEFAULT_PARTITION__, testmvtable, 
db1, hive, 2020])
 at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.getPartitionsFromServer(PartitionPruner.java:481)
 at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:230)
 at org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable.computePartitionList(RelOptHiveTable.java:480)
 at org.apache.hadoop.hive.ql.optimizer.calcite.rules.HivePartitionPruneRule.perform(HivePartitionPruneRule.java:63)
 at org.apache.hadoop.hive.ql.optimizer.calcite.rules.HivePartitionPruneRule.onMatch(HivePartitionPruneRule.java:46)
 at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
 at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
 at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
 at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
 at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
 at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
 at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
 at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.executeProgram(CalcitePlanner.java:2448)
 at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.executeProgram(CalcitePlanner.java:2407)
 at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyPreJoinOrderingTransforms(CalcitePlanner.java:1945)
 at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1688)
 at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1569)
 at org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131)
 at org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914)
 at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180)
 ... 65 more
Caused by: MetaException(message:See previous errors; Error executing SQL query 
"select "PARTITIONS"."PART_ID" from "PARTITIONS"  inner join "TBLS" on 
"PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID"     and "TBLS"."TBL_NAME" = ?   inner 
join "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID"      and "DBS"."NAME" = ? inner 
join "PARTITION_KEY_VALS" "FILTER0" on "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 where "DBS"."CTLG_NAME" 
= ?  and (((case when "FILTER0"."PART_KEY_VAL" <> ? and "TBLS"."TBL_NAME" = ? 
and "DBS"."NAME" = ? and "DBS"."CTLG_NAME" = ? and "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 then 
cast("FILTER0"."PART_KEY_VAL" as decimal(21,0)) else null end) = ?))".Failed to 
execute [select "PARTITIONS"."PART_ID" from "PARTITIONS"  inner join "TBLS" on 
"PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID"     and "TBLS"."TBL_NAME" = ?   inner 
join "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID"      and "DBS"."NAME" = ? inner 
join "PARTITION_KEY_VALS" "FILTER0" on "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 where "DBS"."CTLG_NAME" 
= ?  and (((case when "FILTER0"."PART_KEY_VAL" <> ? and "TBLS"."TBL_NAME" = ? 
and "DBS"."NAME" = ? and "DBS"."CTLG_NAME" = ? and "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 then 
cast("FILTER0"."PART_KEY_VAL" as decimal(21,0)) else null end) = ?))] with 
parameters [testmvtable, db1, hive, __HIVE_DEFAULT_PARTITION__, testmvtable, 
db1, hive, 2020])
 at org.apache.hadoop.hive.metastore.MetastoreDirectSqlUtils.executeWithArray(MetastoreDirectSqlUtils.java:81)
 at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:2228)
 at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionIdsViaSqlFilter(MetaStoreDirectSql.java:933)
 at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilter(MetaStoreDirectSql.java:700)
 at org.apache.hadoop.hive.metastore.ObjectStore$12.getSqlResult(ObjectStore.java:4130)
 at org.apache.hadoop.hive.metastore.ObjectStore$12.getSqlResult(ObjectStore.java:4121)
 at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:4447)
 at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByExprInternal(ObjectStore.java:4160)
 at org.apache.hadoop.hive.metastore.VerifyingObjectStore.getPartitionsByExpr(VerifyingObjectStore.java:79)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
 at com.sun.proxy.$Proxy57.getPartitionsByExpr(Unknown Source)
 at org.apache.hadoop.hive.metastore.HMSHandler.get_partitions_spec_by_expr(HMSHandler.java:7330)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:98)
 at org.apache.hadoop.hive.metastore.AbstractHMSHandlerProxy.invoke(AbstractHMSHandlerProxy.java:82)
 at com.sun.proxy.$Proxy59.get_partitions_spec_by_expr(Unknown Source)
 at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartitionsSpecByExprInternal(HiveMetaStoreClient.java:2472)
 at org.apache.hadoop.hive.ql.metadata.HiveMetaStoreClientWithLocalCache.getPartitionsSpecByExprInternal(HiveMetaStoreClientWithLocalCache.java:396)
 at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.getPartitionsSpecByExprInternal(SessionHiveMetaStoreClient.java:2288)
 at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsSpecByExpr(HiveMetaStoreClient.java:2484)
 at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.listPartitionsSpecByExpr(SessionHiveMetaStoreClient.java:1346)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:213)
 at com.sun.proxy.$Proxy60.listPartitionsSpecByExpr(Unknown Source)
 at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByExpr(Hive.java:4507)
 at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.getPartitionsFromServer(PartitionPruner.java:457)
 ... 84 more
{noformat}
h3. Stale output in negative tests

Most likely these are trivial failures and we just need to update the out files.

> Set hive.cbo.fallback.strategy to NEVER by default
> --------------------------------------------------
>
>                 Key: HIVE-27831
>                 URL: https://issues.apache.org/jira/browse/HIVE-27831
>             Project: Hive
>          Issue Type: Task
>          Components: CBO
>            Reporter: Stamatis Zampetakis
>            Assignee: Stamatis Zampetakis
>            Priority: Major
>              Labels: pull-request-available
>
> The hive.cbo.fallback.strategy property defines when Hive falls back to the 
> legacy optimizer if an error occurs during the CBO phase.
> At the moment the default value is CONSERVATIVE, the backward-compatible 
> option, which automatically falls back to the legacy optimizer when certain 
> errors occur.
> The legacy optimizer (hive.cbo.enable=false) is soon going to be officially 
> deprecated (HIVE-27830); unofficially it has been treated as such for a long 
> time now.
> To reduce maintenance cost and improve CBO coverage and stability, we should 
> never fall back to the legacy optimizer after a CBO error.
> NEVER should be the default behavior in newer releases; users can still set 
> the property to CONSERVATIVE as a temporary workaround until the CBO error is 
> addressed.
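For reference, the temporary workaround mentioned above would be applied per session roughly as follows (a sketch; it can equally be set in hive-site.xml):
{code:sql}
-- opt back into the old fallback behavior until the underlying CBO error is fixed
set hive.cbo.fallback.strategy=CONSERVATIVE;
{code}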


