[ https://issues.apache.org/jira/browse/HIVE-23965?focusedWorklogId=512241&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-512241 ]
ASF GitHub Bot logged work on HIVE-23965:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 16/Nov/20 09:41
            Start Date: 16/Nov/20 09:41
    Worklog Time Spent: 10m
      Work Description: zabetak commented on a change in pull request #1347:
URL: https://github.com/apache/hive/pull/1347#discussion_r524036093


##########
File path: ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
##########
@@ -1743,7 +1743,9 @@ public void setLocalMapRedErrors(Map<String, List<String>> localMapRedErrors) {

   public String getCurrentDatabase() {
     if (currentDatabase == null) {
-      currentDatabase = DEFAULT_DATABASE_NAME;
+      currentDatabase = sessionConf.getVar(ConfVars.HIVE_CURRENT_DATABASE);

Review comment:
       Finally, I put the data in the docker image under the `default` database and reverted the other changes. Check commit https://github.com/apache/hive/pull/1347/commits/0e0edae65b00b7e670daa0628912f6be2857ba42.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id: (was: 512241)
    Time Spent: 2h 40m  (was: 2.5h)

> Improve plan regression tests using TPCDS30TB metastore dump and custom configs
> -------------------------------------------------------------------------------
>
>                 Key: HIVE-23965
>                 URL: https://issues.apache.org/jira/browse/HIVE-23965
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Stamatis Zampetakis
>            Assignee: Stamatis Zampetakis
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The existing regression tests (HIVE-12586) based on TPC-DS have certain shortcomings:
> * The table statistics do not reflect cardinalities from a specific TPC-DS scale factor (SF). Some tables come from a 30TB dataset, others from a 200GB dataset, and others from a 3GB dataset. This mix leads to plans that may never appear when using an actual TPC-DS dataset.
> * The existing statistics do not contain information about partitions, something that can have a big impact on the resulting plans.
> * The existing regression tests rely more or less on the default configuration (hive-site.xml). In real-life scenarios, though, some of the configurations differ and may impact the choices of the optimizer.
> This issue aims to address the above shortcomings by using a curated TPCDS30TB metastore dump along with some custom Hive configurations.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
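
[Editorial note] For context on the review hunk quoted in the worklog above, here is a minimal, self-contained Java sketch of the config-backed fallback the diff proposed. Per the review comment, that change was ultimately reverted in favor of placing the test data under the `default` database, so this is not the code that was merged. The class name, the "hive.current.database" property key, and the Properties-based stand-in for sessionConf / ConfVars.HIVE_CURRENT_DATABASE are hypothetical simplifications, not actual Hive APIs.

    // Sketch only: simplified stand-in for SessionState.getCurrentDatabase(),
    // illustrating the reverted "read the current database from configuration" idea.
    public class CurrentDatabaseSketch {

      private static final String DEFAULT_DATABASE_NAME = "default";

      // Stand-in for the HiveConf-backed session configuration (hypothetical).
      private final java.util.Properties sessionConf = new java.util.Properties();

      private String currentDatabase;

      public String getCurrentDatabase() {
        if (currentDatabase == null) {
          // Prefer a session-configured database; fall back to "default" when unset,
          // preserving the behavior of the original DEFAULT_DATABASE_NAME assignment.
          String confDb = sessionConf.getProperty("hive.current.database");
          currentDatabase = (confDb == null || confDb.isEmpty()) ? DEFAULT_DATABASE_NAME : confDb;
        }
        return currentDatabase;
      }

      public static void main(String[] args) {
        CurrentDatabaseSketch s = new CurrentDatabaseSketch();
        // With nothing configured, the pre-change behavior ("default") is preserved.
        System.out.println(s.getCurrentDatabase());
      }
    }

The null/empty check keeps the original behavior whenever no override is configured, which is the same effect the PR finally achieved by reverting the change and loading the TPCDS30TB data into the `default` database of the docker image.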